https://bugs.winehq.org/show_bug.cgi?id=47039
--- Comment #3 from Paul Gofman <gofmanp@gmail.com> ---
Created attachment 64210
  --> https://bugs.winehq.org/attachment.cgi?id=64210
Check GLSL version when using ARB_shader_bit_encoding with uvec4.
(In reply to Paul Gofman from comment #2)
> Yes, looks like it. Could you please run the glxinfo command and attach its
> output, and also a compressed output of WINEDEBUG=+d3d?
> I don't yet understand how we get the ARB_shader_bit_encoding extension
> enabled with GLSL version 1.20.
Never mind, I found the capabilities report for this GPU, and it indeed supports ARB_shader_bit_encoding together with GLSL version 1.20 on Mesa. In the error log the compiler recognizes uintBitsToFloat() and even suggests uvec4 as a possible argument type, yet it rejects uvec4 in shader code, since uvec4 is only available since GLSL 1.30. I will send a patch; attaching it here as well in case anyone wants to test it.
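For reference, the GLSL built-in at issue here performs a pure bit reinterpretation (no numeric conversion). A minimal Python sketch of the same operation, assuming IEEE-754 single-precision floats (the function name is mine, not part of any Wine or Mesa API):

```python
import struct

def uint_bits_to_float(u):
    """Reinterpret a 32-bit unsigned integer's bit pattern as an
    IEEE-754 float, mirroring GLSL's uintBitsToFloat() from
    ARB_shader_bit_encoding."""
    return struct.unpack('<f', struct.pack('<I', u & 0xFFFFFFFF))[0]

# 0x3F800000 is the bit pattern of 1.0f
print(uint_bits_to_float(0x3F800000))  # -> 1.0
```

On GLSL 1.20 the extension can supply this built-in for scalar and float vector types, but the uvec4 argument form needs the unsigned integer vector types introduced in GLSL 1.30, which is the mismatch the attached patch checks for.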