The following tests fail on 32-bit Mesa:
ERROR/packed_pixels: ERROR:packed_pixels:Compare: non-integer comparison: 4, 0, 72, 0.007874: 0.566929 == 0.500000: not equal.
Additionally, the following test fails intermittently, and has been disabled in Intel's CI:
INFO/packed_depth_stencil_init: INFO:packed_depth_stencil_init:Target: 00009101
ERROR:packed_depth_stencil_init:glGetTexLevelParameterfv: Parameter is not 0
This basically boils down to:
    _mesa_lroundevenf(0.571428597f * 0xffffffffu)

returning wildly different results on 64-bit and 32-bit:

    64-bit: result = 0x92492500
    32-bit: result = 0x80000000
This happens even if I use the lrintf() path directly instead of the SSE intrinsics.
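That's easy to confirm with a standalone sketch (mine, not Mesa code), using the exact expression from above:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
       /* Exactly 0x92492500 (2454267136.0f) -- too big for a 32-bit long. */
       float x = 0.571428597f * 0xffffffffu;

       /* On 32-bit systems long is 32 bits, so lrintf() overflows and (on
        * x86) comes back as 0x80000000.  llrintf() returns a 64-bit
        * long long, so it prints 0x92492500 on both. */
       printf("lrintf:  0x%lx\n",  (unsigned long)lrintf(x));
       printf("llrintf: 0x%llx\n", (unsigned long long)llrintf(x));
       return 0;
    }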
How to arrive at this conclusion:
1. Simplify the test to do less work:
   a. Comment out all "formats" other than GL_RED
   b. Comment out all "types" other than GL_SHORT and GL_UNSIGNED_INT
2. Break in get_tex_rgba_uncompressed.
   We should be doing an R16_SNORM -> R32_UNORM conversion here.
3. Note the conversion to RGBA32_FLOAT for transfer ops.
4. Observe that the _mesa_format_convert call at texgetimage.c:546 receives the same float source data in both 32-bit and 64-bit builds, but produces different R32_UNORM result data (see the sketch after this list).
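For reference, here is a hedged sketch of that conversion chain. The helper names are made up for illustration; Mesa's real helpers live in format_utils.h and round via _mesa_lroundevenf():

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative R16_SNORM -> float -> R32_UNORM chain, per the steps
     * above.  Not Mesa's actual code. */
    static float snorm16_to_float(int16_t x)
    {
       /* GL snorm decode: x / (2^15 - 1), clamped to [-1, 1] */
       float f = (float)x / 32767.0f;
       return f < -1.0f ? -1.0f : f;
    }

    static uint32_t float_to_unorm32(float x)
    {
       if (x <= 0.0f)
          return 0;
       if (x >= 1.0f)
          return 0xffffffffu;
       /* The bug lives here: rounding via lrintf() squeezes the result
        * through a long, which is only 32 bits on 32-bit systems. */
       return (uint32_t)lrintf(x * (float)0xffffffffu);
    }

    int main(void)
    {
       int16_t texel = 18724;   /* decodes to exactly the 0.571428597f above */
       float f = snorm16_to_float(texel);
       printf("float %f -> unorm32 0x%08x\n", f, (unsigned)float_to_unorm32(f));
       return 0;
    }

On a 64-bit build this prints 0x92492500; on a 32-bit build the lrintf() conversion is out of range and you get the 0x80000000 seen above.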
Oh right, because the expected result (0x92492500) is larger than LONG_MAX (0x7fffffff) when long is 32 bits, so the lrintf()-based rounding overflows and comes back as 0x80000000.
llrintf() works, of course, since it returns a 64-bit long long. So perhaps _mesa_float_to_unorm() needs to use llrintf() when dst_bits == 32?
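Something along these lines, i.e. clamp first and round through llrintf()'s 64-bit result. This is only a sketch of the suggested shape, not the actual patch; the name mimics _mesa_float_to_unorm() but the body is illustrative:

    #include <math.h>
    #include <stdint.h>

    /* Sketch: round via llrintf() so the dst_bits == 32 case never has to
     * squeeze a full-range unorm32 value through a 32-bit long. */
    static inline uint32_t
    float_to_unorm(float x, unsigned dst_bits)
    {
       /* (1u << 32) is undefined, so special-case dst_bits == 32. */
       const uint32_t max = (dst_bits == 32) ? 0xffffffffu
                                             : (1u << dst_bits) - 1;
       if (x <= 0.0f)
          return 0;
       if (x >= 1.0f)
          return max;
       return (uint32_t)llrintf(x * (float)max);
    }

With the default floating-point environment, llrintf() rounds half-to-even, which matches what the _mesa_lroundevenf() name implies.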
Fixed by this commit:
Author: Kenneth Graunke <firstname.lastname@example.org>
Date:   Fri Aug 23 11:10:30 2019 -0700

    mesa: Fix _mesa_float_to_unorm() on 32-bit systems.