RGBA32F and RGBA32I variants of the test work fine. Very odd issue. See the results.
This is hit by dEQP-GLES31.functional.texture.border_clamp.formats.rgba32ui.nearest_size_pot which needs my recent patches to run, in addition to some other things. If you're interested in running this yourself, let me know and I can provide the various instructions.
Attached are the error and expected images. Note that it seems like the border color "eventually" works out, but near the image it's white.
Created attachment 121815 [details]
what the image should look like
Created attachment 121816 [details]
what the image looks like
It should be noted that the broken case has:
Border color is (1288490240, 3006477056, 858993472, 2147483648)
while a working case in the log has:
Border color is (3865470464, 858993472, 1717986944, 2576980480)
Not sure if it hates the 0x80000000 value, or if it's based on the position of the rendered image. (The one that passes is in the upper-right corner.) The image itself is 32x16.
This is pretty strange. Overriding the texturing format from R32G32B32A32_UINT to R32G32B32A32_SINT doesn't change the results at all - it still fails in the same manner. Changing it to FLOAT works though.
Which is weird, since the rgba32i test works...
This test is passing on my HSW; has this bug been fixed?
i965 CI shows that this test was fixed for HSW by deqp:
FWIW, here is the commit message that was merged:
Author: Nicolas Capens <email@example.com>
AuthorDate: Mon Jan 8 16:49:05 2018 -0500
Fix using representable texture channel ranges.
A value of 4294967295 (2^32 - 1) is not exactly representable in
IEEE-754 single-precision floating-point format, and so it gets rounded
to the next representable value, which is 4294967296. However, this
value can't be cast to an unsigned 32-bit integer without overflowing.
GLSL does not define what happens on overflow, and IEEE-754 defines it
as an exception but GLSL forbids exceptions. Hence some implementations
may produce unexpected results. dEQP assumed clamping to the largest representable value.
This change fixes that false assumption by reducing the range to values
representable in both float and integer formats.
Note that 32-bit integer formats can still hold values slightly larger
than these ranges. So while previously the floating-point ranges were
too large to represent integer values, they are now too small. This
can't be fixed without separating the integer format tests and only
using integer values to represent their ranges. This doesn't appear
necessary for the time being since the tests that use these floating-
point ranges have large 12/256 tolerances for the output color.
Google bug: 70910885