System Environment:
--------------------------
Arch:               x86_64
Platform:           Haswell
Libdrm:             (master)libdrm-2.4.40-6-g891517f5111cd82909906d5e8ee0299db0d46762
Mesa:               (master)a60c567fcf29f5d2a41222a8826fee2cb0eb4458
Xserver:            (master)xorg-server-1.13.99.901
Xf86_video_intel:   (master)2.20.17-57-g8881a14200580db731ca6902b289b08989aaa61e
Cairo:              (master)5f2e89660d5e38d8e2682945962521958f150825
Libva:              (staging)2e11d2273b2974a7d1959cbcaf8db5b8e9aedd9e
Libva_intel_driver: (staging)066c9f6532b11e1e3d0457520dea565cd29faea7
Kernel:             (drm-intel-nightly) 52d697693f8e5bebd9c469dff214f608438bd6b8

Bug detailed description:
-------------------------
It fails on Haswell with the Mesa master branch. It works well on Ivybridge.

Bisect shows a60c567fcf29f5d2a41222a8826fee2cb0eb4458 is the first bad commit.

commit a60c567fcf29f5d2a41222a8826fee2cb0eb4458
Author:     Kenneth Graunke <kenneth@whitecape.org>
AuthorDate: Fri Jan 4 07:53:12 2013 -0800
Commit:     Kenneth Graunke <kenneth@whitecape.org>
CommitDate: Mon Jan 7 16:48:02 2013 -0800

    i965: Support GL_FIXED and packed vertex formats natively on Haswell+.

    Haswell and later support the GL_FIXED and 2_10_10_10_rev vertex
    formats natively, and don't need shader workarounds.

    Reviewed-by: Eric Anholt <eric@anholt.net>

output:
Int vertices - 2/10/10/10
Unsigned Int vertices - 2/10/10/10
Int Color - 2/10/10/10
Probe at (45,5)
  Expected: 1.000000 0.000000 0.000000 0.333000
  Observed: 1.000000 0.000000 0.000000 0.000000
Unsigned Int Color - 2/10/10/10
Int BGRA Color - 2/10/10/10
Probe at (85,5)
  Expected: 0.000000 0.000000 1.000000 0.333000
  Observed: 0.000000 0.000000 1.000000 0.000000
Unsigned Int BGRA Color - 2/10/10/10
Int 2/10/10/10 - test ABI
Unsigned 2/10/10/10 - test ABI
PIGLIT: {'result': 'fail' }

Reproduce steps:
----------------
1. start X
2. ./bin/draw-vertices-2101010 -auto
Hmm. This seems very believable, but I just retested it and it's working fine on my Haswell machine...
Hua, does it happen on all our HSW?
(In reply to comment #2)
> Hua, does it happen on all our HSW?

Yes, it still fails.
Fails on my HSW -- I assume it's an issue related to the equation differences in bf75a1f09.
Okay, I can reproduce this finally. Not sure why I couldn't earlier. At least this means both Haswell and Bay Trail follow the same rules.
Ken, You did some triage on this, and there was some discussion on IRC. What was the outcome?
It looks like the test just needs to be updated to accept either decoding of the 10-bit signed-normalized components.
Since this is a test error, I'm decreasing the severity and removing from the release tracker.
Fixed by piglit commit:

commit 35daaa1695ea01eb85bc02f9be9b6ebd1a7113a1
Author: Kenneth Graunke <kenneth@whitecape.org>
Date:   Mon Dec 25 21:10:16 2017 -0800

    draw-vertices-2101010: Accept either SNORM conversion formula.

    OpenGL defines two equations for converting from signed-normalized
    to floating point data.  These are:

        f = (2c + 1) / (2^b - 1)            (equation 2.2)
        f = max{c / (2^(b-1) - 1), -1.0}    (equation 2.3)

    ARB_vertex_type_2_10_10_10_rev specifies equation 2.2 is to be used.
    However, OpenGL 4.2 switched to use equation 2.3 in all scenarios.
    This matched an earlier OpenGL ES 3.0 decision to only have one
    formula, as well as a DirectX decision to change to equation 2.3.
    Some hardware also only supports equation 2.3.

    So, basically no one can rely on equation 2.2 happening, and some
    people do rely on 2.3.  This patch continues to require equation 2.3
    for GL 4.2+, but relaxes the test to allow either behavior for
    earlier GL versions.

    See the following discussion for more details:
    https://lists.freedesktop.org/archives/mesa-dev/2013-August/042680.html

    This makes this test pass on i965 with Haswell and later.

    Reviewed-by: Roland Scheidegger <sroland@vmware.com>