System Environment:
--------------------------
Arch:               i386
Platform:           Pineview
Libdrm:             (master)libdrm-2.4.39-5-gb925022a3e4616665b388a78abab4e3270b4b4ec
Mesa:               (master)23cd6c43da6fb1ff89b994664df2658a7929402e
Xserver:            (master)xorg-server-1.13.0
Xf86_video_intel:   (master)2.20.7-4-gbc73acbd4da53bc50752c0413adcd0ce876e0a03
Libva:              (staging)fe623b5d424dbb97763b0aa6e80376b35a214595
Libva_intel_driver: (staging)f842b5021acbf093572a1bd0f86d32ff3e08621e
Kernel:             (drm-intel-nightly) a1ef61717033fa7bf413b59a410a74fc66ee2b9a

Bug detailed description:
-------------------------
It fails on Pineview with the mesa master branch. It doesn't happen on the mesa 9.0 branch.

The last known good commit: 9a31e090efb15ec34e7a1a5e707d600a11d74925
The last known bad commit:  23cd6c43da6fb1ff89b994664df2658a7929402e

Output:
Probe at (125,0)
  Expected: 0.000000 0.000000 0.000000 0.000000
  Observed: 1.000000 1.000000 1.000000 1.000000
PIGLIT: {'result': 'fail' }

Reproduce steps:
----------------------------
1. xinit
2. ./bin/shader_runner tests/spec/glsl-1.20/execution/clipping/vs-clip-vertex-const-reject.shader_test -auto -fbo
Bisect shows f73ffacbf0c65ad843406af37aa35e9112bc8038 is the first bad commit.

commit f73ffacbf0c65ad843406af37aa35e9112bc8038
Author:     Brian Paul <brianp@vmware.com>
AuthorDate: Tue Sep 4 10:02:20 2012 -0600
Commit:     Brian Paul <brianp@vmware.com>
CommitDate: Tue Sep 4 11:36:58 2012 -0600

    mesa: fix DIFFERENT_SIGNS() function

    Looks like converting this to a macro, returning bool, caused us to
    lose the high (31st) bit result.  Fixes piglit fbo-1d test.  Strange
    that none of the other tests I ran caught this.

    Fixes https://bugs.freedesktop.org/show_bug.cgi?id=54365

    Tested-by: Vinson Lee <vlee@freedesktop.org>
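For reference, here is a minimal sketch of the before/after shape of that fix, assuming the usual IEEE-754 single-precision layout. The exact Mesa code differs (it uses Mesa's fi_type union rather than memcpy), and float_bits() is a hypothetical helper added for this illustration:

#include <string.h>

typedef unsigned char GLboolean;   /* GL's 8-bit boolean type */

/* Hypothetical helper: reinterpret a float's bit pattern as unsigned. */
static unsigned float_bits(float f)
{
   unsigned u;
   memcpy(&u, &f, sizeof u);
   return u;
}

/* Before the fix: the expression evaluates to 0 or 0x80000000, and
 * returning it through GLboolean truncates 0x80000000 down to 0. */
static GLboolean different_signs_broken(float x, float y)
{
   return (float_bits(x) ^ float_bits(y)) & (1u << 31);
}

/* After the fix: '!!' collapses the result to 0 or 1 before the
 * narrowing conversion, so the sign information survives. */
static GLboolean different_signs_fixed(float x, float y)
{
   return !!((float_bits(x) ^ float_bits(y)) & (1u << 31));
}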
This open-coded macro is just getting annoying.

Matt,

Can you put together an autoconf check for the signbit function and change
IS_NEGATIVE to use that?

#ifdef HAVE_SIGNBIT
#define IS_NEGATIVE(x) (signbit(x) != 0)
#define DIFFERENT_SIGN(x, y) (IS_NEGATIVE(x) != IS_NEGATIVE(y))
#else
...
#endif
(In reply to comment #2)
> This open-coded macro is just getting annoying.
>
> Matt,
>
> Can you put together an autoconf check for the signbit function and change
> IS_NEGATIVE to use that?
>
> #ifdef HAVE_SIGNBIT
> #define IS_NEGATIVE(x) (signbit(x) != 0)
> #define DIFFERENT_SIGN(x, y) (IS_NEGATIVE(x) != IS_NEGATIVE(y))
> #else
> ...
> #endif

Patch sent:
http://lists.freedesktop.org/archives/mesa-dev/2012-September/027466.html

First I want to attempt to use signbit() everywhere if we can. If we can't,
we can add the #ifdef spaghetti.
(In reply to comment #3)
> Patch sent:
> http://lists.freedesktop.org/archives/mesa-dev/2012-September/027466.html
>
> First I want to attempt to use signbit() everywhere if we can. If we
> can't, we can add the #ifdef spaghetti.

This patch is now committed. lu, does it fix the test on Pineview?

http://cgit.freedesktop.org/mesa/mesa/commit/?id=0f3ba405eada72e1ab4371948315b28608903927
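For context, the committed change moves to signbit() from <math.h> (C99), which makes the bit-twiddling unnecessary. A minimal sketch of that shape, which may differ in detail from the exact definition in Mesa's macros.h:

#include <math.h>

/* signbit() returns nonzero iff the sign bit is set, so comparing the
 * two results tells us whether the signs differ. */
#define DIFFERENT_SIGNS(x, y)  (signbit(x) != signbit(y))

With this form, DIFFERENT_SIGNS(1.0f, -1.0f) evaluates to 1 and DIFFERENT_SIGNS(1.0f, 2.0f) to 0, with no narrowing conversion involved.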
It still fails on the master branch at commit 60e610e0426d79fc9080df1a9ae0a6be39af1380.
It still fails on the latest mesa master and the 9.1 branch.
Created attachment 83075
Debugging code to analyze the two DIFFERENT_SIGNS macro implementations

I did a bunch of analysis on this tonight and discovered the problem: GLboolean.

Let's compare 1.0 and -1.0. The new implementation of DIFFERENT_SIGNS gives:

   signbit(1.0) != signbit(-1.0)
   0 != 8
   1

While the old implementation gives:

   (0x3f800000 ^ 0xbf800000) & (1u << 31)
   0x80000000 & (1u << 31)
   0x80000000
   0   <---- (WTF?!?)

The critical realization is that this macro returns a GLboolean, which is a
typedef for "unsigned char". 0x80000000 is too big to fit in an unsigned
char, so the high bits are simply lost, resulting in 0...or...FALSE.

So, the old function was indeed broken. Brian's patch to add !! correctly
fixed it (forcing it to 0/1 before truncating). Matt's patch simplified it.

The question then is: what's wrong with the TNL clipping code that it breaks
for -1.0/1.0? Changing DIFFERENT_SIGNS on t_vb_cliptmp.h:48 to use the old
macro is sufficient to make the test pass - it's not necessary to change
IS_NEGATIVE or any other code.
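A tiny standalone program (an illustration written for this report, not the attached debugging code) that reproduces the walk-through above on a typical IEEE-754 platform; the exact nonzero value signbit() returns is implementation-defined, so it may not be the 8 shown above:

#include <math.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
   float x = 1.0f, y = -1.0f;
   unsigned xi, yi;
   memcpy(&xi, &x, sizeof xi);                         /* 0x3f800000 */
   memcpy(&yi, &y, sizeof yi);                         /* 0xbf800000 */

   unsigned wide = (xi ^ yi) & (1u << 31);             /* 0x80000000 */
   unsigned char as_glboolean = (unsigned char) wide;  /* high bits lost */

   printf("wide result:   0x%08x\n", wide);                  /* 0x80000000 */
   printf("as GLboolean:  %u\n", as_glboolean);              /* 0 - the 'WTF' */
   printf("signbit-based: %d\n", signbit(x) != signbit(y));  /* 1 */
   return 0;
}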
Dylan Baker reminded me that since the old DIFFERENT_SIGNS macro would only
return 0 or (1u << 31), as a GLboolean it would always return false. That
means this TNL clipping code would never have executed until Brian fixed the
bug - which is when the regression happened.

Should the code be deleted?
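To illustrate why that matters, here is a simplified, hypothetical shape of such a clip guard (not the actual t_vb_cliptmp.h code): if the predicate can never be true, the whole clip-interpolation branch is dead code.

#include <math.h>

#define DIFFERENT_SIGNS(x, y)  (signbit(x) != signbit(y))

/* d0 and d1 are the signed distances of an edge's two endpoints from a
 * clip plane; a sign change means the edge crosses the plane. */
static int edge_crosses_plane(float d0, float d1)
{
   if (DIFFERENT_SIGNS(d0, d1)) {
      /* With the old macro, whose result was always truncated to 0,
       * this branch could never be taken. */
      return 1;
   }
   return 0;
}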
All the DIFFERENT_SIGNS business is just a red herring, and this isn't a real
regression. It was just dumb luck that vs-clip-vertex-const-reject.shader_test
ever passed. I tried several of the other vs-clip-vertex-* tests; none of them
pass, because the clip code always uses gl_Position for clipping instead of
gl_ClipVertex.

I'm removing this from the blocker list and updating the summary.
As it turns out this wasn't a regression, and it's unlikely anyone is going to
work on improving software TNL, so I'm going to close this for now.