| Summary: | gl_ClipVertex support horribly broken with software TNL | | |
|---|---|---|---|
| Product: | Mesa | Reporter: | lu hua <huax.lu> |
| Component: | Mesa core | Assignee: | marius predut <marius.predut> |
| Status: | RESOLVED WONTFIX | QA Contact: | mesa-dev |
| Severity: | major | | |
| Priority: | high | CC: | brianp, chrisf, christophe.prigent, idr, xunx.fang |
| Version: | git | | |
| Hardware: | All | | |
| OS: | Linux (All) | | |
| Whiteboard: | | | |
| i915 platform: | | i915 features: | |
| Attachments: | Debugging code to analyze the two DIFFERING_SIGN macro implementations | | |
Description
lu hua
2012-09-12 08:22:42 UTC
Bisect shows: f73ffacbf0c65ad843406af37aa35e9112bc8038 is the first bad commit.

    commit f73ffacbf0c65ad843406af37aa35e9112bc8038
    Author:     Brian Paul <brianp@vmware.com>
    AuthorDate: Tue Sep 4 10:02:20 2012 -0600
    Commit:     Brian Paul <brianp@vmware.com>
    CommitDate: Tue Sep 4 11:36:58 2012 -0600

        mesa: fix DIFFERENT_SIGNS() function

        Looks like converting this to a macro, returning bool, caused us to
        lose the high (31st) bit result.  Fixes piglit fbo-1d test.  Strange
        that none of the other tests I ran caught this.

        Fixes https://bugs.freedesktop.org/show_bug.cgi?id=54365

        Tested-by: Vinson Lee <vlee@freedesktop.org>

This open-coded macro is just getting annoying.

Matt,

Can you put together an autoconf check for the signbit function and change IS_NEGATIVE to use that?

    #ifdef HAVE_SIGNBIT
    #define IS_NEGATIVE(x) (signbit(x) != 0)
    #define DIFFERENT_SIGN(x, y) (IS_NEGATIVE(x) == IS_NEGATIVE(y))
    #else
    ...
    #endif

(In reply to comment #2)
> This open-coded macro is just getting annoying.
>
> Matt,
>
> Can you put together an autoconf check for the signbit function and change
> IS_NEGATIVE to use that?
>
> #ifdef HAVE_SIGNBIT
> #define IS_NEGATIVE(x) (signbit(x) != 0)
> #define DIFFERENT_SIGN(x, y) (IS_NEGATIVE(x) == IS_NEGATIVE(y))
> #else
> ...
> #endif

Patch sent:
http://lists.freedesktop.org/archives/mesa-dev/2012-September/027466.html

First I want to attempt to use signbit() everywhere if we can. If we can't, we can add the #ifdef spaghetti.

(In reply to comment #3)
> (In reply to comment #2)
> > This open-coded macro is just getting annoying.
> >
> > Matt,
> >
> > Can you put together an autoconf check for the signbit function and change
> > IS_NEGATIVE to use that?
> >
> > #ifdef HAVE_SIGNBIT
> > #define IS_NEGATIVE(x) (signbit(x) != 0)
> > #define DIFFERENT_SIGN(x, y) (IS_NEGATIVE(x) == IS_NEGATIVE(y))
> > #else
> > ...
> > #endif
>
> Patch sent:
> http://lists.freedesktop.org/archives/mesa-dev/2012-September/027466.html
>
> First I want to attempt to use signbit() everywhere if we can. If we can't,
> we can add the #ifdef spaghetti.

This patch is now committed. lu, does it fix the test on Pineview?
http://cgit.freedesktop.org/mesa/mesa/commit/?id=0f3ba405eada72e1ab4371948315b28608903927

It still fails on master branch commit 60e610e0426d79fc9080df1a9ae0a6be39af1380.

It still fails on latest mesa master and 9.1 branch.

Created attachment 83075 [details] [review]
Debugging code to analyze the two DIFFERING_SIGN macro implementations

I did a bunch of analysis on this tonight, and discovered the problem: GLboolean.

Let's compare 1.0 and -1.0.

The new implementation of DIFFERING_SIGNS gives:

    signbit(1.0) != signbit(-1.0)
    0 != 8
    1

While the old implementation gives:

    (0x3f800000 ^ 0xbf800000) & (1u << 31)
    0x80000000 & (1u << 31)
    0x80000000
    0   <---- (WTF?!?)

The critical realization is that this macro returns a GLboolean, which is a typedef for "unsigned char". 0x80000000 is too big to fit in an unsigned char, so the high bits are simply lost, resulting in 0... or... FALSE.

So, the old function was indeed broken. Brian's patch to add !! correctly fixed it (forcing it to 0/1 before truncating). Matt's patch simplified it.

The question then is: what's wrong with the TNL clipping code that it breaks for -1.0/1.0? Changing DIFFERING_SIGNS on t_vb_cliptmp.h:48 to use the old macro is sufficient to make the test pass - it's not necessary to change IS_NEGATIVE or any other code.
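To make the truncation described above easy to reproduce outside of Mesa, here is a minimal standalone sketch. It is not Mesa's actual code: GLboolean is simply modeled as unsigned char (matching the typedef mentioned in the analysis), and float_bits() is a hypothetical helper standing in for however the real macro reads a float's bit pattern.

```c
/*
 * Standalone sketch of the GLboolean truncation described above (not Mesa
 * code). The "old" check XORs the float bit patterns and masks the sign
 * bit, so its raw result is either 0 or 0x80000000; narrowing that to an
 * unsigned char silently drops the high bit and yields 0.
 */
#include <math.h>
#include <stdio.h>
#include <string.h>

typedef unsigned char GLboolean;  /* 8-bit boolean, as in GL */
typedef unsigned int  GLuint;

/* Bit pattern of a float, for the XOR-based comparison. */
static GLuint float_bits(float f)
{
   GLuint u;
   memcpy(&u, &f, sizeof(u));
   return u;
}

/* Old-style check: correct as a 32-bit value, wrong once narrowed. */
#define OLD_DIFFERENT_SIGNS(x, y) \
   ((float_bits(x) ^ float_bits(y)) & (1u << 31))

/* signbit()-based check in the spirit of the committed fix. */
#define NEW_DIFFERENT_SIGNS(x, y) (signbit(x) != signbit(y))

int main(void)
{
   float a = 1.0f, b = -1.0f;

   GLuint    raw         = OLD_DIFFERENT_SIGNS(a, b);   /* 0x80000000 */
   GLboolean truncated   = OLD_DIFFERENT_SIGNS(a, b);   /* high bit lost: 0 */
   GLboolean with_bang   = !!OLD_DIFFERENT_SIGNS(a, b); /* forced to 0/1: 1 */
   GLboolean via_signbit = NEW_DIFFERENT_SIGNS(a, b);   /* 1 */

   printf("raw 32-bit result: 0x%08x\n", raw);
   printf("as GLboolean:      %d\n", truncated);
   printf("with !!:           %d\n", with_bang);
   printf("signbit-based:     %d\n", via_signbit);
   return 0;
}
```

Built with any C99 compiler, the raw 32-bit result prints 0x80000000, the GLboolean copy prints 0, and both the !! and signbit() variants print 1, which matches the walkthrough above.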
Dylan Baker reminded me that since the old DIFFERENT_SIGNS macro would only return 0 or (1u << 31)... as a GLboolean... that meant it would always return false. Which means this TNL clipping code would never have executed until Brian fixed the bug... which is when the regression happened.

Should the code be deleted?

All the DIFFERING_SIGN business is just a red herring, and this isn't a real regression. It was just dumb luck that vs-clip-vertex-const-reject.shader_test ever passed. I tried several of the other vs-clip-vertex-* tests; none of them pass, because the clip code always uses gl_Position for clipping instead of gl_ClipVertex. I'm removing this from the blocker list and updating the summary.

As it turns out, this wasn't a regression, and it's unlikely anyone is going to work on improving software TNL, so I'm going to close this for now.
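For reference, the behavior the failing vs-clip-vertex-* tests expect is roughly the selection rule sketched below. This is an illustrative sketch only, not Mesa's TNL code; the vec4, dot4, and inside_user_plane names are made up. The point it shows is the one in the comment above: user clip planes should be tested against gl_ClipVertex when the vertex shader writes it, whereas the software TNL path keeps clipping against gl_Position, so writes to gl_ClipVertex are effectively ignored.

```c
/*
 * Illustrative sketch of user-plane clipping coordinate selection
 * (hypothetical types and names, not Mesa's TNL code).
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y, z, w; } vec4;

static float dot4(vec4 a, vec4 b)
{
   return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

/*
 * A vertex is inside a user clip plane when the dot product with the
 * appropriate clip coordinate is >= 0.  When the shader writes
 * gl_ClipVertex, that value should be used; the bug is that software TNL
 * keeps using gl_Position instead.
 */
static bool inside_user_plane(vec4 plane, vec4 position, vec4 clip_vertex,
                              bool shader_wrote_clip_vertex)
{
   vec4 coord = shader_wrote_clip_vertex ? clip_vertex : position;
   return dot4(coord, plane) >= 0.0f;
}

int main(void)
{
   vec4 plane       = {  1.0f, 0.0f, 0.0f, 0.0f }; /* keeps x >= 0 */
   vec4 position    = {  0.5f, 0.0f, 0.0f, 1.0f }; /* inside the plane */
   vec4 clip_vertex = { -0.5f, 0.0f, 0.0f, 1.0f }; /* outside the plane */

   /* Honoring gl_ClipVertex rejects the vertex; ignoring it keeps it,
    * which is the failure mode a const-reject style test would catch. */
   printf("using gl_ClipVertex: %s\n",
          inside_user_plane(plane, position, clip_vertex, true)
             ? "kept" : "clipped");
   printf("using gl_Position:   %s\n",
          inside_user_plane(plane, position, clip_vertex, false)
             ? "kept" : "clipped");
   return 0;
}
```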