Bug 29835 - [regression] Post-GLSL2 Mesa culls too much by default.
Status: RESOLVED FIXED
Alias: None
Product: Mesa
Classification: Unclassified
Component: Mesa core
Version: git
Hardware: x86 (IA32)
OS: Linux (All)
Priority: medium
Severity: major
Assignee: mesa-dev
Reported: 2010-08-27 03:41 UTC by Cedric Vivier
Modified: 2010-08-31 15:44 UTC
CC: 1 user

Description Cedric Vivier 2010-08-27 03:41:03 UTC
I've recently been seeing weird rendering results in an app I'm working on.

After investigation, it seems Mesa now optimizes away/culls too many triangles by default, even though GL_CULL_FACE is disabled.
This has very visible rendering consequences when using 'discard' in a fragment shader, for instance.

With the help of ShaderMaker I was able to isolate a simple test case for this:

Apply an RGBA texture with an alpha=0.0 region and a fragment shader like this:

--
uniform sampler2D tex0;

void main()
{
    vec4 col = texture2D(tex0, gl_TexCoord[0].xy);
    if (col.w == 0.0)
        discard;
    gl_FragColor = col;
}
--

Some (though interestingly not all) of the back-facing polygons are no longer drawn as they should be (GL_CULL_FACE is disabled and only 'discard' is in use).


Screenshot before (pre-GLSL2 merge, with the NVIDIA driver):
http://neonux.com/webgl/before.png

Screenshot after (post-GLSL2 merge):
http://neonux.com/webgl/after.png


(The texture used is http://neonux.com/webgl/128-40-RGBA.png, if you'd like to do the same quick testing.)


I tried to bisect the issue but could not get very far, as there seem to have been a lot of build failures in the period when the regression happened.

However, my last known good commit is 15a3b42e135a3a2cb463ec3cff80a55dd8528051 (just before the GLSL2 merge) and my first known bad commit is 279aeebff5d5dcdad89f513f5727fc545ec96039, but it's likely that the regression happened somewhere within the glsl2 branch itself.
Comment 1 Eric Anholt 2010-08-27 15:45:54 UTC
We've got some discard tests that look a lot like that, and they pass on swrast and 965.  What hardware are you using?  If you can reproduce the problem with swrast, could you make a piglit shader_runner testcase for your particular failure?
Comment 2 Cedric Vivier 2010-08-27 21:34:42 UTC
I've tested on both i965 and swrast; both give the same result.

Eric, which discard tests are you specifically talking about?

In case the report is confusing: 'discard' itself is not the issue here; it's just an easily reproducible means of noticing that recent Mesa is culling/optimizing out some (not all) back-facing (and only back-facing) polygons that would have been overdrawn if there were no 'discard'.
Comment 3 Eric Anholt 2010-08-31 15:44:34 UTC
Nope, it really was about discard. We weren't telling swrast and 965 that a pixel kill happened, so depth writes went ahead anyway, and those back faces would then get rejected by the depth test (sometimes, depending on the order the triangles were processed).

piglit:
commit 99c3ae0b4889af94b98fd8722e4b501764741263
Author: Eric Anholt <eric@anholt.net>
Date:   Tue Aug 31 12:36:31 2010 -0700

    glsl-fs-discard-02: Test that early depth writes don't happen with "discard"

mesa:
commit 9b075cb9fa9eb6a95d0816283ef01ae72dafa680
Author: Eric Anholt <eric@anholt.net>
Date:   Tue Aug 31 13:02:59 2010 -0700

    ir_to_mesa: When emitting a pixel kill, flag that we did so.
    
    Both i965 and swrast rely on UsesKill to determine whether to do early
    depth writes.  Fixes glsl-fs-discard-02.
    
    Bug #29835.
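
For readers unfamiliar with piglit's shader_runner format, a test for the behavior described above (depth must not be written for fragments that are discarded) could look roughly like the sketch below. This is not the actual glsl-fs-discard-02 file; the uniform names (z, do_discard, color) are made up for illustration, and the [test] commands used (enable, clear depth, draw rect, probe rgb) are assumed to behave as in shader_runner's usual command set, with the probe coordinates assuming the default piglit window.

--
[require]
GLSL >= 1.10

[vertex shader]
uniform float z;

void main()
{
    /* Place the quad at a caller-chosen clip-space depth. */
    gl_Position = vec4(gl_Vertex.xy, z, 1.0);
}

[fragment shader]
uniform float do_discard;
uniform vec4 color;

void main()
{
    /* When do_discard is set, every fragment is killed, so this draw
       must write neither color nor depth. */
    if (do_discard > 0.5)
        discard;
    gl_FragColor = color;
}

[test]
enable GL_DEPTH_TEST
clear color 0.0 0.0 0.0 1.0
clear depth 1.0
clear

uniform float z -0.5
uniform float do_discard 1.0
uniform vec4 color 1.0 0.0 0.0 1.0
draw rect -1 -1 2 2

uniform float z 0.5
uniform float do_discard 0.0
uniform vec4 color 0.0 1.0 0.0 1.0
draw rect -1 -1 2 2

probe rgb 64 64 0.0 1.0 0.0
--

The first draw covers the window with nearer fragments that are all discarded; the second draws a farther quad in green. With the bug described above, the first draw's early depth writes would cause the second quad to fail the depth test, so the probe would read back the clear color instead of green.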

