Bug 43000 - huge performance regression in ut2004 since 7.11
Summary: huge performance regression in ut2004 since 7.11
Status: RESOLVED NOTABUG
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/Gallium/r600
Version: git
Hardware: Other
OS: All
Importance: medium normal
Assignee: Default DRI bug account
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2011-11-16 11:37 UTC by almos
Modified: 2011-12-06 09:19 UTC
CC List: 0 users

See Also:
i915 platform:
i915 features:


Attachments

Description almos 2011-11-16 11:37:52 UTC
With 7.11 I get 60fps during the nvidia logo and in the menu. In-game it is e.g. ~44fps if I load ons-torlan and look at the central tower from the base.

With 7.12-dev (git-b618e78) I get <30fps during the nvidia logo, and ~6fps on the same level.

I must add that 7.11 isn't quite playable either, because the fps has very high variance: it jumps between 20 and 60, which makes the game very laggy.
Comment 1 Alex Deucher 2011-11-16 11:42:52 UTC
What hardware are you using? Is mesa the only part that changed, or did you also update your kernel and/or ddx? If it's just mesa, can you bisect? If you upgraded multiple parts, can you track down which component caused the problem?
Comment 2 almos 2011-11-16 11:52:30 UTC
The hw is Barts PRO (HD 6850). The only part that changed is mesa: 7.11 is installed (Debian unstable), and I compiled one from git. In the latter case I start programs as
LD_LIBRARY_PATH=/home/almos/SRC/mesa/lib/ LIBGL_DRIVERS_PATH=/home/almos/SRC/mesa/lib/gallium "$@"

I'll try to bisect later.
Comment 3 Ian Romanick 2011-11-16 14:25:23 UTC
If this was a recent change, I'll guess that it will bisect to my changes to the way uniforms are handled.  I pushed a patch today that may restore previous performance:

commit 010dc29283cfc7791a29ba8a0570d8f7f9edef05
Author: Ian Romanick <ian.d.romanick@intel.com>
Date:   Thu Nov 10 12:32:35 2011 -0800

    mesa: Only update sampler uniforms that are used by the shader stage
    
    Previously a vertex shader that used no samplers would get updated (by
    calling the driver's ProgramStringNotify) when a sampler in the
    fragment shader was updated.  This was discovered while investigating
    some spurious code generation for shaders in Cogs.  The behavior in
    Cogs is especially pessimal because it ping-pongs sampler uniform
    settings:
    
        glUniform1i(sampler1, 0);
        glUniform1i(sampler2, 1);
        draw();
        glUniform1i(sampler1, 1);
        glUniform1i(sampler2, 0);
        draw();
        glUniform1i(sampler1, 0);
        glUniform1i(sampler2, 1);
        draw();
        // etc.
    
    ProgramStringNotify is still too big of a hammer.  Applications like
    Cogs will still defeat the shader cache.  A lighter-weight mechanism
    that can work with the shader cache is needed.  However, this patch at
    least restores the previous behavior.
    
    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
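
To illustrate the idea in that patch, here is a rough sketch; the names and data structures below are hypothetical, not Mesa's actual internals. When a sampler uniform changes, only the shader stages that actually reference that sampler get the driver notification.

    /* Illustrative sketch only: stage, notify_driver, etc. are made-up
     * names, not Mesa internals. The point is the extra check: skip a
     * stage entirely if it never references the updated sampler. */
    #include <stdio.h>

    enum { STAGE_VERTEX, STAGE_FRAGMENT, NUM_STAGES };

    struct stage {
        const char *name;
        unsigned samplers_used;  /* bitmask of sampler uniforms this stage reads */
    };

    /* Stand-in for the driver's ProgramStringNotify-style recompile hook. */
    static void notify_driver(const struct stage *s)
    {
        printf("recompiling %s shader\n", s->name);
    }

    static void update_sampler(struct stage stages[], unsigned sampler)
    {
        for (int i = 0; i < NUM_STAGES; i++) {
            /* The fix: a vertex shader that uses no samplers is no longer
             * recompiled when a fragment-shader sampler ping-pongs. */
            if (stages[i].samplers_used & (1u << sampler))
                notify_driver(&stages[i]);
        }
    }

    int main(void)
    {
        struct stage stages[NUM_STAGES] = {
            { "vertex",   0x0 },  /* references no samplers */
            { "fragment", 0x3 },  /* references samplers 0 and 1 */
        };
        update_sampler(stages, 0);  /* only the fragment stage is notified */
        update_sampler(stages, 1);
        return 0;
    }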
Comment 4 almos 2011-11-18 05:25:58 UTC
OK, now I found the reason: I still haven't got used to my new 64-bit system. ut2004 is 32-bit, and when I set LIBGL_DRIVERS_PATH, libGL.so reverts to indirect rendering. Ouch.
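
For reference, the quickest checks for this are LIBGL_DEBUG=verbose or glxinfo | grep direct. The same information can also be read programmatically; a minimal sketch (essentially a stripped-down glxinfo; build with gcc -lGL -lX11):

    /* Sketch: print whether GLX gives a direct context and which
     * renderer string libGL actually loaded. */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) { fprintf(stderr, "no GLX visual\n"); return 1; }

        /* An unmapped window with the right visual is enough to bind a context. */
        XSetWindowAttributes swa;
        swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                       vi->visual, AllocNone);
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                                   64, 64, 0, vi->depth, InputOutput,
                                   vi->visual, CWColormap, &swa);

        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True /* want direct */);
        glXMakeCurrent(dpy, win, ctx);

        printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "Yes" : "No");
        printf("OpenGL renderer:  %s\n", (const char *)glGetString(GL_RENDERER));
        printf("OpenGL version:   %s\n", (const char *)glGetString(GL_VERSION));

        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);
        XCloseDisplay(dpy);
        return 0;
    }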

What's worse, cross-compiling to 32-bit is currently FUBAR on Debian. The linker doesn't find /usr/lib32/libstdc++.so.6; I have to manually symlink it to /usr/lib32/libstdc++.so. The things in the ia32-libs package are ancient (mesa 7.7, for instance; I had to get 7.11 from the libgl1-mesa-dri i386 package). I couldn't compile wine because something is wrong with unistd_32.h, and compiling mesa stops with this:

/usr/bin/ld: skipping incompatible ../../auxiliary//libgallium.a when searching for -lgallium
/usr/bin/ld: cannot find -lgallium
collect2: ld returned 1 exit status

I can't imagine what the problem could be, because all the object files in that archive are 32-bit, so they should be compatible.
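
For what it's worth, the ELF class can be double-checked straight from a file's header; a sketch of what the file utility already reports (run it on a member extracted from the archive with ar x):

    /* Sketch: report whether an ELF file is 32-bit or 64-bit by
     * reading e_ident from its header. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 2; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 2; }

        unsigned char ident[EI_NIDENT];
        if (fread(ident, 1, EI_NIDENT, f) != EI_NIDENT ||
            memcmp(ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "%s: not an ELF file\n", argv[1]);
            fclose(f);
            return 1;
        }

        printf("%s: %s\n", argv[1],
               ident[EI_CLASS] == ELFCLASS32 ? "ELF 32-bit" :
               ident[EI_CLASS] == ELFCLASS64 ? "ELF 64-bit" : "unknown ELF class");
        fclose(f);
        return 0;
    }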

Back to the topic: I turned off SwapbuffersWait, which raised the fps with mesa 7.11, but the high variance remained. I'll try compiling a 32-bit version of current mesa git somehow, and see if this problem still exists.
Comment 5 almos 2011-11-18 07:00:34 UTC
Now I compiled a 32-bit r600g from mesa git on a 32-bit machine. The ad hoc benchmark results, now with SwapbuffersWait disabled:
7.11: nvidia logo 200-300fps, menu 70-200fps, ons-torlan looking at tower 20-80 avg 50fps (all with high variance)
7.12-dev git-08b288b: nvidia logo 60fps (capped at refresh rate??), menu 58fps, ons-torlan looking at tower 20-50 avg 30fps (high variance, but periodically sticks to 20fps for a couple of seconds)

I can't do a proper benchmark run, as it needs a run config file, and the config reader code segfaults :(
Comment 6 Michel Dänzer 2011-11-18 07:25:27 UTC
(In reply to comment #5)
> 7.12-dev git-08b288b: nvidia logo 60fps (capped at refresh rate??)

If so, the environment variable vblank_mode=0 should disable it.
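
(An application can also request this itself where the GLX_MESA_swap_control extension is exposed. A minimal sketch, assuming a GLX context is already current:)

    /* Sketch: lift the vblank cap from inside an app via
     * GLX_MESA_swap_control. Assumes a current GLX context. */
    #include <GL/glx.h>
    #include <GL/glxext.h>

    static void disable_vsync(void)
    {
        PFNGLXSWAPINTERVALMESAPROC swap_interval =
            (PFNGLXSWAPINTERVALMESAPROC)
                glXGetProcAddressARB((const GLubyte *)"glXSwapIntervalMESA");
        if (swap_interval)
            swap_interval(0);  /* 0 = don't wait for vblank in SwapBuffers */
    }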
Comment 7 almos 2011-11-18 07:51:26 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > 7.12-dev git-08b288b: nvidia logo 60fps (capped at refresh rate??)
> 
> If so, the environment variable vblank_mode=0 should disable it.

Numbers for 7.12-dev with vblank_mode=0:
nvidia logo 100-300fps, menu 74-120fps, torlan 25-94 avg 57fps

Now the performance thing seems solved. Should I rename this report or start a new one for the lagginess due to the high variance of fps?
Comment 8 Michel Dänzer 2011-12-06 09:19:44 UTC
(In reply to comment #7)
> Now the performance thing seems solved.

Great, resolving.

> Should I rename this report or start a new one for the lagginess due to the
> high variance of fps?

The latter, if anything.

