I'm experiencing constant fps drops every few seconds with both the Intel and Radeon cards (the latter with DRI_PRIME=1) in Source games. When using the Intel card with INTEL_DEBUG=perf I see "recompiling fragment shader" messages during the drops, see http://pastebin.com/5zV9uuRb These messages repeat on every drop, and the game goes from the maximum 40 fps for a couple of seconds down to ~10 fps for 3-4 seconds. This happens with the Radeon card too, and with GALLIUM_HUD=fps,buffer-wait-time I see the latter parameter increase to around 5k during the fps drops. This happens with Counter Strike: Global Offensive and Left 4 Dead 2 (the only games available for testing), even when standing still in an empty black room.

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) -- SANDYBRIDGE
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Whistler [Radeon HD 6630M/6650M/6750M/7670M/7690M] (rev ff)

OS: Archlinux x86-64 + Linux 3.18
Mesa git version 67105.0c7f895
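For reference, the environment variables above were set roughly like this, assuming the games are started through Steam launch options (%command% is Steam's placeholder for the game binary; this is just a sketch of the setup, not an exact copy):

INTEL_DEBUG=perf %command%                                 # Intel card, prints the recompile warnings
DRI_PRIME=1 GALLIUM_HUD=fps,buffer-wait-time %command%     # Radeon card via PRIME, with the Gallium HUD overlay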
Has your performance regressed? This fall there have been some compiler frontend improvements that allow e.g. more inlining to be done for some shaders, and as a result compilation can take (in the worst case even several times) longer than earlier. See bug 86140.

Recompile messages come from the backend, which is separate for AMD and Intel, i.e. you may need to file separate bugs for each (the Intel one would be for the "Drivers/DRI/i965" component).

> This happens with Counter Strike: Global Offensive and Left 4 Dead 2 (the only games available for testing), even when standing still in an empty black room.

Please give detailed instructions on how one can reach an "empty black room" where Mesa will do constant shader recompiles. Preferably in a single player / tutorial level which doesn't require hours of playing. (I haven't seen anything like that on HSW in those games.)

Alternatively, you could provide an Apitrace trace.
(In reply to Eero Tamminen from comment #1) > (I haven't seen anything like that on HSW in those games.) You won't - all of the reporter's recompiles are due to EXT_texture_swizzle or DEPTH_TEXTURE_MODE swizzling, which only happen on pre-Haswell.
I discovered a bug in the i965 driver: the precompile was guessing the texture swizzle incorrectly. These two patches should cut around 40% of the recompiles: http://lists.freedesktop.org/archives/mesa-dev/2014-December/073483.html http://lists.freedesktop.org/archives/mesa-dev/2014-December/073484.html
And it turns out we can eliminate the rest of them pretty easily too: http://lists.freedesktop.org/archives/mesa-dev/2014-December/073490.html http://lists.freedesktop.org/archives/mesa-dev/2014-December/073489.html With those four patches on top of Mesa master, I see no recompiles at all in CSGO on my Sandybridge.
(In reply to Eero Tamminen from comment #1)
> Has your performance regressed? This fall there have been some compiler
> frontend improvements that allow e.g. more inlining to be done for some
> shaders, and as a result compilation can take (in the worst case even several
> times) longer than earlier. See bug 86140.

I've always seen lag with the Radeon card, although it seems to be slowly getting better since my first tests 12 months ago. I don't have numbers to support that claim, though.

> Recompile messages come from the backend, which is separate for AMD and
> Intel, i.e. you may need to file separate bugs for each (the Intel one would be
> for the "Drivers/DRI/i965" component).

I've opened a generic bug report since this affects two different card vendors, and even if Intel has some specific bugs or performance issues, I suspect the problem is in the driver-independent code.

> > This happens with Counter Strike: Global Offensive and Left 4 Dead 2 (the only games available for testing), even when standing still in an empty black room.
>
> Please give detailed instructions on how one can reach an "empty black room"
> where Mesa will do constant shader recompiles. Preferably in a single player
> / tutorial level which doesn't require hours of playing.

Left 4 Dead 2 instructions: load a single player game on Dead Center, map 1. As soon as the game starts (helicopter scene) you get a noticeable slowdown, probably due to map loading. From that point on, you should get a noticeable lag every few seconds. You can get a bigger slowdown when meleeing zombies or during the explosion about 30 seconds into the game.

Note: this affects every map, and is constant at every point of the map. I wrote about the empty room just to point out that this happens even when there is nothing being rendered on the screen (apart from the HUD).

> (I haven't seen anything like that on HSW in those games.)
>
> Alternatively, you could provide an Apitrace trace.

Will do.
Could you please also test with Ken's 4 patches? That will tell us if it was just the recompiles or if there's something else we should be looking for.
(In reply to Jason Ekstrand from comment #6)
> Could you please also test with Ken's 4 patches? That will tell us if it
> was just the recompiles or if there's something else we should be looking
> for.

So, with the patches I confirm I no longer get messages about EXT_texture_swizzle recompiles, but the lag and performance issues are still present. Here's updated log output from the game with INTEL_DEBUG=perf: http://pastebin.com/1bp76x5e
Here's the apitrace with mesa-git and the 4 patches from comments #3 and #4: https://drive.google.com/file/d/0BwBQBTnr5Iv6WHBfeE50RUxvRUU/view?usp=sharing Warning: 290MB file, ~1GB uncompressed. The first 30 seconds are the game booting up and map loading, then there's about 1:40 of actual game trace.
I just checked L4D2 with mesa 10.3.2 (AMD Barts), and I see no such fps drops. Sure, in the beginning there are some hiccups, but once all shaders have been used at least once, everything is smooth.
(In reply to almos from comment #9)
> I just checked L4D2 with mesa 10.3.2 (AMD Barts), and I see no such fps
> drops. Sure, in the beginning there are some hiccups, but once all shaders
> have been used at least once, everything is smooth.

That's interesting... so I decided to test 10.3.2 with CS and L4D2:

Intel card: fps drops are still constantly present and of the same magnitude. I haven't tried Kenneth's patches, as I'm getting errors compiling vanilla Mesa 10.3.2 from git on Archlinux, so I'm using the old upstream packages.

Radeon card: no more fps drops. When standing still the fps graph is flat, compared to Mesa 10.4 or HEAD where the fps graph describes a sine wave. Performance is generally worse (40% slower), probably due to r600 improvements in later releases. For reference, mine is an AMD TURKS card.
After spending half a day bisecting, I don't think there's any real difference between Mesa 10.3.2 and master: the fps drops happen in both, although they seem to be less frequent in 10.3.2 and almost predictable in master.

Due to this behaviour it's hard to bisect and tell which commit introduced these performance issues.

I'm still at square one.
(In reply to Stéphane Travostino from comment #11)
> After spending half a day bisecting, I don't think there's any real
> difference between Mesa 10.3.2 and master: the fps drops happen in both,
> although they seem to be less frequent in 10.3.2 and almost predictable
> in master.
>
> Due to this behaviour it's hard to bisect and tell which commit introduced
> these performance issues.
>
> I'm still at square one.

Your comment is from early yesterday, and Kenneth's patches landed in Mesa at the end of yesterday. Do your _Intel_SNB_ issues go away if you use Mesa from today?
Does it still happen with the Radeon card with a 3.19-rc kernel?
If you are using the 3.18 kernel, you could also try the previous one (3.17.x). I have a similar problem on radeon (though it's TAHITI, so radeonsi) and found that it is a kernel regression. In my case, I can easily reproduce it by playing Minecraft - after loading a world, in the first minute there will always be a series of 1-3s pauses. https://bugzilla.kernel.org/show_bug.cgi?id=90741
I tried l4d2 again with mesa 10.5-dev (git-1829f9c), and still nothing. Kernel is the same as before (3.17.7). Do I need to underclock my CPU to see the lag spikes?
Is it possible there's a weird interaction with PRIME? @almos, does your system have a muxless setup?

My CPU is neither underclocked nor undervolted, and using the performance governor doesn't help in any way. Also, I had the same problem with 3.17.6 -- I'll soon try again with an updated Mesa and Linux 3.19-rc.
Status as of Linux 3.19-rc3, Mesa HEAD e28f9d0. Resolution 1280x800 out of the native 1600x900, all graphics detail at minimum.

Radeon: min fps 20, avg fps 65, max fps 95
In-game FPS averages 65, with no issues during the first minute of the game. After that there is a valley in the FPS chart of about 3 seconds @ 20 FPS, repeating throughout the game every 15 seconds or so. FPS drops are independent of scene complexity, as the same scene after the drop goes back to the average of 65 FPS.

Intel: min fps 10, avg fps 45, max fps 75
Same as Radeon, although the FPS drops seem to start immediately after the actual game begins. The FPS drops happen faster than with Radeon: about 5 seconds around 45 fps and 5 seconds around 20 fps, a constant up and down throughout the gameplay. No relation to scene complexity either.

Truncated log: http://pastebin.com/xHsekJUD -- this is less than 1 minute of gameplay. I confirm I no longer get EXT_texture_swizzle messages with Intel.

These values are from Left 4 Dead 2. The effect is the same with Counter Strike: Global Offensive, although the FPS values are lower, probably due to the higher complexity of the map and textures.
OK, from further experiments this bug does NOT ONLY affect Source games, but any 3D/OpenGL application. I've experienced the same issue, with varying degrees of severity, also in:

- Sauerbraten: average FPS around 180, with random quick <= 1 second drops to 40 FPS every 10 seconds, severely affecting gameplay (it being a multiplayer FPS)

- WebGL on both Chromium and Firefox, for example http://brm.io/matter-js-demo/#stress -- while moving a box around I get FPS drops every few seconds, from 60 FPS to 40, resolving by themselves after a couple of seconds.

Hopefully with these "free" tests I can find someone else experiencing the same issue.
Maybe a wild guess, but what are the temperatures of the GPUs in your laptop? Is this an overheating issue?
Are you still getting INTEL_DEBUG=perf output from them?

-> if you're still getting recompile messages, re-check that you have the latest Mesa

-> if there are no perf warnings, check that:
 * your dmesg doesn't have any suspicious warnings
 * "top" output doesn't show things to be CPU limited, with some background CPU / X hog occasionally stalling things for the foreground app

(You need another machine to monitor this when running things at fullscreen.)
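For example, something along these lines from a second machine (hostname and user are placeholders, the intervals just a suggestion):

$ ssh user@laptop 'dmesg -w'       # follow new kernel messages live
$ ssh user@laptop 'top -b -d 1'    # batch-mode top at a 1 second interval, to spot CPU hogs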
(In reply to Hohahiu from comment #19)
> Maybe a wild guess, but what are the temperatures of the GPUs in your
> laptop? Is this an overheating issue?

No, but GOOD NEWS: I've managed to reproduce the same problem with glxgears + RADEON. Here's a self-explanatory screenshot with the Gallium FPS HUD enabled:

https://dl.dropboxusercontent.com/u/64733/Screenshot%20from%202015-01-20%2020%3A25%3A42.png

I'm trying to reproduce the same thing with Intel, but it's V-synced and I can't manage to have it run at more than 59 FPS.

One bizarre thing I've noticed is that no matter the complexity of the game, the fan speed is relatively slow compared to Windows, where any AAA game makes my laptop sound like a jet engine. I'd empirically say it's running at 50%, where 0% is normal operation and 100% is jet-fighter loud.

Hope this helps.
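For reference, the glxgears repro command was roughly the following (the exact HUD options may have differed slightly):

$ DRI_PRIME=1 GALLIUM_HUD=fps glxgears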
(In reply to Eero Tamminen from comment #20)
> Are you still getting INTEL_DEBUG=perf output from them?
>
> -> if you're still getting recompile messages, re-check that you have the latest Mesa
>
> -> if there are no perf warnings, check that:
>  * your dmesg doesn't have any suspicious warnings
>  * "top" output doesn't show things to be CPU limited, with some background
> CPU / X hog occasionally stalling things for the foreground app
>
> (You need another machine to monitor this when running things at fullscreen.)

No perf warnings, and no dmesg warnings whatsoever. Following up on my latest update, here's my dmesg after running glxgears: http://pastebin.com/0zvVCXdB

There are multiple power state switches as I ran glxgears w/PRIME multiple times in a row.
(In reply to Stéphane Travostino from comment #21) > I'm trying to reproduce the same thing with Intel, but it's V-synced and > can't manage to have it run more than 59 FPS. Have you tried 'vblank_mode=0 glxgears'?
Thanks Michel, yes I confirm I can reproduce the same FPS drops with glxgears, Intel and vsync disabled. No FPS HUD for Intel, but I can see the FPS numbers alternate between ~5.6k and ~1.6k every 10 seconds on average.
(In reply to Stéphane Travostino from comment #24) > yes I confirm I can reproduce the same FPS drops with glxgears, Intel and > vsync disabled. So, it seems like something is causing the performance of your system as a whole to degrade at regular intervals. Does top show any additional CPU load while performance is degraded? Or does something like iotop or vmstat show I/O during those times? Does it also affect pure CPU applications, e.g. audio or video encoding / transcoding?
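For example, from another terminal while the slowdown is happening (intervals are only a suggestion):

$ top -d 1        # look for processes spiking while the FPS is low
$ sudo iotop -o   # show only processes actually doing I/O
$ vmstat 1        # watch the bi/bo and wa columns for I/O and iowait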
As root, mount debugfs and monitor the CAGF (actual GPU frequency) value from "/sys/kernel/debug/dri/0/i915_frequency_info", or if you have an older kernel, from "/sys/kernel/debug/dri/0/i915_cur_delayinfo". Does it keep up, or go down when you get an FPS drop?

(If it goes down, this seems like a kernel power management issue.)
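A minimal way to do that (assuming debugfs isn't already mounted; run as root):

# mount -t debugfs none /sys/kernel/debug
# watch -n 0.5 'grep CAGF /sys/kernel/debug/dri/0/i915_frequency_info'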
(In reply to Eero Tamminen from comment #26)
> As root, mount debugfs and monitor the CAGF (actual GPU frequency) value from
> "/sys/kernel/debug/dri/0/i915_frequency_info", or if you have an older kernel,
> from "/sys/kernel/debug/dri/0/i915_cur_delayinfo". Does it keep up, or go
> down when you get an FPS drop?
>
> (If it goes down, this seems like a kernel power management issue.)

Good call! Watching i915_frequency_info while running glxgears on Intel, I see it starts at 1100 MHz and after a few seconds it starts to drop to 650 MHz and back up again, repeatedly. The frequency changes correlate with the FPS changes in glxgears, so yeah, it seems it is a kernel PM issue.

I tried booting with i915.powersave=0 i915.enable_rc6=0, and also tried disabling Runtime PM on the Intel card via powertop, but I can't find a way to force the frequency to the max values to be 100% sure it's related to frequency switching. Any idea where to go from here?
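In case it helps someone reproduce the boot options: assuming GRUB is the bootloader (as on a typical Arch setup), one way is roughly to append the parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the config:

GRUB_CMDLINE_LINUX_DEFAULT="<existing options> i915.powersave=0 i915.enable_rc6=0"
# grub-mkconfig -o /boot/grub/grub.cfg    # then reboot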
Forgot to specify that the min/max frequencies of my Intel card are 650/1200 MHz.
You need to do one more test: if you echo the max freq value into /sys/kernel/debug/dri/0/i915_min_freq, the kernel will not drop the GPU frequency. If that gets rid of the issue, it's a kernel PM issue.

If the CAGF value still goes below the max GPU frequency, you've got issues outside of what Linux can control -> the CPU speed gets limited by HW / firmware, potentially because of temperature issues. You can track the temperature with lm-sensors. If the frequency gets limited when the temperature reaches a certain limit, you need to make sure your CPU is better cooled. Updating the BIOS may also help; I think newer BIOS versions try to keep the fluctuations smaller (drop the freq less, but earlier).
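As a concrete sketch (run as root; 1200 is the max frequency mentioned earlier for this card, and the paths are the ones from this comment):

# cat /sys/kernel/debug/dri/0/i915_max_freq          # confirm the max (1200 here)
# echo 1200 > /sys/kernel/debug/dri/0/i915_min_freq  # pin the minimum to the max
$ watch -n 1 sensors                                 # lm-sensors, to track temperatures alongside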
Solved!

Yes, forcing the min/max i915 freq didn't stop the GPU from dropping to the low frequency by itself, so as you say it's something outside the control of the kernel.

This machine is a Sony Vaio VPCSA series, and has a "sony_laptop" module to control keyboard backlight and... thermal control. By default "/sys/devices/platform/sony-laptop/thermal_control" is set to "balanced"; changing it to "performance" I get:

- Stable FPS on both Intel & Radeon
- Intel CAGF frequency stable at max when running intensive OpenGL operations
- No more FPS drops

Thanks everybody for the help troubleshooting this, marking this as NOTABUG.
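For anyone else hitting this on a similar Vaio, the workaround boils down to something like the following (run as root; the setting may not persist across reboots, so it may need to go into a boot script):

# cat /sys/devices/platform/sony-laptop/thermal_control      # "balanced" by default
# echo performance > /sys/devices/platform/sony-laptop/thermal_control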