Summary: Strange slow-fast performance latching between gfxbench3 (and 4) test runs

Product: xorg
Component: Driver/intel
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: RESOLVED MOVED
Severity: normal
Priority: medium
Reporter: Tvrtko Ursulin <tvrtko.ursulin>
Assignee: Chris Wilson <chris>
QA Contact: Intel GFX Bugs mailing list <intel-gfx-bugs>
CC: eero.t.tamminen, mattst88, tvrtko.ursulin
Description — Tvrtko Ursulin, 2017-10-17 17:53:05 UTC
---

Created attachment 134902 [details] [review]
Only use staged uploads for the same batch.

An idea to cut out the flip-flops on staging uploads.

---

(In reply to Chris Wilson from comment #1)
> Created attachment 134902 [details] [review] [review]
> Only use staged uploads for the same batch.
>
> An idea to cut out the flip flops on staging uploads.

No apparent effect in testing with this one.

---

Btw, should I be seeing the "Using a blit copy to avoid stalling on..." messages, since I have INTEL_DEBUG=perf turned on? Or is there some other path without perf_debug that does blitter uploads as well?

---

(In reply to Tvrtko Ursulin from comment #3)
> Btw, should I be seeing "Using a blit copy to avoid stalling on..." messages
> since I have INTEL_DEBUG=perf turned on? Or there is some other path wo/
> perf_debug which does blitter uploads as well?

Yes... And you still see high BCS usage on master? That shouldn't happen for brw_blorp_copy_buffers either, so it is another indication of barking up the wrong tree. Let's see if we can perf_debug() the switch from RCS to BCS.

---

Tvrtko, do you see the same issue also with the offscreen version of the test? Benchmarks shouldn't normally be doing uploads after test startup, unless it's a benchmark for texture upload.

The only thing that I've seen using a lot of blitter during test run-time is the X server, when it copies the non-vsynced frame. This would be most visible when using the Intel DDX with DRI2.

Which X server and X driver are you using: the Intel DDX, or modesetting? If the former, do you use DRI2 or DRI3? (LIBGL_DEBUG=verbose should output whether Mesa uses DRI2 or DRI3.)

---

On that machine I have the Intel DDX with DRI3 turned on, and Mesa confirms it is using DRI3.

Offscreen version of the test does not seem to suffer from this problem.

So I guess user error of some sort?

---

(In reply to Tvrtko Ursulin from comment #6)
> On that machine I have the Intel DDX with DRI 3 turned on, and Mesa confirms
> it is using DRI 3.
>
> Offscreen version of the test does not seem to suffer from this problem.
>
> So I guess user error of some sort?

I think the dual results issue we've discussed is still real.

You could test also with modesetting, to make sure blitter usage really goes away, and if so, whether that makes the performance results more consistent (mostly/partly CPU-bound tests like gl_driver are still going to have at least 5x more variance than GPU-bound tests have).

Even if the cause were the Intel DDX instead of Mesa, it would be quite suspicious for it to randomly use the blitter for frame copies.

---

Can't repro with modesetting. Let's see if I can move the bug to xorg/driver/intel...

---

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/driver/xf86-video-intel/issues/150.
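For reference, the two debug environment variables discussed in the thread can be exercised roughly as in the sketch below. This is a hypothetical reproduction recipe, not part of the original report: it assumes `glxinfo` (from mesa-utils) and a running X session, and `./benchmark` is a placeholder for the actual gfxbench binary.

```shell
#!/bin/sh
# Sketch: check which DRI path Mesa picked, then watch for Mesa's
# "Using a blit copy to avoid stalling ..." perf warnings.
# The Mesa loader prints its DRI2/DRI3 choice on stderr when
# LIBGL_DEBUG=verbose is set; grep filters for that line.
LIBGL_DEBUG=verbose glxinfo 2>&1 | grep -iE 'dri2|dri3' || true

# "./benchmark" is a placeholder binary name; with INTEL_DEBUG=perf,
# Mesa's i965 driver emits perf_debug() messages such as the blit-copy
# warning mentioned in the thread.
INTEL_DEBUG=perf ./benchmark 2>&1 | grep -i 'blit copy' || true
```

Blitter (BCS) engine busyness itself can be observed live with `intel_gpu_top` from intel-gpu-tools while the benchmark runs.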