Summary:       [SNB+] igt/gem_exec_big takes more than 10 minutes to run
Product:       DRI
Component:     DRM/Intel
Status:        CLOSED FIXED
Severity:      normal
Priority:      lowest
Version:       unspecified
Hardware:      x86 (IA32)
OS:            Linux (All)
Reporter:      lu hua <huax.lu>
Assignee:      Intel GFX Bugs mailing list <intel-gfx-bugs>
QA Contact:    Intel GFX Bugs mailing list <intel-gfx-bugs>
CC:            intel-gfx-bugs
Whiteboard:
i915 platform:
i915 features:
Attachments:
Description (lu hua, 2015-01-12 02:53:16 UTC)
The extended test case takes longer to run because the cmd parser is insane. To put it into perspective, I expect this test to run in under 30s on an i7-3720QM:

$ time sudo ./gem_exec_big
IGT-Version: 1.9-g86a5cb0 (x86_64) (Linux: 3.19.0-rc3+ x86_64)

real    0m29.710s
user    0m1.466s
sys     0m27.988s

And for reference, on a BYT:

$ time sudo ./gem_exec_big
IGT-Version: 1.9-g2da7602 (x86_64) (Linux: 3.19.0-rc4+ x86_64)

real    0m48.276s
user    0m1.910s
sys     0m45.320s

---

Chris, should we just close this then? Or does your kernel under test have patches?

---

I am currently testing two patches to speed up the cmd-parser and relocation processing.

---

Tested on the latest igt and -nightly kernel; it still takes more than 10 minutes:

[root@x-ivb6 tests]# time ./gem_exec_big
IGT-Version: 1.9-g5fb26d1 (x86_64) (Linux: 3.19.0-rc4_drm-intel-nightly_823e71_20150113+ x86_64)
^C
real    21m30.228s
user    0m0.477s
sys     21m18.794s

root@x-byt05:/GFX/Test/Intel_gpu_tools/intel-gpu-tools/tests# time ./gem_exec_big
IGT-Version: 1.9-g5fb26d1 (x86_64) (Linux: 3.19.0-rc4_drm-intel-nightly_823e71_20150113+ x86_64)
^C
real    12m52.055s
user    0m0.203s
sys     12m32.804s

---

It impacts SNB+ platforms.

---

Chris, can you please push out a branch for QA to test? Also please reply to the relevant patches with the improvements; the current commit message on intel-gfx is still lacking that. /me using bz because no direct smtp here

And one for QA: isn't this a regression?

---

(In reply to Daniel Vetter from comment #8)
> Chris, can you please push out a branch for QA to test? Also please reply to
> the relevant patches with the improvements; the current commit message on
> intel-gfx is still lacking that.

Because the dramatic improvement shows up only in an artificial testcase. It's hard to measure any improvement with mesa, as the relocation cost is dwarfed by userspace overheads and we typically skip the relocation write itself.

> And one for QA: isn't this a regression?

Since about 2.6.38? From the introduction of the obj->pages sglist, and whenever you enabled the cmd parser.

---

commit 17cabf571e50677d980e9ab2a43c5f11213003ae
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Jan 14 11:20:57 2015 +0000

    drm/i915: Trim the command parser allocations

---

I tested it on HSW and BDW; it causes a system hang, so I am not sure whether this issue is fixed or not. Keeping it open until bug 89390 is fixed.

---

*** Bug 89390 has been marked as a duplicate of this bug. ***

---

(In reply to Daniel Vetter from comment #10)
> commit 17cabf571e50677d980e9ab2a43c5f11213003ae
> Author: Chris Wilson <chris@chris-wilson.co.uk>
> Date:   Wed Jan 14 11:20:57 2015 +0000
>
>     drm/i915: Trim the command parser allocations

That was only half the patches.

commit 4308378c7f85750a2f5fe0661cee48b35aff50b1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Tue Apr 7 16:20:25 2015 +0100

    drm/i915: Cache last obj->pages location for i915_gem_object_get_page()

---

Tested it on IVB and BYT with the latest kernel (430837); this issue no longer exists. Verified.

Output:
------------------
[root@x-ivb9 tests]# ./gem_exec_big
IGT-Version: 1.10-gd9a25af (x86_64) (Linux: 4.0.0-rc6_kcloud_430837_20150409+ x86_64)
SUCCESS (15.158s)

root@x-byt06:/GFX/Test/Intel_gpu_tools/intel-gpu-tools/tests# ./gem_exec_big
IGT-Version: 1.10-gd9a25af (x86_64) (Linux: 4.0.0-rc6_kcloud_430837_20150409+ x86_64)
SUCCESS (35.117s)

---

(In reply to lu hua from comment #7)
> It impacts SNB+ platforms

Edited the title.

---

Closing old verified.
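Note on the second fix above: the commit title indicates that the last obj->pages lookup position is cached so that i915_gem_object_get_page() does not have to rescan the scatterlist from the start on every call during relocation processing. The fragment below is only a minimal, self-contained sketch of that general caching idea; the names (chunk, page_table, lookup_page) are hypothetical stand-ins and this is not the actual i915 code.

#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a scatterlist: each chunk covers a run of pages. */
struct chunk {
	size_t first_page;	/* index of the first page in this chunk */
	size_t npages;		/* number of pages covered by this chunk */
};

struct page_table {
	const struct chunk *chunks;
	size_t nchunks;
	/* Cache of the last lookup so sequential lookups are amortized O(1). */
	size_t cached_chunk;
};

/* Return the chunk containing page index n, resuming from the cached
 * position when possible instead of rescanning from chunk 0 each call. */
static const struct chunk *lookup_page(struct page_table *pt, size_t n)
{
	size_t i;

	if (pt->nchunks == 0)
		return NULL;

	i = pt->cached_chunk;
	/* Restart from the beginning only when seeking backwards. */
	if (n < pt->chunks[i].first_page)
		i = 0;

	for (; i < pt->nchunks; i++) {
		const struct chunk *c = &pt->chunks[i];
		if (n < c->first_page + c->npages) {
			pt->cached_chunk = i;	/* remember where we stopped */
			return c;
		}
	}
	return NULL;	/* n is out of range */
}

int main(void)
{
	static const struct chunk chunks[] = {
		{ 0, 16 },
		{ 16, 4 },
		{ 20, 32 },
	};
	struct page_table pt = { chunks, 3, 0 };

	/* Mostly-ascending lookups (the common relocation pattern) resume
	 * from the cached chunk instead of walking from the start. */
	assert(lookup_page(&pt, 5) == &chunks[0]);
	assert(lookup_page(&pt, 18) == &chunks[1]);
	assert(lookup_page(&pt, 40) == &chunks[2]);
	return 0;
}

The benefit of this kind of cache relies on lookups arriving in roughly ascending order; a workload that jumps around randomly would still fall back to a linear walk.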