Summary: | [CI][SHARDS] igt@gem_exec_nop@basic-parallel - incomplete - GEM_BUG_ON(!engine->i915->gt.awake) | | |
---|---|---|---|
Product: | DRI | Reporter: | Martin Peres <martin.peres> |
Component: | DRM/Intel | Assignee: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
Status: | CLOSED FIXED | QA Contact: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
Severity: | normal | | |
Priority: | medium | CC: | intel-gfx-bugs |
Version: | XOrg git | | |
Hardware: | Other | | |
OS: | All | | |
Whiteboard: | ReadyForDev | | |
i915 platform: | SKL | i915 features: | GEM/execlists |
Description
Martin Peres
2018-07-18 11:46:30 UTC
Nope, this is a separate race between the tasklet and idle work.

```
<0>[ 204.710681] kworker/-7 2.... 204771752us : i915_gem_park:
<0>[ 204.710712] kworker/-7 2.... 204771768us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[ 204.710746] kworker/-7 2.... 204771769us : i915_gem_idle_work_handler: active_requests=0 (after switch-to-kernel-context)
<0>[ 204.710785] kworker/-7 2.... 204771771us : execlists_submission_tasklet: rcs0 awake?=1, active=5
<0>[ 204.710822] kworker/-7 2d..1 204771772us : process_csb: rcs0 cs-irq head=5, tail=0
<0>[ 204.710859] kworker/-7 2d..1 204771773us : process_csb: rcs0 csb[0]: status=0x00000018:0x00000000, active=0x5
<0>[ 204.710898] kworker/-7 2d..1 204771773us : process_csb: rcs0 out[0]: ctx=0.1, global=35845 (fence 5:43) (current 35845), prio=-1024
<0>[ 204.710936] kworker/-7 2d..1 204771779us : process_csb: rcs0 completed ctx=0
<0>[ 204.710970] kworker/-7 2.... 204771790us : i915_gem_idle_work_handler:
<0>[ 204.711005] <idle>-0 0..s1 204771804us : execlists_submission_tasklet: rcs0 awake?=1, active=0
<0>[ 204.711042] <idle>-0 0d.s2 204771889us : __execlists_submission_tasklet: __execlists_submission_tasklet:1121 GEM_BUG_ON(!engine->i915->gt.awake)
```

It shouldn't be possible... It requires us to set gt.awake=false while the tasklet is running, but before we do we park the engines and flush the tasklets in the process. We must have kicked off another tasklet_schedule after intel_engines_park. I think...

```
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index db5351e6a3a5..6921406a7250 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1074,7 +1074,7 @@ static void execlists_submission_tasklet(unsigned long data)
 
 	spin_lock_irqsave(&engine->timeline.lock, flags);
 
-	if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
+	if (engine->execlists.active) /* we may be delayed until after idle! */
 		__execlists_submission_tasklet(engine);
 
 	spin_unlock_irqrestore(&engine->timeline.lock, flags);
```

```
commit d78d3343dce7787a5f7fd0b3d522a3510fd26ef9 (HEAD -> drm-intel-next-queued, drm-intel/for-linux-next, drm-intel/drm-intel-next-queued)
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Thu Jul 19 08:50:29 2018 +0100

    drm/i915/execlists: Move the assertion we have the rpm wakeref down

    There's a race between idling the engine and finishing off the last
    tasklet (as we may kick the tasklets after declaring an individual
    engine idle). However, since we do not need to access the device until
    we try to submit to the ELSP register (processing the CSB just requires
    normal CPU access to the HWSP, and when idle we should not need to
    submit!) we can defer the assertion unto that point. The assertion is
    still useful as it does verify that we do hold the longterm GT wakeref
    taken from request allocation until request completion.

    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107274
    Fixes: 9512f985c32d ("drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd)")
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Link: https://patchwork.freedesktop.org/patch/msgid/20180719075029.28643-1-chris@chris-wilson.co.uk
```

Martin, OK to close?

Not seen for a month, closing.
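To make the race easier to follow, here is a minimal userspace sketch of the pattern, assuming nothing beyond what the comments above describe. It is plain C11 with pthreads, not i915 code; the names `gt_awake`, `execlists_active`, `tasklet_racy`, `tasklet_safe` and `idle_worker` are all illustrative stand-ins for `i915->gt.awake`, `engine->execlists.active`, the submission tasklet and the idle work. The point it models is the same as the diff and the commit: asserting the global awake flag at tasklet entry can fire when a tasklet is kicked after the engine has been parked, while checking for outstanding work first and asserting only when the hardware is actually about to be touched cannot.

```c
/*
 * Illustrative userspace model of the tasklet-vs-idle-work race
 * (NOT i915 code; all identifiers here are made up for the sketch).
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool gt_awake = true;       /* models i915->gt.awake */
static atomic_int  execlists_active = 0;  /* models engine->execlists.active */

/* Racy variant: assert the global wakeref flag on entry, before we know
 * whether there is any work at all. A tasklet delayed past idling trips it. */
static void tasklet_racy(void)
{
	assert(atomic_load(&gt_awake));   /* the GEM_BUG_ON analogue */
	if (atomic_load(&execlists_active))
		puts("submit to ELSP");
}

/* Safer variant: bail out when there is nothing to submit, and only assert
 * the wakeref once we genuinely need the device (the ELSP write). */
static void tasklet_safe(void)
{
	if (!atomic_load(&execlists_active))
		return;                   /* delayed past idle: nothing to do */
	assert(atomic_load(&gt_awake));   /* must hold the wakeref to submit */
	puts("submit to ELSP");
}

/* Models the idle work: park the engine (no work left), then let the GT sleep. */
static void *idle_worker(void *arg)
{
	(void)arg;
	atomic_store(&execlists_active, 0);
	atomic_store(&gt_awake, false);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, idle_worker, NULL);
	pthread_join(&t, NULL);

	/* A tasklet kicked by the final interrupt still runs after parking. */
	tasklet_safe();      /* no work outstanding, so it never asserts */
	/* tasklet_racy();      would abort here, like the GEM_BUG_ON above */
	return 0;
}
```

Build with something like `cc -pthread sketch.c` (the file name is arbitrary). The sketch is deterministic rather than a true race, but it reproduces the ordering in the trace: the idle worker drops the awake flag, and a late tasklet then runs with nothing to submit, which is harmless as long as the assertion sits on the actual hardware access rather than on tasklet entry.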