Summary: | [CI][SHARDS]igt@gem_persistent_relocs@forked-interruptible-thrashing|igt@gem_pipe_control_store_loop@reused-buffer - dmesg-warn - WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected | ||
---|---|---|---|
Product: | DRI | Reporter: | Lakshmi <lakshminarayana.vudum> |
Component: | DRM/Intel | Assignee: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
Status: | RESOLVED DUPLICATE | QA Contact: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
Severity: | not set | ||
Priority: | not set | CC: | intel-gfx-bugs |
Version: | DRI git | ||
Hardware: | Other | ||
OS: | All | ||
Whiteboard: | |||
i915 platform: | BYT, GLK | i915 features: | GEM/Other |
Description
Lakshmi
2019-09-23 13:27:36 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5194/shard-glk1/igt@gem_pipe_control_store_loop@reused-buffer.html

```
<6> [2763.705978] Console: switching to colour dummy device 80x25
<6> [2763.706073] [IGT] gem_pipe_control_store_loop: executing
<6> [2763.719176] [IGT] gem_pipe_control_store_loop: starting subtest reused-buffer
<4> [2799.682286]
<4> [2799.682295] =====================================================
<4> [2799.682300] WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
<4> [2799.682305] 5.3.0-CI-CI_DRM_6927+ #1 Tainted: G U
<4> [2799.682309] -----------------------------------------------------
<4> [2799.682313] kworker/u8:15/3290 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
<4> [2799.682318] ffff888273cd9e08 (&(&lock->wait_lock)->rlock){+.+.}, at: __mutex_unlock_slowpath+0x18e/0x2b0
<4> [2799.682331] and this task is already holding:
<4> [2799.682335] ffff88825db5c320 (&(&timelines->lock)->rlock){-...}, at: i915_retire_requests+0x14c/0x2e0 [i915]
<4> [2799.682435] which would create a new lock dependency:
<4> [2799.682438]  (&(&timelines->lock)->rlock){-...} -> (&(&lock->wait_lock)->rlock){+.+.}
<4> [2799.682444] but this new dependency connects a HARDIRQ-irq-safe lock:
<4> [2799.682448]  (&(&timelines->lock)->rlock){-...}
<4> [2799.682449] ... which became HARDIRQ-irq-safe at:
<4> [2799.682459]   lock_acquire+0xa6/0x1c0
<4> [2799.682464]   _raw_spin_lock_irqsave+0x33/0x50
<4> [2799.682527]   intel_timeline_enter+0x64/0x150 [i915]
<4> [2799.682588]   __engine_park+0xa9/0x380 [i915]
<4> [2799.682648]   ____intel_wakeref_put_last+0x1c/0x70 [i915]
<4> [2799.682707]   i915_sample+0x2ed/0x310 [i915]
<4> [2799.682712]   __hrtimer_run_queues+0x11e/0x4b0
<4> [2799.682717]   hrtimer_interrupt+0xea/0x250
<4> [2799.682722]   smp_apic_timer_interrupt+0x96/0x280
<4> [2799.682726]   apic_timer_interrupt+0xf/0x20
<4> [2799.682730]   mutex_spin_on_owner+0x81/0x140
<4> [2799.682733]   __mutex_lock+0x5f9/0x9b0
<4> [2799.682796]   __i915_gem_free_objects+0x7b/0x4b0 [i915]
<4> [2799.682802]   process_one_work+0x245/0x610
<4> [2799.682805]   worker_thread+0x37/0x380
<4> [2799.682810]   kthread+0x119/0x130
<4> [2799.682813]   ret_from_fork+0x24/0x50
<4> [2799.682816] to a HARDIRQ-irq-unsafe lock:
<4> [2799.682820]  (&(&lock->wait_lock)->rlock){+.+.}
<4> [2799.682821] ... which became HARDIRQ-irq-unsafe at:
<4> [2799.682827] ...
<4> [2799.682829]   lock_acquire+0xa6/0x1c0
<4> [2799.682834]   _raw_spin_lock+0x2a/0x40
<4> [2799.682837]   __mutex_lock+0x18a/0x9b0
<4> [2799.682842]   pipe_wait+0x8f/0xc0
<4> [2799.682845]   pipe_read+0x235/0x310
<4> [2799.682849]   new_sync_read+0x106/0x1a0
<4> [2799.682853]   vfs_read+0x9e/0x160
<4> [2799.682856]   ksys_read+0x8f/0xe0
<4> [2799.682860]   do_syscall_64+0x4f/0x210
<4> [2799.682864]   entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [2799.682867] other info that might help us debug this:
<4> [2799.682873]  Possible interrupt unsafe locking scenario:
<4> [2799.682877]        CPU0                    CPU1
<4> [2799.682880]        ----                    ----
<4> [2799.682883]   lock(&(&lock->wait_lock)->rlock);
<4> [2799.682887]                                local_irq_disable();
<4> [2799.682890]                                lock(&(&timelines->lock)->rlock);
<4> [2799.682895]                                lock(&(&lock->wait_lock)->rlock);
<4> [2799.682899]   <Interrupt>
<4> [2799.682901]     lock(&(&timelines->lock)->rlock);
<4> [2799.682905]  *** DEADLOCK ***
```
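The report boils down to two locks: timelines->lock, which is taken with interrupts disabled and also from the perf-sampling hrtimer (so lockdep marks it HARDIRQ-safe), and a mutex's internal wait_lock, which is taken with interrupts enabled elsewhere (e.g. under pipe_read). Below is a minimal, hypothetical C sketch of that pattern, not the actual i915 code; the names timeline_lock, obj_mutex, reader_path, sampling_timer and retire_worker are placeholders standing in for the call sites in the trace.

```c
/*
 * Hypothetical reduction of the pattern lockdep reports above; not the
 * actual i915 code.  "timeline_lock" stands in for timelines->lock and
 * "obj_mutex" for the mutex whose wait_lock appears in the trace.
 */
#include <linux/spinlock.h>
#include <linux/mutex.h>

static DEFINE_SPINLOCK(timeline_lock);	/* becomes HARDIRQ-safe below */
static DEFINE_MUTEX(obj_mutex);		/* its wait_lock stays HARDIRQ-unsafe */

/* Ordinary process context (cf. pipe_read -> __mutex_lock in the trace):
 * obj_mutex's wait_lock is taken with interrupts enabled. */
static void reader_path(void)
{
	mutex_lock(&obj_mutex);
	/* ... */
	mutex_unlock(&obj_mutex);
}

/* Hard-IRQ context (cf. i915_sample -> intel_timeline_enter in the trace):
 * taking timeline_lock here marks it HARDIRQ-safe. */
static void sampling_timer(void)
{
	spin_lock(&timeline_lock);
	/* ... */
	spin_unlock(&timeline_lock);
}

/* Worker context (cf. i915_retire_requests -> __mutex_unlock_slowpath):
 * releasing a contended mutex while holding the irq-safe spinlock adds
 * the timeline_lock -> wait_lock dependency that lockdep flags. */
static void retire_worker(void)
{
	unsigned long flags;

	mutex_lock(&obj_mutex);
	spin_lock_irqsave(&timeline_lock, flags);
	/* ... retire work ... */
	mutex_unlock(&obj_mutex);	/* slowpath may take obj_mutex's wait_lock */
	spin_unlock_irqrestore(&timeline_lock, flags);
}
```

The deadlock scenario at the end of the log follows directly: once an interrupt handler may take timeline_lock, no lock that is ever held with interrupts enabled (here, the mutex wait_lock) can safely be acquired underneath it.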
The CI Bug Log issue associated to this bug has been updated.

### New filters associated

* BYT GLK: igt@gem_persistent_relocs@forked-interruptible-thrashing|igt@gem_pipe_control_store_loop@reused-buffer - dmesg-warn - WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
  - https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_373/fi-byt-j1900/igt@gem_persistent_relocs@forked-interruptible-thrashing.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5194/shard-glk1/igt@gem_pipe_control_store_loop@reused-buffer.html

Same perf lockdep, just delayed until another test completed the loop.

*** This bug has been marked as a duplicate of bug 111626 ***

A CI Bug Log filter associated to this bug has been updated:

{- BYT GLK: igt@gem_persistent_relocs@forked-interruptible-thrashing|igt@gem_pipe_control_store_loop@reused-buffer - dmesg-warn - WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected -}
{+ BYT SKL GLK: igt@gem_persistent_relocs@forked-interruptible-thrashing|igt@gem_pipe_control_store_loop@reused-buffe|igt@i915_pm_rps@waitboost - dmesg-warn - WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5204/shard-skl2/igt@i915_pm_rps@waitboost.html

A CI Bug Log filter associated to this bug has been updated:

{- BYT SKL GLK: igt@gem_persistent_relocs@forked-interruptible-thrashing|igt@gem_pipe_control_store_loop@reused-buffe|igt@i915_pm_rps@waitboost - dmesg-warn - WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected -}
{+ BYT APL SKL GLK: igt@gem_persistent_relocs@forked-interruptible-thrashing|igt@gem_pipe_control_store_loop@reused-buffe|igt@i915_pm_rps@waitboost - dmesg-warn - WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6967/shard-apl7/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive.html