| Summary: | [CI][SHARDS] GLK: igt@gem_eio@kms - dmesg-warn - WARNING: possible circular locking dependency detected | | |
|---|---|---|---|
| Product: | DRI | Reporter: | Lakshmi <lakshminarayana.vudum> |
| Component: | DRM/Intel | Assignee: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
| Status: | CLOSED FIXED | QA Contact: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
| Severity: | not set | | |
| Priority: | not set | CC: | intel-gfx-bugs |
| Version: | DRI git | | |
| Hardware: | Other | | |
| OS: | All | | |
| Whiteboard: | | | |
| i915 platform: | GLK | i915 features: | GEM/Other |
Description
Lakshmi, 2019-10-02 13:20:48 UTC
The CI Bug Log issue associated to this bug has been updated.

### New filters associated

* GLK: igt@gem_eio@kms - dmesg-warn - WARNING: possible circular locking dependency detected
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14597/shard-glk2/igt@gem_eio@kms.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6980/shard-glk4/igt@gem_eio@kms.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6981/shard-glk7/igt@gem_eio@kms.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14602/shard-glk5/igt@gem_eio@kms.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6983/shard-glk7/igt@gem_eio@kms.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14611/shard-glk9/igt@gem_eio@kms.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14615/shard-glk9/igt@gem_eio@kms.html

The lock inside i915_active_wait() is due for elimination, which will break this cycle. But it is not yet clear whether we end up replacing it with another cycle instead.

commit b1e3177bd1d8f41e2a9cc847e56a96cdc0eefe62
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Fri Oct 4 14:40:00 2019 +0100

drm/i915: Coordinate i915_active with its own mutex

Forgo the struct_mutex serialisation for i915_active, and interpose its own mutex handling for active/retire. This is a multi-layered sleight-of-hand. First, we had to ensure that no active/retire callbacks accidentally inverted the mutex ordering rules, nor assumed that they were themselves serialised by struct_mutex. More challenging though, is the rule over updating elements of the active rbtree. Instead of the whole i915_active now being serialised by struct_mutex, allocations/rotations of the tree are serialised by the i915_active.mutex and individual nodes are serialised by the caller using the i915_timeline.mutex (we need to use nested spinlocks to interact with the dma_fence callback lists).

The pain point here is that instead of a single mutex around execbuf, we now have to take a mutex for the active tracker (one for each vma, context, etc.) and a couple of spinlocks for each fence update. The improvement in fine-grained locking, allowing for multiple concurrent clients (eventually!), should be worth it in typical loads.

v2: Add some comments that barely elucidate anything :(

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk

This issue occurred three times in 4 runs and was last seen in CI_DRM_6983_full (3 weeks old); the current run is 7155. Closing and archiving this bug.

The CI Bug Log issue associated to this bug has been archived. New failures matching the above filters will no longer be associated to this bug.