| Summary: | [BAT BSW] lockdep splat in drm_gem_mmap | | |
|---|---|---|---|
| Product: | DRI | Reporter: | Joonas Lahtinen <joonas.lahtinen> |
| Component: | DRM/Intel | Assignee: | Chris Wilson <chris> |
| Status: | CLOSED DUPLICATE | QA Contact: | Intel GFX Bugs mailing list <intel-gfx-bugs> |
| Severity: | normal | | |
| Priority: | medium | CC: | intel-gfx-bugs |
| Version: | unspecified | | |
| Hardware: | x86-64 (AMD64) | | |
| OS: | Linux (All) | | |
Description

Joonas Lahtinen 2016-03-30 11:07:55 UTC

That one should be the kernfs one, which brings back the question of why this is only sporadic in CI. Also seen in gem_mmap_gtt, same chain:

```
[  145.549678] ======================================================
[  145.549739] [ INFO: possible circular locking dependency detected ]
[  145.549803] 4.6.0-rc1-gfxbench+ #1 Tainted: G     U
[  145.549858] -------------------------------------------------------
[  145.549919] gem_mmap_gtt/5996 is trying to acquire lock:
[  145.549971]  (&dev->struct_mutex){+.+.+.}, at: [<ffffffff8151c781>] drm_gem_mmap+0x1a1/0x270
[  145.550077] but task is already holding lock:
[  145.550134]  (&mm->mmap_sem){++++++}, at: [<ffffffff81183204>] vm_mmap_pgoff+0x44/0xa0
[  145.550230] which lock already depends on the new lock.
```

The patch was merged to our local CI topic branch and seems to have been effective for the past two runs (which is still a fairly low confidence level):

```
commit 6954af8b55f3b00b08f7759f479c41388fbe364f
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Thu Mar 31 11:45:06 2016 +0100

    kernfs: Move faulting copy_user operations outside of the mutex
```

Greg K-H will merge the patch upstream for 4.7-rc1. Might as well keep the records straight and close the one mentioned in the patch.

*** This bug has been marked as a duplicate of bug 94350 ***

Closing as duplicate of closed+fixed.