Bug 111904 - [CI][SHARDS] igt@kms_psr@sprite_render|igt@gem_tiled_blits@normal|igt@gem_mmap_gtt@basic-small-copy - dmesg-warn - WARNING: possible circular locking dependency detected
Summary: [CI][SHARDS] igt@kms_psr@sprite_render|igt@gem_tiled_blits@normal|igt@gem_mm...
Status: RESOLVED FIXED
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: DRI git
Hardware: Other All
Priority: not set
Severity: not set
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-10-04 07:26 UTC by Lakshmi
Modified: 2019-10-04 16:08 UTC
CC: 1 user

See Also:
i915 platform: KBL, SKL
i915 features: GEM/Other


Attachments

Description Lakshmi 2019-10-04 07:26:28 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6994/shard-skl5/igt@kms_psr@sprite_render.html

<4> [159.748743] ======================================================
<4> [159.748753] WARNING: possible circular locking dependency detected
<4> [159.748767] 5.4.0-rc1-CI-CI_DRM_6994+ #1 Tainted: G     U           
<4> [159.748775] ------------------------------------------------------
<4> [159.748787] kms_psr/969 is trying to acquire lock:
<4> [159.748796] ffff88816b0a1698 (&mapping->i_mmap_rwsem){++++}, at: unmap_mapping_pages+0x48/0x130
<4> [159.748835] 
but task is already holding lock:
<4> [159.748846] ffff88816fb193a0 (&vm->mutex){+.+.}, at: i915_vma_unbind+0xa4/0x4a0 [i915]
<4> [159.749258] 
which lock already depends on the new lock.

<4> [159.749266] 
the existing dependency chain (in reverse order) is:
<4> [159.749274] 
-> #3 (&vm->mutex){+.+.}:
<4> [159.749654]        i915_gem_shrinker_taints_mutex+0x6d/0xe0 [i915]
<4> [159.750037]        i915_address_space_init+0x9f/0x160 [i915]
<4> [159.750414]        i915_ggtt_init_hw+0x55/0x170 [i915]
<4> [159.750682]        i915_driver_probe+0xc24/0x15d0 [i915]
<4> [159.750960]        i915_pci_probe+0x43/0x1b0 [i915]
<4> [159.750979]        pci_device_probe+0x9e/0x120
<4> [159.750998]        really_probe+0xea/0x420
<4> [159.751016]        driver_probe_device+0x10b/0x120
<4> [159.751034]        device_driver_attach+0x4a/0x50
<4> [159.751052]        __driver_attach+0x97/0x130
<4> [159.751069]        bus_for_each_dev+0x74/0xc0
<4> [159.751087]        bus_add_driver+0x142/0x220
<4> [159.751104]        driver_register+0x56/0xf0
<4> [159.751121]        do_one_initcall+0x58/0x2ff
<4> [159.751141]        do_init_module+0x56/0x1f8
<4> [159.751159]        load_module+0x243e/0x29f0
<4> [159.751178]        __do_sys_finit_module+0xe9/0x110
<4> [159.751193]        do_syscall_64+0x4f/0x210
<4> [159.751215]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [159.751225] 
-> #2 (&dev->struct_mutex/1){+.+.}:
<4> [159.751254]        __mutex_lock+0x9a/0x9d0
<4> [159.751614]        userptr_mn_invalidate_range_start+0x1aa/0x200 [i915]
<4> [159.751635]        __mmu_notifier_invalidate_range_start+0xa3/0x180
<4> [159.751654]        unmap_vmas+0x143/0x150
<4> [159.751669]        unmap_region+0xa3/0x100
<4> [159.751684]        __do_munmap+0x25d/0x490
<4> [159.751699]        __vm_munmap+0x6e/0xc0
<4> [159.751713]        __x64_sys_munmap+0x12/0x20
<4> [159.751729]        do_syscall_64+0x4f/0x210
<4> [159.751749]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [159.751759] 
-> #1 (mmu_notifier_invalidate_range_start){+.+.}:
<4> [159.751786]        page_mkclean_one+0xda/0x210
<4> [159.751803]        rmap_walk_file+0xff/0x260
<4> [159.751821]        page_mkclean+0x9f/0xb0
<4> [159.751838]        clear_page_dirty_for_io+0xa2/0x300
<4> [159.751856]        mpage_submit_page+0x1a/0x70
<4> [159.751872]        mpage_process_page_bufs+0xe7/0x110
<4> [159.751889]        mpage_prepare_extent_to_map+0x1d2/0x2b0
<4> [159.751906]        ext4_writepages+0x592/0x1230
<4> [159.751921]        do_writepages+0x46/0xe0
<4> [159.751941]        __filemap_fdatawrite_range+0xc6/0x100
<4> [159.751960]        file_write_and_wait_range+0x3c/0x90
<4> [159.751974]        ext4_sync_file+0x154/0x500
<4> [159.751993]        do_fsync+0x33/0x60
<4> [159.752011]        __x64_sys_fsync+0xb/0x10
<4> [159.752025]        do_syscall_64+0x4f/0x210
<4> [159.752045]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [159.752054] 
-> #0 (&mapping->i_mmap_rwsem){++++}:
<4> [159.752084]        __lock_acquire+0x1328/0x15d0
<4> [159.752102]        lock_acquire+0xa7/0x1c0
<4> [159.752119]        down_write+0x33/0x70
<4> [159.752139]        unmap_mapping_pages+0x48/0x130
<4> [159.752528]        i915_vma_revoke_mmap+0x81/0x1b0 [i915]
<4> [159.752900]        fence_update+0xb4/0x260 [i915]
<4> [159.753287]        i915_vma_unbind+0xac/0x4a0 [i915]
<4> [159.753659]        i915_gem_object_ggtt_pin+0xee/0x430 [i915]
<4> [159.754010]        i915_gem_object_pin_to_display_plane+0xd1/0x130 [i915]
<4> [159.754409]        intel_pin_and_fence_fb_obj+0xb3/0x230 [i915]
<4> [159.754758]        intel_plane_pin_fb+0x3c/0xd0 [i915]
<4> [159.755153]        intel_prepare_plane_fb+0x144/0x5d0 [i915]
<4> [159.755177]        drm_atomic_helper_prepare_planes+0x85/0x110
<4> [159.755516]        intel_atomic_commit+0xc6/0x2f0 [i915]
<4> [159.755533]        drm_atomic_helper_set_config+0x61/0x90
<4> [159.755543]        drm_mode_setcrtc+0x18e/0x720
<4> [159.755556]        drm_ioctl_kernel+0xa7/0xf0
<4> [159.755567]        drm_ioctl+0x2e1/0x390
<4> [159.755580]        do_vfs_ioctl+0xa0/0x6f0
<4> [159.755591]        ksys_ioctl+0x35/0x60
<4> [159.755603]        __x64_sys_ioctl+0x11/0x20
<4> [159.755613]        do_syscall_64+0x4f/0x210
<4> [159.755627]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [159.755633] 
other info that might help us debug this:

<4> [159.755639] Chain exists of:
  &mapping->i_mmap_rwsem --> &dev->struct_mutex/1 --> &vm->mutex

<4> [159.755654]  Possible unsafe locking scenario:

<4> [159.755660]        CPU0                    CPU1
<4> [159.755665]        ----                    ----
<4> [159.755669]   lock(&vm->mutex);
<4> [159.755677]                                lock(&dev->struct_mutex/1);
<4> [159.755686]                                lock(&vm->mutex);
<4> [159.755693]   lock(&mapping->i_mmap_rwsem);
<4> [159.755701] 
 *** DEADLOCK ***
Comment 2 Chris Wilson 2019-10-04 10:23:36 UTC
This one, this I can fix!
Comment 3 Chris Wilson 2019-10-04 16:08:35 UTC
commit 2850748ef8763ab46958e43a4d1c445f29eeb37d
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Oct 4 14:39:58 2019 +0100

    drm/i915: Pull i915_vma_pin under the vm->mutex
    
    Replace the struct_mutex requirement for pinning the i915_vma with the
    local vm->mutex instead. Note that the vm->mutex is tainted by the
    shrinker (we require unbinding from inside fs-reclaim) and so we cannot
    allocate while holding that mutex. Instead we have to preallocate
    workers to do the allocation and apply the PTE updates after we have
    reserved their slot in the drm_mm (using fences to order the PTE writes
    with the GPU work and with later unbind).
    
    In adding the asynchronous vma binding, one subtle requirement is to
    avoid coupling the binding fence into the backing object->resv. That is,
    the asynchronous binding only applies to the vma timeline itself and not
    to the pages, as that is a more global timeline (the binding of one vma
    does not need to be ordered with another vma, nor does the implicit GEM
    fencing depend on a vma, only on writes to the backing store). Keeping
    the vma binding distinct from the backing store timelines is verified by
    a number of async gem_exec_fence and gem_exec_schedule tests. The way we
    do this is quite simple: we keep the fence for the vma binding separate,
    only wait on it as required, and never add it to the obj->resv
    itself.
    
    Another consequence of reducing the locking around the vma is that the
    destruction of the vma is no longer globally serialised by struct_mutex.
    A natural solution would be to add a kref to i915_vma, but that requires
    decoupling the reference cycles, possibly by introducing a new
    i915_mm_pages object that is owned by both obj->mm and vma->pages.
    However, we have not taken that route due to the overshadowing lmem/ttm
    discussions, and instead play a series of complicated games with
    trylocks to (hopefully) ensure that only one destruction path is called!
    
    v2: Add some commentary, and some helpers to reduce patch churn.
    
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk

