Bug 111928

Summary: [CI][SHARDS] igt@kms_cursor_legacy@basic-flip-after-cursor-atomic - dmesg-warn - WARNING: possible circular locking dependency detected
Product: DRI
Reporter: Lakshmi <lakshminarayana.vudum>
Component: DRM/Intel
Assignee: Intel GFX Bugs mailing list <intel-gfx-bugs>
Status: RESOLVED FIXED
QA Contact: Intel GFX Bugs mailing list <intel-gfx-bugs>
Severity: not set
Priority: not set
CC: intel-gfx-bugs
Version: DRI git
Hardware: Other
OS: All
Whiteboard:
i915 platform: ICL, TGL
i915 features:

Description Lakshmi 2019-10-08 16:57:48 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7001/shard-iclb1/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html

<4> [26.206212] ======================================================
<4> [26.206213] WARNING: possible circular locking dependency detected
<4> [26.206215] 5.4.0-rc1-CI-CI_DRM_7001+ #1 Tainted: G     U           
<4> [26.206216] ------------------------------------------------------
<4> [26.206217] kms_cursor_lega/1203 is trying to acquire lock:
<4> [26.206218] ffff88849038f958 (&mapping->i_mmap_rwsem){++++}, at: unmap_mapping_pages+0x48/0x130
<4> [26.206224] 
but task is already holding lock:
<4> [26.206226] ffff8884859093a0 (&vm->mutex){+.+.}, at: i915_vma_unbind+0xe6/0x4a0 [i915]
<4> [26.206272] 
which lock already depends on the new lock.

<4> [26.206273] 
the existing dependency chain (in reverse order) is:
<4> [26.206274] 
-> #3 (&vm->mutex){+.+.}:
<4> [26.206316]        i915_gem_shrinker_taints_mutex+0x6d/0xe0 [i915]
<4> [26.206358]        i915_address_space_init+0x9f/0x160 [i915]
<4> [26.206400]        i915_ggtt_init_hw+0x55/0x170 [i915]
<4> [26.206433]        i915_driver_probe+0xc24/0x15d0 [i915]
<4> [26.206466]        i915_pci_probe+0x43/0x1b0 [i915]
<4> [26.206468]        pci_device_probe+0x9e/0x120
<4> [26.206471]        really_probe+0xea/0x420
<4> [26.206472]        driver_probe_device+0x10b/0x120
<4> [26.206474]        device_driver_attach+0x4a/0x50
<4> [26.206476]        __driver_attach+0x97/0x130
<4> [26.206478]        bus_for_each_dev+0x74/0xc0
<4> [26.206479]        bus_add_driver+0x142/0x220
<4> [26.206481]        driver_register+0x56/0xf0
<4> [26.206483]        do_one_initcall+0x58/0x2ff
<4> [26.206486]        do_init_module+0x56/0x1f8
<4> [26.206488]        load_module+0x243e/0x29f0
<4> [26.206489]        __do_sys_finit_module+0xe9/0x110
<4> [26.206491]        do_syscall_64+0x4f/0x210
<4> [26.206494]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [26.206495] 
-> #2 (&dev->struct_mutex/1){+.+.}:
<4> [26.206498]        __mutex_lock+0x9a/0x9d0
<4> [26.206538]        userptr_mn_invalidate_range_start+0x1aa/0x200 [i915]
<4> [26.206540]        __mmu_notifier_invalidate_range_start+0xa3/0x180
<4> [26.206542]        unmap_vmas+0x143/0x150
<4> [26.206544]        unmap_region+0xa3/0x100
<4> [26.206546]        __do_munmap+0x25d/0x490
<4> [26.206547]        __vm_munmap+0x6e/0xc0
<4> [26.206548]        __x64_sys_munmap+0x12/0x20
<4> [26.206550]        do_syscall_64+0x4f/0x210
<4> [26.206552]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [26.206553] 
-> #1 (mmu_notifier_invalidate_range_start){+.+.}:
<4> [26.206556]        page_mkclean_one+0xda/0x210
<4> [26.206557]        rmap_walk_file+0xff/0x260
<4> [26.206559]        page_mkclean+0x9f/0xb0
<4> [26.206561]        clear_page_dirty_for_io+0xa2/0x300
<4> [26.206564]        mpage_submit_page+0x1a/0x70
<4> [26.206565]        mpage_process_page_bufs+0xe7/0x110
<4> [26.206567]        mpage_prepare_extent_to_map+0x1d2/0x2b0
<4> [26.206569]        ext4_writepages+0x592/0x1230
<4> [26.206570]        do_writepages+0x46/0xe0
<4> [26.206572]        __filemap_fdatawrite_range+0xc6/0x100
<4> [26.206574]        file_write_and_wait_range+0x3c/0x90
<4> [26.206576]        ext4_sync_file+0x154/0x500
<4> [26.206578]        do_fsync+0x33/0x60
<4> [26.206580]        __x64_sys_fsync+0xb/0x10
<4> [26.206581]        do_syscall_64+0x4f/0x210
<4> [26.206583]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [26.206584] 
-> #0 (&mapping->i_mmap_rwsem){++++}:
<4> [26.206587]        __lock_acquire+0x1328/0x15d0
<4> [26.206589]        lock_acquire+0xa7/0x1c0
<4> [26.206591]        down_write+0x33/0x70
<4> [26.206592]        unmap_mapping_pages+0x48/0x130
<4> [26.206635]        i915_vma_revoke_mmap+0x81/0x1b0 [i915]
<4> [26.206677]        i915_vma_unbind+0xee/0x4a0 [i915]
<4> [26.206717]        i915_gem_object_ggtt_pin+0xee/0x430 [i915]
<4> [26.206756]        i915_gem_object_pin_to_display_plane+0xd1/0x130 [i915]
<4> [26.206799]        intel_pin_and_fence_fb_obj+0xb3/0x230 [i915]
<4> [26.206842]        intel_plane_pin_fb+0x3c/0xd0 [i915]
<4> [26.206885]        intel_prepare_plane_fb+0x144/0x5d0 [i915]
<4> [26.206888]        drm_atomic_helper_prepare_planes+0x85/0x110
<4> [26.206930]        intel_atomic_commit+0xc6/0x2f0 [i915]
<4> [26.206932]        drm_mode_atomic_ioctl+0x847/0x930
<4> [26.206935]        drm_ioctl_kernel+0xa7/0xf0
<4> [26.206936]        drm_ioctl+0x2e1/0x390
<4> [26.206938]        do_vfs_ioctl+0xa0/0x6f0
<4> [26.206940]        ksys_ioctl+0x35/0x60
<4> [26.206942]        __x64_sys_ioctl+0x11/0x20
<4> [26.206943]        do_syscall_64+0x4f/0x210
<4> [26.206945]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [26.206946] 
other info that might help us debug this:

<4> [26.206947] Chain exists of:
  &mapping->i_mmap_rwsem --> &dev->struct_mutex/1 --> &vm->mutex

<4> [26.206950]  Possible unsafe locking scenario:

<4> [26.206950]        CPU0                    CPU1
<4> [26.206951]        ----                    ----
<4> [26.206952]   lock(&vm->mutex);
<4> [26.206954]                                lock(&dev->struct_mutex/1);
<4> [26.206955]                                lock(&vm->mutex);
<4> [26.206956]   lock(&mapping->i_mmap_rwsem);
<4> [26.206958] 
 *** DEADLOCK ***
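
For readers unfamiliar with lockdep's report format: the "chain" above records, for each lock, which other locks have historically been acquired while it was held; the warning fires when a new acquisition would close a cycle in that graph. The following is a minimal, hypothetical Python sketch of that idea (the real validator lives in kernel/locking/lockdep.c and is far more involved), replaying the three-lock chain from this report:

```python
def has_path(graph, src, dst, seen=None):
    """Depth-first search: is dst reachable from src in the dependency graph?"""
    if seen is None:
        seen = set()
    if src == dst:
        return True
    seen.add(src)
    return any(has_path(graph, nxt, dst, seen)
               for nxt in graph.get(src, ()) if nxt not in seen)

def acquire(graph, held, new):
    """Record that 'new' was acquired while 'held' was held.

    If a path new -> ... -> held already exists, adding the edge
    held -> new would create a cycle: report it instead of recording.
    """
    if has_path(graph, new, held):
        return f"possible circular locking dependency: {held} -> {new}"
    graph.setdefault(held, set()).add(new)
    return None

# Replay the dependency chain from the splat above (simplified names):
deps = {}
acquire(deps, "i_mmap_rwsem", "dev->struct_mutex/1")   # page_mkclean path (#1/#2)
acquire(deps, "dev->struct_mutex/1", "vm->mutex")      # driver probe path (#3)
# New acquisition (#0): i915_vma_unbind holds vm->mutex, wants i_mmap_rwsem.
print(acquire(deps, "vm->mutex", "i_mmap_rwsem"))
# -> possible circular locking dependency: vm->mutex -> i_mmap_rwsem
```

Note that lockdep only needs each ordering to have happened once, on any task, for the cycle to be flagged; no actual deadlock has to occur.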
Comment 2 Chris Wilson 2019-10-09 22:34:14 UTC
Fixed dup.
Comment 3 CI Bug Log 2019-10-16 10:26:28 UTC
The CI Bug Log issue associated to this bug has been archived.

New failures matching the above filters will not be associated to this bug anymore.
