https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-snb3/igt@perf_pmu@other-read-0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl7/igt@perf_pmu@other-read-0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-glkb6/igt@perf_pmu@other-read-0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-hsw4/igt@perf_pmu@other-read-0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-kbl3/igt@perf_pmu@other-read-0.html

[ 376.994271] ======================================================
[ 376.994277] WARNING: possible circular locking dependency detected
[ 376.994285] 4.14.0-CI-CI_DRM_3372+ #1 Tainted: G U
[ 376.994290] ------------------------------------------------------
[ 376.994296] perf_pmu/1704 is trying to acquire lock:
[ 376.994301]  (&mm->mmap_sem){++++}, at: [<ffffffff811bfe1e>] __might_fault+0x3e/0x90
[ 376.994318] but task is already holding lock:
[ 376.994323]  (&cpuctx_mutex){+.+.}, at: [<ffffffff8116fe8c>] perf_event_ctx_lock_nested+0xbc/0x1d0
[ 376.994337] which lock already depends on the new lock.
[ 376.994344] the existing dependency chain (in reverse order) is:
[ 376.994351] -> #4 (&cpuctx_mutex){+.+.}:
[ 376.994363]        __mutex_lock+0x86/0x9b0
[ 376.994369]        perf_event_init_cpu+0x5a/0x90
[ 376.994376]        perf_event_init+0x178/0x1a4
[ 376.994384]        start_kernel+0x27f/0x3f1
[ 376.994391]        verify_cpu+0x0/0xfb
[ 376.994395] -> #3 (pmus_lock){+.+.}:
[ 376.994405]        __mutex_lock+0x86/0x9b0
[ 376.994412]        perf_event_init_cpu+0x21/0x90
[ 376.994418]        cpuhp_invoke_callback+0xca/0xc00
[ 376.994424]        _cpu_up+0xa7/0x170
[ 376.994429]        do_cpu_up+0x57/0x70
[ 376.994435]        smp_init+0x62/0xa6
[ 376.994441]        kernel_init_freeable+0x97/0x193
[ 376.994448]        kernel_init+0xa/0x100
[ 376.994454]        ret_from_fork+0x27/0x40
[ 376.994458] -> #2 (cpu_hotplug_lock.rw_sem){++++}:
[ 376.994469]        cpus_read_lock+0x39/0xa0
[ 376.994476]        apply_workqueue_attrs+0x12/0x50
[ 376.994483]        __alloc_workqueue_key+0x1d8/0x4d8
[ 376.994548]        i915_gem_init_userptr+0x5f/0x80 [i915]
[ 376.994600]        i915_gem_init+0x7c/0x390 [i915]
[ 376.994645]        i915_driver_load+0x99e/0x15c0 [i915]
[ 376.994691]        i915_pci_probe+0x33/0x90 [i915]
[ 376.994700]        pci_device_probe+0xa1/0x130
[ 376.994707]        driver_probe_device+0x293/0x440
[ 376.994713]        __driver_attach+0xde/0xe0
[ 376.994719]        bus_for_each_dev+0x5c/0x90
[ 376.994726]        bus_add_driver+0x16d/0x260
[ 376.994732]        driver_register+0x57/0xc0
[ 376.994737]        do_one_initcall+0x3e/0x160
[ 376.994744]        do_init_module+0x5b/0x1fa
[ 376.994750]        load_module+0x2374/0x2dc0
[ 376.994756]        SyS_finit_module+0xaa/0xe0
[ 376.994762]        do_syscall_64+0x5e/0x170
[ 376.994768]        return_from_SYSCALL_64+0x0/0x7a
[ 376.994772] -> #1 (&dev->struct_mutex){+.+.}:
[ 376.994784]        __mutex_lock+0x86/0x9b0
[ 376.994833]        i915_mutex_lock_interruptible+0x4c/0x130 [i915]
[ 376.994908]        i915_gem_fault+0x206/0x760 [i915]
[ 376.994920]        __do_fault+0x1a/0x70
[ 376.994927]        __handle_mm_fault+0x9b0/0xdb0
[ 376.994935]        handle_mm_fault+0x154/0x300
[ 376.994943]        __do_page_fault+0x2d6/0x570
[ 376.994951]        page_fault+0x22/0x30
[ 376.994957] -> #0 (&mm->mmap_sem){++++}:
[ 376.994973]        lock_acquire+0xaf/0x200
[ 376.994981]        __might_fault+0x68/0x90
[ 376.994989]        _copy_to_user+0x1e/0x70
[ 376.994997]        perf_read+0x1aa/0x290
[ 376.995005]        __vfs_read+0x23/0x120
[ 376.995012]        vfs_read+0xa3/0x150
[ 376.995019]        SyS_read+0x45/0xb0
[ 376.995027]        entry_SYSCALL_64_fastpath+0x1c/0xb1
[ 376.995033] other info that might help us debug this:
[ 376.995045] Chain exists of: &mm->mmap_sem --> pmus_lock --> &cpuctx_mutex
[ 376.995064] Possible unsafe locking scenario:
[ 376.995072]        CPU0                    CPU1
[ 376.995078]        ----                    ----
[ 376.995084]   lock(&cpuctx_mutex);
[ 376.995092]                                lock(pmus_lock);
[ 376.995104]                                lock(&cpuctx_mutex);
[ 376.995113]   lock(&mm->mmap_sem);
[ 376.995121] *** DEADLOCK ***
[ 376.995131] 1 lock held by perf_pmu/1704:
[ 376.995137] #0:  (&cpuctx_mutex){+.+.}, at: [<ffffffff8116fe8c>] perf_event_ctx_lock_nested+0xbc/0x1d0
[ 376.995155] stack backtrace:
[ 376.995165] CPU: 1 PID: 1704 Comm: perf_pmu Tainted: G U 4.14.0-CI-CI_DRM_3372+ #1
[ 376.995176] Hardware name: /NUC6CAYB, BIOS AYAPLCEL.86A.0040.2017.0619.1722 06/19/2017
[ 376.995187] Call Trace:
[ 376.995197]  dump_stack+0x5f/0x86
[ 376.995207]  print_circular_bug.isra.18+0x1d0/0x2c0
[ 376.995217]  __lock_acquire+0x19c3/0x1b60
[ 376.995227]  ? generic_exec_single+0x77/0xe0
[ 376.995237]  ? lock_acquire+0xaf/0x200
[ 376.995245]  lock_acquire+0xaf/0x200
[ 376.995254]  ? __might_fault+0x3e/0x90
[ 376.995263]  __might_fault+0x68/0x90
[ 376.995271]  ? __might_fault+0x3e/0x90
[ 376.995280]  _copy_to_user+0x1e/0x70
[ 376.995288]  perf_read+0x1aa/0x290
[ 376.995298]  __vfs_read+0x23/0x120
[ 376.995307]  ? entry_SYSCALL_64_fastpath+0x5/0xb1
[ 376.995316]  vfs_read+0xa3/0x150
[ 376.995324]  SyS_read+0x45/0xb0
[ 376.995332]  entry_SYSCALL_64_fastpath+0x1c/0xb1
[ 376.995341] RIP: 0033:0x7f4bf6e9a6d0
[ 376.995347] RSP: 002b:00007fffbff1f938 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ 376.995359] RAX: ffffffffffffffda RBX: ffffc900006bfff0 RCX: 00007f4bf6e9a6d0
[ 376.995368] RDX: 0000000000000010 RSI: 00007fffbff1f940 RDI: 0000000000000005
[ 376.995377] RBP: 0000000000000005 R08: 0000565219788560 R09: 0000000000000000
[ 376.995386] R10: 0000000000000073 R11: 0000000000000246 R12: 0000000000000046
[ 376.995395] R13: 00007fffbff21100 R14: 0000000000000000 R15: 0000000000000000
These also look identical:
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl7/igt@perf_pmu@busy-no-semaphores-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-kbl3/igt@perf_pmu@other-read-0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-hsw2/igt@perf_pmu@busy-no-semaphores-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl4/igt@perf_pmu@busy-no-semaphores-rcs0.html
and these:
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl3/igt@perf_pmu@multi-client-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-glkb6/igt@perf_pmu@multi-client-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl1/igt@perf_pmu@busy-check-all-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl8/igt@perf_pmu@busy-check-all-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl6/igt@perf_pmu@busy-check-all-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl3/igt@perf_pmu@busy-check-all-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl5/igt@perf_pmu@most-busy-check-all-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl6/igt@perf_pmu@multi-client-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-glkb1/igt@perf_pmu@multi-client-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-kbl4/igt@perf_pmu@multi-client-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl3/igt@perf_pmu@multi-client-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl6/igt@perf_pmu@multi-client-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl2/igt@perf_pmu@multi-client-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl4/igt@perf_pmu@render-node-busy-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl6/igt@perf_pmu@all-busy-check-all.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-hsw7/igt@perf_pmu@all-busy-check-all.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-snb5/igt@perf_pmu@all-busy-check-all.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl5/igt@perf_pmu@busy-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl6/igt@perf_pmu@semaphore-wait-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-glkb6/igt@perf_pmu@other-read-1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl5/igt@perf_pmu@other-read-1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl3/igt@perf_pmu@most-busy-check-all-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl4/igt@perf_pmu@render-node-busy-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl6/igt@perf_pmu@busy-no-semaphores-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl7/igt@perf_pmu@idle-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl4/igt@perf_pmu@idle-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-snb2/igt@perf_pmu@idle-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl7/igt@perf_pmu@busy-no-semaphores-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl4/igt@perf_pmu@busy-check-all-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl3/igt@perf_pmu@busy-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl2/igt@perf_pmu@most-busy-check-all-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-snb2/igt@perf_pmu@most-busy-check-all-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl2/igt@perf_pmu@most-busy-check-all-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-glkb2/igt@perf_pmu@most-busy-check-all-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-apl6/igt@perf_pmu@frequency.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl7/igt@perf_pmu@frequency.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl2/igt@perf_pmu@idle-no-semaphores-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-snb3/igt@perf_pmu@idle-no-semaphores-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl1/igt@perf_pmu@idle-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl3/igt@perf_pmu@idle-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-hsw7/igt@perf_pmu@idle-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl6/igt@perf_pmu@other-read-2.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4005/shard-kbl5/igt@perf_pmu@busy-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl6/igt@perf_pmu@idle-no-semaphores-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl3/igt@perf_pmu@idle-no-semaphores-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-glkb3/igt@perf_pmu@idle-no-semaphores-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-hsw1/igt@perf_pmu@idle-no-semaphores-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-glkb1/igt@perf_pmu@busy-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl6/igt@perf_pmu@busy-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl8/igt@perf_pmu@busy-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-hsw2/igt@perf_pmu@busy-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl7/igt@perf_pmu@idle-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl4/igt@perf_pmu@idle-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-hsw6/igt@perf_pmu@idle-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl3/igt@perf_pmu@idle-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-snb1/igt@perf_pmu@idle-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl5/igt@perf_pmu@render-node-busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-apl5/igt@perf_pmu@render-node-busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-glkb4/igt@perf_pmu@render-node-busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-hsw6/igt@perf_pmu@render-node-busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl1/igt@perf_pmu@event-wait-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-kbl7/igt@perf_pmu@most-busy-check-all-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl6/igt@perf_pmu@render-node-busy-bcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl5/igt@perf_pmu@busy-no-semaphores-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl2/igt@perf_pmu@other-read-3.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-glkb5/igt@perf_pmu@other-read-3.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl4/igt@perf_pmu@other-read-3.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-apl5/igt@perf_pmu@busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl5/igt@perf_pmu@busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-hsw3/igt@perf_pmu@busy-vecs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl4/igt@perf_pmu@interrupts.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl6/igt@perf_pmu@idle-no-semaphores-vcs1.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4004/shard-kbl5/igt@perf_pmu@semaphore-wait-vcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-kbl3/igt@perf_pmu@render-node-busy-rcs0.html
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3372/shard-kbl5/igt@perf_pmu@semaphore-wait-rcs0.html
commit ee48700dd57d9ce783ec40f035b324d0b75632e4
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Nov 22 17:26:21 2017 +0000

    drm/i915: Call i915_gem_init_userptr() before taking struct_mutex

    We don't need struct_mutex to initialise userptr (it just allocates
    a workqueue for itself etc), but we do need struct_mutex later on
    in i915_gem_init() in order to feed requests onto the HW.
The fix was integrated into CI_DRM_3373 and the issue is no longer reproduced; closing and archiving.