Bug 112067 - [CI][SHARDS] igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists - incomplete - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count))
Summary: [CI][SHARDS] igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execli...
Status: REOPENED
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: DRI git
Hardware: Other All
Importance: medium major
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard:
Keywords:
Duplicates: 112063 112175
Depends on:
Blocks:
 
Reported: 2019-10-18 13:53 UTC by Lakshmi
Modified: 2019-11-11 10:28 UTC
CC List: 1 user

See Also:
i915 platform: BXT, SKL, TGL
i915 features: GEM/Other


Attachments

Description Lakshmi 2019-10-18 13:53:37 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7094/shard-apl4/igt@gem_linear_blits@interruptible.html
<0> [1428.635355] gem_line-6832    2.... 1418097772us : assert_bind_count.part.19: assert_bind_count:581 GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count))
<0> [1428.635386] ---------------------------------
<4> [1428.635799] ---[ end trace 5b9f41cd6d282331 ]---
<3> [1428.646156] BUG: sleeping function called from invalid context at kernel/sched/completion.c:99
<3> [1428.646182] in_atomic(): 0, irqs_disabled(): 0, non_block: 0, pid: 6832, name: gem_linear_blit
<4> [1428.646911] INFO: lockdep is turned off.
<3> [1428.646922] Preemption disabled at:
<4> [1428.646925] [<0000000000000000>] 0x0
<4> [1428.646946] CPU: 2 PID: 6832 Comm: gem_linear_blit Tainted: G     UD           5.4.0-rc2-CI-CI_DRM_7094+ #1
<4> [1428.646966] Hardware name:  /NUC6CAYB, BIOS AYAPLCEL.86A.0049.2018.0508.1356 05/08/2018
<4> [1428.646982] Call Trace:
<4> [1428.647021]  dump_stack+0x67/0x9b
<4> [1428.647034]  ___might_sleep+0x178/0x260
<4> [1428.647049]  wait_for_completion+0x37/0x1a0
<4> [1428.647067]  virt_efi_query_variable_info+0x161/0x1b0
<4> [1428.647084]  efi_query_variable_store+0xb3/0x1a0
<4> [1428.647105]  ? efivar_entry_set_safe+0x19c/0x220
<4> [1428.647144]  ? efi_delete_dummy_variable+0x90/0x90
<4> [1428.647159]  efivar_entry_set_safe+0x19c/0x220
<4> [1428.647177]  ? efi_pstore_write+0x10b/0x150
<4> [1428.647191]  efi_pstore_write+0x10b/0x150
<4> [1428.647214]  pstore_dump+0x127/0x340
<4> [1428.647232]  kmsg_dump+0x87/0x1c0
<4> [1428.647244]  oops_end+0x3e/0x90
<4> [1428.647254]  do_trap+0x80/0x100
<4> [1428.647369]  ? assert_bind_count.part.19+0x45/0x50 [i915]
<4> [1428.647389]  do_invalid_op+0x23/0x30
<4> [1428.647484]  ? assert_bind_count.part.19+0x45/0x50 [i915]
<4> [1428.647500]  invalid_op+0x23/0x30
<4> [1428.647594] RIP: 0010:assert_bind_count.part.19+0x45/0x50 [i915]
<4> [1428.647614] Code: 1a 98 f7 e0 48 8b 35 22 be 1d 00 49 c7 c0 78 85 2c a0 b9 45 02 00 00 48 c7 c2 80 5d 29 a0 48 c7 c7 70 17 1b a0 e8 db 92 fe e0 <0f> 0b 66 0f 1f 84 00 00 00 00 00 48 c7 c1 80 39 2f a0 ba 1b 01 00
<4> [1428.647650] RSP: 0018:ffffc900001cf828 EFLAGS: 00010282
<4> [1428.647663] RAX: 0000000000000018 RBX: ffff88825927bf40 RCX: 0000000000000000
<4> [1428.647678] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff888276bb0558
<4> [1428.647692] RBP: 0000000000000000 R08: 00000000000d106a R09: ffff888254dd0000
<4> [1428.647710] R10: 0000000000000000 R11: ffff888276bb0558 R12: 00000000ee91f000
<4> [1428.647728] R13: ffffc900001cf870 R14: ffff888267d4c480 R15: ffff88825c69ec90
<4> [1428.647838]  ? assert_bind_count.part.19+0x45/0x50 [i915]
<4> [1428.647935]  i915_vma_remove+0x126/0x130 [i915]
<4> [1428.648030]  __i915_vma_unbind.part.39+0xf8/0x460 [i915]
<4> [1428.648126]  i915_gem_evict_for_node+0x4a9/0x580 [i915]
<4> [1428.648225]  i915_gem_gtt_reserve+0xe5/0x260 [i915]
<4> [1428.648322]  i915_gem_gtt_insert+0x1ee/0x5c0 [i915]
<4> [1428.648420]  i915_vma_pin+0xa92/0xfc0 [i915]
<4> [1428.648515]  eb_lookup_vmas+0x323/0x1200 [i915]
<4> [1428.648611]  i915_gem_do_execbuffer+0x607/0x2410 [i915]
<4> [1428.648708]  ? i915_gem_execbuffer2_ioctl+0xc4/0x460 [i915]
<4> [1428.648729]  ? ___slab_alloc.constprop.90+0x780/0x7a0
<4> [1428.648748]  ? __lock_acquire+0x460/0x15d0
<4> [1428.648769]  ? __might_fault+0x39/0x90
<4> [1428.648861]  ? i915_gem_execbuffer_ioctl+0x300/0x300 [i915]
<4> [1428.648953]  i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [1428.649047]  ? i915_gem_execbuffer_ioctl+0x300/0x300 [i915]
<4> [1428.649066]  drm_ioctl_kernel+0xa7/0xf0
<4> [1428.649082]  drm_ioctl+0x2e1/0x390
<4> [1428.649172]  ? i915_gem_execbuffer_ioctl+0x300/0x300 [i915]
<4> [1428.649197]  do_vfs_ioctl+0xa0/0x6f0
<4> [1428.649214]  ? _copy_from_user+0x7a/0xa0
<4> [1428.649226]  ksys_ioctl+0x35/0x60
<4> [1428.649238]  __x64_sys_ioctl+0x11/0x20
<4> [1428.649249]  do_syscall_64+0x4f/0x210
<4> [1428.649261]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [1428.649273] RIP: 0033:0x7f12706005d7
<4> [1428.649284] Code: b3 66 90 48 8b 05 b1 48 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 48 2d 00 f7 d8 64 89 01 48
<4> [1428.649319] RSP: 002b:00007ffca17aff28 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
<4> [1428.649337] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f12706005d7
<4> [1428.649352] RDX: 00007ffca17aff80 RSI: 0000000040406469 RDI: 0000000000000005
<4> [1428.649366] RBP: 00007ffca17aff80 R08: 0000000000000030 R09: 0000000000000019
<4> [1428.649384] R10: 00000000ffffffe7 R11: 0000000000000246 R12: 0000000040406469
<4> [1428.649401] R13: 0000000000000005 R14: 000000000000032b R15: 00007ffca17b0030
Comment 1 CI Bug Log 2019-10-18 13:58:33 UTC
The CI Bug Log issue associated to this bug has been updated.

### New filters associated

* APL: igt@gem_linear_blits@interruptible - incomplete - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count))
  (No new failures associated)

* APL: igt@runner@aborted - fail - Previous test: gem_linear_blits (interruptible)
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7094/shard-apl4/igt@runner@aborted.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7105/shard-apl2/igt@runner@aborted.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7106/shard-apl7/igt@runner@aborted.html
Comment 2 Chris Wilson 2019-10-18 22:00:22 UTC
*** Bug 112063 has been marked as a duplicate of this bug. ***
Comment 3 Chris Wilson 2019-10-30 08:40:40 UTC
*** Bug 112175 has been marked as a duplicate of this bug. ***
Comment 4 Chris Wilson 2019-10-31 15:28:40 UTC
commit dde01d943559f6b853d97a2744433d9ad1b12ace (HEAD -> drm-intel-next-queued, drm-intel/drm-intel-next-queued)
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Oct 30 19:21:49 2019 +0000

    drm/i915: Split detaching and removing the vma
    
    In order to keep the assert_bind_count() valid, we need to hold the vma
    page reference until after we drop the bind count. However, we must also
    keep the drm_mm_remove_node() as the last action of i915_vma_unbind() so
    that it serialises with the unlocked check inside i915_vma_destroy(). So
    we need to split up i915_vma_remove() so that we order the detach, drop
    pages and remove as required during unbind.
    
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112067
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Matthew Auld <matthew.auld@intel.com>
    Reviewed-by: Matthew Auld <matthew.auld@intel.com>
    Link: https://patchwork.freedesktop.org/patch/msgid/20191030192159.18404-1-chris@chris-wilson.co.uk
Comment 5 CI Bug Log 2019-11-05 10:22:46 UTC
A CI Bug Log filter associated to this bug has been updated:

{- APL: igt@gem_linear_blits@interruptible - incomplete - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)) -}
{+ BXT APL: igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists - incomplete/timeout - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)) +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5260/fi-bxt-dsi/igt@i915_selftest@live_execlists.html
Comment 6 Lakshmi 2019-11-05 10:23:38 UTC
(In reply to CI Bug Log from comment #5)
> A CI Bug Log filter associated to this bug has been updated:
> 
> {- APL: igt@gem_linear_blits@interruptible - incomplete -
> GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) <
> atomic_read(&obj->bind_count)) -}
> {+ BXT APL:
> igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists -
> incomplete/timeout - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count)
> < atomic_read(&obj->bind_count)) +}
> 
> New failures caught by the filter:
> 
>   *
> https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5260/fi-bxt-dsi/
> igt@i915_selftest@live_execlists.html

Reopened due to this failure.
Comment 7 CI Bug Log 2019-11-07 09:18:44 UTC
A CI Bug Log filter associated to this bug has been updated:

{- BXT APL: igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists - incomplete/timeout - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)) -}
{+ BXT APL: igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists - incomplete/timeout - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)) +}


  No new failures caught with the new filter
Comment 8 CI Bug Log 2019-11-07 09:20:09 UTC
A CI Bug Log filter associated to this bug has been updated:

{- BXT APL: igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists - incomplete/timeout - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)) -}
{+ BXT APL SKL TGL: igt@gem_linear_blits@interruptible|igt@i915_selftest@live_execlists - incomplete/timeout - GEM_BUG_ON(atomic_read(&obj->mm.pages_pin_count) < atomic_read(&obj->bind_count)) +}


  No new failures caught with the new filter
