Bug 111085 - [CI][SHARDS] igt@gem_userptr_blits@readonly-unsync - fail - WARNING: CPU: 2 PID: 1202 at mm/filemap.c:220 unaccount_page_cache_page
Summary: [CI][SHARDS] igt@gem_userptr_blits@readonly-unsync - fail - WARNING: CPU: 2 PID: 1202 at mm/filemap.c:220 unaccount_page_cache_page
Status: RESOLVED WORKSFORME
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other
OS: All
Importance: high normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Depends on:
Blocks:
 
Reported: 2019-07-08 09:12 UTC by Martin Peres
Modified: 2019-07-12 12:39 UTC
1 user

See Also:
i915 platform: KBL
i915 features: GEM/Other


Description Martin Peres 2019-07-08 09:12:16 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6413/shard-kbl2/igt@runner@aborted.html

<7>[  104.673799] [IGT] gem_userptr_blits: executing
<7>[  104.676611] [IGT] gem_userptr_blits: starting subtest readonly-unsync
<4>[  110.333429] WARNING: CPU: 2 PID: 1202 at mm/filemap.c:220 unaccount_page_cache_page+0x1f3/0x280
<4>[  110.333450] Modules linked in: vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic i915 mei_hdcp x86_pkg_temp_thermal coretemp crct10dif_pclmul btusb btrtl btbcm btintel crc32_pclmul bluetooth snd_hda_intel snd_hda_codec ghash_clmulni_intel snd_hwdep snd_hda_core e1000e ecdh_generic ecc snd_pcm ptp pps_core mei_me mei prime_numbers
<4>[  110.333465] CPU: 2 PID: 1202 Comm: gem_userptr_bli Tainted: G     U            5.2.0-rc7-CI-CI_DRM_6413+ #1
<4>[  110.333466] Hardware name:  /NUC7i5BNB, BIOS BNKBL357.86A.0054.2017.1025.1822 10/25/2017
<4>[  110.333468] RIP: 0010:unaccount_page_cache_page+0x1f3/0x280
<4>[  110.333470] Code: a9 00 00 01 00 0f 84 eb fe ff ff be 15 00 00 00 48 89 df e8 ff 65 02 00 e9 d9 fe ff ff 48 89 f7 e8 32 42 09 00 e9 67 fe ff ff <0f> 0b 8b 05 d9 54 18 01 4c 8b 65 00 85 c0 75 2e 49 8b 94 24 80 01
<4>[  110.333471] RSP: 0018:ffffc90000ab7b50 EFLAGS: 00010002
<4>[  110.333472] RAX: 800000000008003f RBX: ffffea0008ea7040 RCX: 0000000000000002
<4>[  110.333474] RDX: 0000000000000000 RSI: 0000000000000014 RDI: ffffffff8213b24e
<4>[  110.333475] RBP: ffff88826bd0b008 R08: 0000000000000000 R09: 0000000000000001
<4>[  110.333476] R10: ffffc90000ab7b48 R11: 0000000000000000 R12: ffffffffffffffff
<4>[  110.333477] R13: 0000000000000000 R14: ffffea0008ea7040 R15: 0000000000000000
<4>[  110.333478] FS:  00007f5afc414e40(0000) GS:ffff888276b00000(0000) knlGS:0000000000000000
<4>[  110.333480] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[  110.333481] CR2: 00007f5afc02991c CR3: 0000000270362005 CR4: 00000000003606e0
<4>[  110.333482] Call Trace:
<4>[  110.333485]  __delete_from_page_cache+0x4b/0x250
<4>[  110.333489]  delete_from_page_cache+0x40/0x70
<4>[  110.333492]  truncate_inode_page+0x1d/0x30
<4>[  110.333495]  shmem_undo_range+0x1d5/0x890
<4>[  110.333507]  shmem_truncate_range+0x11/0x30
<4>[  110.333509]  shmem_evict_inode+0xe8/0x260
<4>[  110.333512]  ? evict+0xb5/0x190
<4>[  110.333516]  ? dput+0x20/0x2c0
<4>[  110.333518]  evict+0xcb/0x190
<4>[  110.333521]  __dentry_kill+0xca/0x190
<4>[  110.333523]  dentry_kill+0x4b/0x1b0
<4>[  110.333526]  ? dput+0x20/0x2c0
<4>[  110.333527]  dput+0x262/0x2c0
<4>[  110.333530]  __fput+0x102/0x220
<4>[  110.333534]  task_work_run+0x82/0xb0
<4>[  110.333538]  exit_to_usermode_loop+0x93/0xa0
<4>[  110.333540]  do_syscall_64+0x174/0x1c0
<4>[  110.333543]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  110.333544] RIP: 0033:0x7f5afb17fab7
<4>[  110.333546] Code: 10 e9 67 ff ff ff 0f 1f 44 00 00 48 8b 15 c9 f3 2c 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff e9 6b ff ff ff b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a1 f3 2c 00 f7 d8 64 89 01 48
<4>[  110.333547] RSP: 002b:00007ffdf1311508 EFLAGS: 00000217 ORIG_RAX: 000000000000000b
<4>[  110.333549] RAX: 0000000000000000 RBX: 00007f5afc43a000 RCX: 00007f5afb17fab7
<4>[  110.333550] RDX: 0000000000000000 RSI: 0000000000010000 RDI: 00007f5afc43a000
<4>[  110.333551] RBP: 00007ffdf1312c50 R08: 000056247666c5c0 R09: 00007ffdf13116b8
<4>[  110.333552] R10: 0000000000000000 R11: 0000000000000217 R12: 000000007fff0004
<4>[  110.333553] R13: 0000000080000000 R14: 00007f5a6f9ec000 R15: 0000000000000005
<4>[  110.333559] irq event stamp: 7746554
<4>[  110.333561] hardirqs last  enabled at (7746553): [<ffffffff819a90b4>] _raw_spin_unlock_irq+0x24/0x50
<4>[  110.333563] hardirqs last disabled at (7746554): [<ffffffff819a8eed>] _raw_spin_lock_irqsave+0xd/0x50
<4>[  110.333565] softirqs last  enabled at (7743430): [<ffffffff81c0033a>] __do_softirq+0x33a/0x4b9
<4>[  110.333567] softirqs last disabled at (7743423): [<ffffffff810b6499>] irq_exit+0xa9/0xc0
<4>[  110.333569] WARNING: CPU: 2 PID: 1202 at mm/filemap.c:220 unaccount_page_cache_page+0x1f3/0x280
<4>[  110.333570] ---[ end trace 3a807382323769d5 ]---
Comment 1 CI Bug Log 2019-07-08 09:13:55 UTC
The CI Bug Log issue associated with this bug has been updated.

### New filters associated

* KBL: igt@runner@aborted - fail - Previous test: gem_userptr_blits (readonly-unsync)
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_3240/shard-kbl3/igt@runner@aborted.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6413/shard-kbl2/igt@runner@aborted.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13508/shard-kbl7/igt@runner@aborted.html
Comment 2 Chris Wilson 2019-07-08 09:32:02 UTC
The complaint is that it saw a dirty page... But all dirt should have been flushed by the invalidate/truncate itself.

One possibility is that the rcu fput() is racing against shmemfs activity... Except we fput after we tear down the pages on freeing the object -- there should not be a race!
Comment 3 Chris Wilson 2019-07-08 09:37:51 UTC
So I wonder if this is our race at all.
Comment 4 Chris Wilson 2019-07-12 12:39:24 UTC
I haven't seen this since v5.2, so I am assuming it wasn't ours.

