Bug 110690 - [CI][DRMTIP] igt@kms_plane_lowres@pipe-* - dmesg-warn - pipe_off wait timed out
Summary: [CI][DRMTIP] igt@kms_plane_lowres@pipe-* - dmesg-warn - pipe_off wait timed out
Status: CLOSED WORKSFORME
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: DRI git
Hardware: Other
OS: All
Priority: medium
Severity: normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-05-16 13:51 UTC by Lakshmi
Modified: 2019-08-30 10:11 UTC
CC: 2 users

See Also:
i915 platform: ICL
i915 features: display/Other


Attachments

Description Lakshmi 2019-05-16 13:51:15 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_281/fi-icl-u2/igt@kms_plane_lowres@pipe-b-tiling-x.html

https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_279/fi-icl-u2/igt@kms_plane_lowres@pipe-b-tiling-yf.html

<4> [387.629241] ------------[ cut here ]------------
<4> [387.629243] pipe_off wait timed out
<4> [387.629292] WARNING: CPU: 7 PID: 1482 at drivers/gpu/drm/i915/intel_display.c:1081 intel_disable_pipe+0x1c3/0x270 [i915]
<4> [387.629293] Modules linked in: vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic mei_hdcp i915 x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_intel e1000e snd_hda_codec cdc_ether usbnet ptp mii snd_hwdep pps_core snd_hda_core snd_pcm mei_me i2c_i801 mei prime_numbers btusb btrtl btbcm btintel bluetooth ecdh_generic
<4> [387.629310] CPU: 7 PID: 1482 Comm: kms_plane_lowre Tainted: G     U            5.1.0-g018780091ca3-drmtip_279+ #1
<4> [387.629311] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
<4> [387.629345] RIP: 0010:intel_disable_pipe+0x1c3/0x270 [i915]
<4> [387.629346] Code: 6a 00 8d b4 0a 08 00 07 00 31 c9 ba 00 00 00 40 e8 c2 56 f5 ff 85 c0 5a 0f 84 ea fe ff ff 48 c7 c7 af 21 6f c0 e8 5d df ab eb <0f> 0b e9 d7 fe ff ff 65 ff 05 6f 16 a2 3f 48 8b 05 20 15 19 00 e8
<4> [387.629348] RSP: 0018:ffff94a580633aa0 EFLAGS: 00010286
<4> [387.629350] RAX: 0000000000000000 RBX: ffff8dcc80540000 RCX: 0000000000000000
<4> [387.629351] RDX: 0000000000000007 RSI: ffff8dcc9db248e0 RDI: 00000000ffffffff
<4> [387.629352] RBP: 0000000000071008 R08: 00000000fbccf35a R09: 0000000000000000
<4> [387.629353] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8dcc80540d20
<4> [387.629355] R13: ffff8dcc989b67e8 R14: 0000000000000001 R15: ffff8dcc9956d400
<4> [387.629356] FS:  00007fc5a0b9c980(0000) GS:ffff8dcc9ffc0000(0000) knlGS:0000000000000000
<4> [387.629358] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4> [387.629359] CR2: 00007fc5a0bd6000 CR3: 0000000493080005 CR4: 0000000000760ee0
<4> [387.629360] PKRU: 55555554
<4> [387.629361] Call Trace:
<4> [387.629396]  haswell_crtc_disable+0xce/0x120 [i915]
<4> [387.629431]  intel_atomic_commit_tail+0x932/0x1340 [i915]
<4> [387.629470]  intel_atomic_commit+0x240/0x2e0 [i915]
<4> [387.629475]  drm_mode_atomic_ioctl+0x858/0x940
<4> [387.629484]  ? drm_atomic_set_property+0x950/0x950
<4> [387.629486]  drm_ioctl_kernel+0x83/0xf0
<4> [387.629489]  drm_ioctl+0x2f3/0x3b0
<4> [387.629492]  ? drm_atomic_set_property+0x950/0x950
<4> [387.629498]  ? __lock_acquire+0x49f/0x1590
<4> [387.629504]  do_vfs_ioctl+0xa0/0x6e0
<4> [387.629507]  ? __task_pid_nr_ns+0xb9/0x1f0
<4> [387.629511]  ksys_ioctl+0x35/0x60
<4> [387.629514]  __x64_sys_ioctl+0x11/0x20
<4> [387.629516]  do_syscall_64+0x55/0x190
<4> [387.629520]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [387.629522] RIP: 0033:0x7fc5a044c5d7
<4> [387.629523] Code: b3 66 90 48 8b 05 b1 48 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 48 2d 00 f7 d8 64 89 01 48
<4> [387.629525] RSP: 002b:00007ffe05364ed8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
<4> [387.629526] RAX: ffffffffffffffda RBX: 00005615ccf4bc80 RCX: 00007fc5a044c5d7
<4> [387.629528] RDX: 00007ffe05364f30 RSI: 00000000c03864bc RDI: 0000000000000005
<4> [387.629529] RBP: 00007ffe05364f30 R08: 00005615ccf85ec0 R09: 00000000000000d2
<4> [387.629530] R10: 0000000000000001 R11: 0000000000000246 R12: 00000000c03864bc
<4> [387.629532] R13: 0000000000000005 R14: 0000000000000000 R15: 0000000000000400
<4> [387.629538] irq event stamp: 405648
<4> [387.629541] hardirqs last  enabled at (405647): [<ffffffffac128db7>] console_unlock+0x3f7/0x5a0
<4> [387.629542] hardirqs last disabled at (405648): [<ffffffffac0019b0>] trace_hardirqs_off_thunk+0x1a/0x1c
<4> [387.629544] softirqs last  enabled at (404352): [<ffffffffacc0033a>] __do_softirq+0x33a/0x4b9
<4> [387.629547] softirqs last disabled at (404345): [<ffffffffac0b93b9>] irq_exit+0xa9/0xc0
<4> [387.629578] WARNING: CPU: 7 PID: 1482 at drivers/gpu/drm/i915/intel_display.c:1081 intel_disable_pipe+0x1c3/0x270 [i915]
<4> [387.629579] ---[ end trace b9a16d9a10bdc28e ]---
Comment 1 CI Bug Log 2019-05-16 13:56:22 UTC
The CI Bug Log issue associated with this bug has been updated.

### New filters associated

* ICL: igt@kms_plane_lowres@pipe-* - dmesg-warn - pipe_off wait timed out
  (No new failures associated)

* ICL:  igt@runner@aborted -fail -  Previous test: kms_plane_lowres (pipe-b-tiling-yf)
  (No new failures associated)
Comment 2 Jani Saarinen 2019-05-24 09:56:28 UTC
Vandita, can you look at this, or is this a known issue?
Comment 3 Lakshmi 2019-05-28 12:46:55 UTC
The reproduction rate of this failure is 100% on icl-dsi.
Comment 4 Lakshmi 2019-05-28 13:03:29 UTC
(In reply to Lakshmi from comment #3)
> The reproduction rate of this failure is 100% on icl-dsi.

Correction: all the failures listed in the CI bug log under this bug are related to igt@runner@aborted.

This failure has occurred only twice, as mentioned in comment 1, and has not been seen since.
Comment 5 Mika Kahola 2019-05-28 13:06:20 UTC
The test does modesets by switching from a higher resolution to a lower resolution and back. It may be that we switch back and forth between modes too quickly, and some delay may be needed in the test case to let things settle.
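For illustration, here is a minimal sketch of that kind of back-and-forth modeset with a settle delay. This is not the actual IGT test code; it uses the legacy libdrm drmModeSetCrtc() path, and the framebuffer ids and mode structs are assumed to be prepared elsewhere.

/* Minimal sketch (not the actual IGT test) of switching a CRTC between
 * a high-resolution and a low-resolution mode with a settle delay in
 * between, using the legacy libdrm modeset path. */
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* fb_hi/fb_lo are framebuffer ids sized for each mode, and hi/lo the
 * corresponding modes; allocating them (e.g. dumb buffers + drmModeAddFB)
 * is assumed to happen elsewhere. */
static void switch_modes(int fd, uint32_t crtc_id, uint32_t conn_id,
                         uint32_t fb_hi, drmModeModeInfo *hi,
                         uint32_t fb_lo, drmModeModeInfo *lo,
                         unsigned settle_us)
{
    for (int i = 0; i < 10; i++) {
        /* Modeset to the low resolution, then pause so the pipe
         * disable/enable has time to complete. */
        drmModeSetCrtc(fd, crtc_id, fb_lo, 0, 0, &conn_id, 1, lo);
        usleep(settle_us);

        /* ...and back to the high resolution. */
        drmModeSetCrtc(fd, crtc_id, fb_hi, 0, 0, &conn_id, 1, hi);
        usleep(settle_us);
    }
}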
Comment 6 Mika Kahola 2019-05-28 13:17:25 UTC
The dmesg shows that later on we are able to disable the pipe again within the timeout window. I consider the impact for the user to be minimal; maybe it just takes a little longer to disable the pipe.
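For reference, the warning comes from a wait-with-timeout pattern: the driver polls for the pipe to report "off" and warns once the timeout expires, even though the pipe may still turn off shortly afterwards. Below is a userspace-style sketch of that pattern; it is only an illustration, not the actual i915 code (the real check is at intel_display.c:1081 in intel_disable_pipe(), per the trace above), and read_pipe_active() is a hypothetical stand-in for the hardware status read.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for reading the pipe's "active" status bit. */
extern bool read_pipe_active(void);

static long long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Poll until the pipe reports off; warn and give up if it takes longer
 * than timeout_ms, mirroring the "pipe_off wait timed out" splat above. */
static bool wait_for_pipe_off(unsigned timeout_ms)
{
    long long deadline = now_ms() + timeout_ms;

    while (read_pipe_active()) {
        if (now_ms() > deadline) {
            /* Timed out: warn like the dmesg splat and stop waiting;
             * the pipe may still turn off a little later. */
            fprintf(stderr, "pipe_off wait timed out\n");
            return false;
        }
    }
    return true;
}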
Comment 7 Arek Hiler 2019-08-30 10:05:56 UTC
The issue was seen on average twice a week, i.e. about once every 0.5 weeks, and disappeared completely about 7 weeks ago. 0.5 weeks * 10 = 5 weeks, which has now passed, so closing per the 10x reproduction rate rule.
Comment 8 Lakshmi 2019-08-30 10:11:51 UTC
(In reply to Arek Hiler from comment #7)
> The issue was seen on average twice a week, i.e. about once every 0.5
> weeks, and disappeared completely about 7 weeks ago. 0.5 weeks * 10 = 5
> weeks, which has now passed, so closing per the 10x reproduction rate rule.

Archiving the issue.
Comment 9 CI Bug Log 2019-08-30 10:11:55 UTC
The CI Bug Log issue associated with this bug has been archived.

New failures matching the above filters will no longer be associated with this bug.

