Bug 106728 - [CI] igt@kms_frontbuffer_tracking@fbc-suspend - dmesg-warn - Power well 2 on.
Status: CLOSED WORKSFORME
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other
OS: All
Importance: medium normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Depends on:
Blocks:
 
Reported: 2018-05-30 13:27 UTC by Martin Peres
Modified: 2018-11-01 15:25 UTC
CC List: 1 user

See Also:
i915 platform: GLK
i915 features: display/Other


Description Martin Peres 2018-05-30 13:27:09 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4501/shard-glk2/igt@kms_frontbuffer_tracking@fbc-suspend.html

[  487.805280] ------------[ cut here ]------------
[  487.805282] Power well 2 on.
[  487.805351] WARNING: CPU: 1 PID: 1509 at drivers/gpu/drm/i915/intel_runtime_pm.c:455 bxt_enable_dc9+0xf2/0x120 [i915]
[  487.805353] Modules linked in: snd_hda_intel i915 vgem btusb x86_pkg_temp_thermal intel_powerclamp btrtl coretemp btbcm crct10dif_pclmul btintel crc32_pclmul snd_hda_codec_hdmi ghash_clmulni_intel bluetooth snd_hda_codec_realtek snd_hda_codec_generic ecdh_generic r8169 mii i2c_hid snd_hda_codec snd_hwdep snd_hda_core mei_me snd_pcm mei pinctrl_geminilake pinctrl_intel prime_numbers [last unloaded: i915]
[  487.805425] CPU: 1 PID: 1509 Comm: kworker/u8:1 Tainted: G     U            4.17.0-rc7-CI-CI_DRM_4256+ #1
[  487.805427] Hardware name: Intel Corporation NUC7CJYH/NUC7JYB, BIOS JYGLKCPX.86A.0027.2018.0125.1347 01/25/2018
[  487.805433] Workqueue: events_unbound async_run_entry_fn
[  487.805479] RIP: 0010:bxt_enable_dc9+0xf2/0x120 [i915]
[  487.805481] RSP: 0018:ffffc900002e3d88 EFLAGS: 00010286
[  487.805485] RAX: 0000000000000000 RBX: ffff8802694a0000 RCX: 0000000000000001
[  487.805487] RDX: 0000000080000001 RSI: ffffffff821232f9 RDI: 00000000ffffffff
[  487.805490] RBP: 0000000000000000 R08: 000000008bafc4f0 R09: 0000000000000000
[  487.805492] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880275da8008
[  487.805494] R13: ffffffff81506fb0 R14: 0000000000000000 R15: 0000000000000000
[  487.805497] FS:  0000000000000000(0000) GS:ffff88027fc80000(0000) knlGS:0000000000000000
[  487.805499] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  487.805502] CR2: 00007fa778f7f068 CR3: 0000000005210000 CR4: 0000000000340ee0
[  487.805504] Call Trace:
[  487.805549]  i915_drm_suspend_late+0x119/0x120 [i915]
[  487.805557]  dpm_run_callback+0x5d/0x2f0
[  487.805563]  __device_suspend_late+0xad/0x140
[  487.805569]  async_suspend_late+0x15/0x90
[  487.805573]  async_run_entry_fn+0x34/0x160
[  487.805579]  process_one_work+0x229/0x6a0
[  487.805588]  worker_thread+0x35/0x380
[  487.805594]  ? process_one_work+0x6a0/0x6a0
[  487.805598]  kthread+0x119/0x130
[  487.805601]  ? kthread_flush_work_fn+0x10/0x10
[  487.805608]  ret_from_fork+0x3a/0x50
[  487.805619] Code: 25 8d f9 e0 0f 0b e9 69 ff ff ff 80 3d ac fb 1f 00 00 0f 85 79 ff ff ff 48 c7 c7 fb 4a 23 a0 c6 05 98 fb 1f 00 01 e8 fe 8c f9 e0 <0f> 0b e9 5f ff ff ff 80 3d 84 fb 1f 00 00 0f 85 5f ff ff ff 48 
[  487.805741] irq event stamp: 84260
[  487.805745] hardirqs last  enabled at (84259): [<ffffffff810f9337>] vprintk_emit+0x4b7/0x4d0
[  487.805748] hardirqs last disabled at (84260): [<ffffffff81a0111c>] error_entry+0x7c/0x100
[  487.805752] softirqs last  enabled at (84178): [<ffffffff81c0032b>] __do_softirq+0x32b/0x4e1
[  487.805756] softirqs last disabled at (84155): [<ffffffff8108c284>] irq_exit+0xa4/0xb0
[  487.805799] WARNING: CPU: 1 PID: 1509 at drivers/gpu/drm/i915/intel_runtime_pm.c:455 bxt_enable_dc9+0xf2/0x120 [i915]
[  487.805801] ---[ end trace 69732fd816861b52 ]---
Comment 1 Martin Peres 2018-11-01 15:25:27 UTC
Seen twice, 20 runs apart. Last seen in CI_DRM_4260_full (5 months / 2754 runs ago). Closing!

