Bug 108864 - [CI][SHARDS] igt@pm_rpm@* - incomplete on shard-iclb6
Summary: [CI][SHARDS] igt@pm_rpm@* - incomplete on shard-iclb6
Status: CLOSED DUPLICATE of bug 108840
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other
OS: All
Priority: high
Severity: normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Depends on:
Blocks:
 
Reported: 2018-11-26 11:37 UTC by Martin Peres
Modified: 2018-12-28 08:40 UTC
CC List: 1 user

See Also:
i915 platform: ICL
i915 features: power/runtime PM


Attachments

Description Martin Peres 2018-11-26 11:37:09 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4726/shard-iclb6/igt@pm_rpm@pm-caching.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5193/shard-iclb6/igt@pm_rpm@gem-mmap-gtt.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5192/shard-iclb6/igt@pm_rpm@gem-evict-pwrite.html

https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4725/shard-iclb6/igt@pm_rpm@universal-planes.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5155/shard-iclb6/igt@pm_rpm@reg-read-ioctl.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5144/shard-iclb6/igt@pm_rpm@dpms-mode-unset-lpsp.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5164/shard-iclb6/igt@pm_rpm@pc8-residency.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5164/shard-iclb6/igt@pm_rpm@gem-pread.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5177/shard-iclb6/igt@pm_rpm@debugfs-forcewake-user.html

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5180/shard-iclb6/igt@pm_rpm@gem-execbuf-stress-pc8.html

No obvious logs for the runs above. The runs below did capture kernel output:


https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5198/shard-iclb6/igt@pm_rpm@system-suspend-devices.html

<1> [1197.862835] BUG: unable to handle kernel paging request at ffffffff00000000
<6> [1197.862847] PGD 5212067 P4D 5212067 PUD 0 
<4> [1197.862860] Oops: 0010 [#1] PREEMPT SMP PTI
<4> [1197.862871] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G     U            4.20.0-rc3-CI-CI_DRM_5198+ #1
<4> [1197.862877] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.2402.AD3.1810170014 10/17/2018


https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4726/shard-iclb6/igt@pm_rpm@fences-dpms.html

<0>[ 1312.708347] kworker/-3074    5.... 1311509695us : i915_gem_idle_work_handler: active_requests=1 (after switch-to-kernel-context)
<0>[ 1312.708375] kworker/-3074    5.... 1311509814us : i915_request_retire: rcs0 fence 5:73, global=1, current 1
<0>[ 1312.708403] kworker/-3074    5.... 1311509815us : i915_request_retire: marking (null) as inactive
<0>[ 1312.708431] kworker/-3074    5.... 1311509816us : i915_request_retire: __retire_engine_request(rcs0) fence 5:73, global=1, current 1
<0>[ 1312.708460] kworker/-3074    5.... 1311509828us : i915_gem_park: 
<0>[ 1312.708490] ksoftirq-9       0..s. 1311509872us : execlists_submission_tasklet: rcs0 awake?=1, active=1
<0>[ 1312.708520] ksoftirq-9       0d.s1 1311509874us : process_csb: rcs0 cs-irq head=3, tail=5
<0>[ 1312.708549] ksoftirq-9       0d.s1 1311509875us : process_csb: rcs0 csb[4]: status=0x10000001:0x00000000, active=0x1
<0>[ 1312.708578] ksoftirq-9       0d.s1 1311509876us : process_csb: rcs0 csb[5]: status=0x10000018:0x00000000, active=0x5
<0>[ 1312.708606] kworker/-3092    1.... 1311509878us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[ 1312.708636] ksoftirq-9       0d.s1 1311509878us : process_csb: rcs0 out[0]: ctx=0.1, global=1 (fence 5:73) (current 1), prio=-4094
<0>[ 1312.708665] kworker/-3092    1.... 1311509879us : i915_gem_idle_work_handler: active_requests=0 (after switch-to-kernel-context)
<0>[ 1312.708695] ksoftirq-9       0d.s1 1311509889us : process_csb: rcs0 completed ctx=0
<0>[ 1312.708723] kworker/-3092    1.... 1311509926us : __i915_gem_park: 
<0>[ 1312.708725] ---------------------------------
<4>[ 1312.708728] CR2: ffffffff00000000
<4>[ 1312.708731] ---[ end trace ecc673570091382d ]---


https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5153/shard-iclb6/igt@pm_rpm@sysfs-read.html

<4>[ 1147.152115]  ? process_csb+0x790/0x790 [i915]
<4>[ 1147.152116]  ? process_csb+0x790/0x790 [i915]
<4>[ 1147.152116]  ? module_assert_mutex_or_preemp
<4>[ 1147.152118] Lost 31 message(s)!
<4>[ 1147.152122] 
<4>[ 1147.152379] =============================
<4>[ 1147.152382] WARNING: suspicious RCU usage
<4>[ 1147.152385] 4.20.0-rc2-CI-CI_DRM_5153+ #1 Tainted: G     U  W        
<4>[ 1147.152388] -----------------------------
<4>[ 1147.152391] ./include/linux/rcupdate.h:609 rcu_read_lock() used illegally while idle!
<4>[ 1147.152394] 
<4>[ 1147.152394] other info that might help us debug this:
<4>[ 1147.152394] 
<4>[ 1147.152398] 
<4>[ 1147.152398] RCU used illegally from idle CPU!
<4>[ 1147.152398] rcu_scheduler_active = 2, debug_locks = 1
<4>[ 1147.152402] RCU used illegally from extended quiescent state!
<4>[ 1147.152406] 1 lock held by swapper/0/0:
<4>[ 1147.152408]  #0: 00000000dccdf396 (rcu_read_lock){....}, at: kmsg_dump+0x12/0x1c0
<4>[ 1147.152415] 
<4>[ 1147.152415] stack backtrace:
<4>[ 1147.152419] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G     U  W         4.20.0-rc2-CI-CI_DRM_5153+ #1
<4>[ 1147.152422] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.2402.AD3.1810170014 10/17/2018
<4>[ 1147.152425] Call Trace:
<4>[ 1147.152430]  dump_stack+0x67/0x9b
<4>[ 1147.152435]  kmsg_dump+0x180/0x1c0
<4>[ 1147.152440]  panic+0x13a/0x24d
<4>[ 1147.152446]  ? acpi_idle_enter+0x2a6/0x2b0
<4>[ 1147.152454]  __stack_chk_fail+0x10/0x10
<4>[ 1147.152458]  acpi_idle_enter+0x2a6/0x2b0
<4>[ 1147.152465]  cpuidle_enter_state+0x6a/0x340
<4>[ 1147.152472]  do_idle+0x1f3/0x260
<4>[ 1147.152478]  cpu_startup_entry+0x14/0x20
<4>[ 1147.152483]  start_kernel+0x4a9/0x4c9
<4>[ 1147.152490]  secondary_startup_64+0xa4/0xb0
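
For context on the splat above: "rcu_read_lock() used illegally while idle" fires when an RCU read-side critical section is entered after the CPU has already told RCU it is idle (here, kmsg_dump() running from the panic path while the CPU sat in acpi_idle_enter(), per the backtrace). A minimal, hypothetical sketch of the pattern this check flags and of the usual RCU_NONIDLE() idiom for idle-time RCU access follows; the function below is illustrative only and is not taken from the i915 or printk code paths involved in this bug.

/* Hypothetical illustration only -- not the code path from the log above. */
#include <linux/rcupdate.h>

static void example_idle_time_access(void)
{
	/*
	 * Entering an RCU read-side critical section while this CPU is in
	 * its idle extended quiescent state: with CONFIG_PROVE_RCU this
	 * triggers the "RCU used illegally from idle CPU!" warning seen
	 * in the log above, because RCU is not watching the CPU and the
	 * read lock provides no protection.
	 */
	rcu_read_lock();
	/* ... dereference an RCU-protected pointer here ... */
	rcu_read_unlock();

	/*
	 * Usual idiom when RCU really must be used from the idle loop:
	 * RCU_NONIDLE() momentarily marks the CPU as non-idle for RCU
	 * around the enclosed statements.
	 */
	RCU_NONIDLE({
		rcu_read_lock();
		/* ... safe RCU access ... */
		rcu_read_unlock();
	});
}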
Comment 1 Martin Peres 2018-11-26 11:39:07 UTC

*** This bug has been marked as a duplicate of bug 108840 ***

