Bug 108840 - [CI][BAT] igt@*pm_rpm@* - incomplete
Summary: [CI][BAT] igt@*pm_rpm@* - incomplete
Status: RESOLVED MOVED
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other All
Importance: high normal
Assignee: Anshuman Gupta
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Duplicates: 108768 108864 108990 (view as bug list)
Depends on:
Blocks: 109507
Reported: 2018-11-22 16:04 UTC by Lakshmi
Modified: 2019-11-29 18:00 UTC (History)
6 users

See Also:
i915 platform: ICL
i915 features: power/runtime PM


Attachments
DMESG logs (6.10 MB, text/plain)
2019-04-04 13:46 UTC, Anshuman Gupta
pstore panic on ext4 (38.98 KB, text/x-log)
2019-06-18 09:59 UTC, Juha-Pekka Heikkilä

Comment 1 Chris Wilson 2018-11-22 21:39:38 UTC
acpi idle gpf?
Comment 2 Chris Wilson 2018-11-22 21:41:44 UTC
*** Bug 108768 has been marked as a duplicate of this bug. ***
Comment 3 Martin Peres 2018-11-26 11:39:07 UTC
*** Bug 108864 has been marked as a duplicate of this bug. ***
Comment 4 Lakshmi 2018-12-03 12:48:35 UTC
This issue was last seen in CI_DRM_5236_full (1 day, 22 hours / 5 runs ago):
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5236/shard-iclb6/igt@pm_rpm@cursor-dpms.html
Comment 6 Anshuman Gupta 2018-12-18 11:44:00 UTC
(In reply to Lakshmi from comment #5)
> One more instance
> https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_168/fi-icl-u3/
> igt@pm_rpm@debugfs-read.html

It seems I am able to reproduce the issue multiple times, but I am not able to capture the page-fault crash in the dmesg log; my system resets even before printing any dmesg output, so I am not sure whether the crash is the same page fault in ACPI idle as before.

As far as I can see, the ACPI idle GPF crash logs are available for CI_DRM_5180 and CI_DRM_5184. Where can I get the vmlinux kernel object file and System.map for CI_DRM_5180 and CI_DRM_5184? They would be helpful for debugging the page fault in the acpi_idle_enter code.
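For what it's worth, once the matching vmlinux and System.map are in hand, a GPF address like acpi_idle_enter+0x69 can be resolved to a source line with standard tools; a sketch (the paths and the 0x69 offset are placeholders, not taken from the actual CI artifacts):

```shell
# Placeholder paths: ./vmlinux and System.map must come from the exact
# CI_DRM_5180 / CI_DRM_5184 build that produced the GPF.

# Option 1: faddr2line from the kernel tree resolves symbol+offset directly.
./scripts/faddr2line ./vmlinux acpi_idle_enter+0x69

# Option 2: compute the absolute address from System.map, then use addr2line.
BASE=$(awk '$3 == "acpi_idle_enter" { print $1 }' System.map)
addr2line -e ./vmlinux -f -i "0x$(printf '%x' $((0x$BASE + 0x69)))"
```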
Comment 7 Anshuman Gupta 2019-01-02 12:23:15 UTC
(In reply to Lakshmi from comment #5)
> One more instance
> https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_168/fi-icl-u3/
> igt@pm_rpm@debugfs-read.html

The ICL CPU ID family patches are not merged into the upstream mainline kernel; because of that, ICL hardware is still using the ACPI idle driver. This ACPI idle page fault will not occur with the intel_idle driver, so this issue should be fixed once the ICL CPU ID patches are public.
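As a side note, which idle driver is actually in use can be confirmed from sysfs on the affected machine; a quick check using the standard cpuidle sysfs paths:

```shell
# Which cpuidle driver is active: expected to report "acpi_idle" here
# until intel_idle gains the ICL CPU IDs.
cat /sys/devices/system/cpu/cpuidle/current_driver

# Per-state details for CPU0, e.g. which C-states the driver exposes.
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
```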
Comment 8 CI Bug Log 2019-02-04 16:03:51 UTC
A CI Bug Log filter associated to this bug has been updated:

{- shard-iclb6  fi-icl-u3: igt@pm_rpm@* - incomplete -}
{+ shard-iclb6  fi-icl-u3 shard-iclb1: igt@pm_rpm@* - incomplete +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5519/shard-iclb1/igt@pm_rpm@modeset-lpsp-stress.html
Comment 9 Anshuman Gupta 2019-02-11 08:27:23 UTC
This crash is different from the earlier ones: the earlier crashes were in the ACPI idle driver, but this one is not. It looks like a different bug to me.
Comment 10 Lakshmi 2019-02-12 07:51:42 UTC
I believe this error is related to this bug.
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5590/fi-icl-y/igt@pm_rpm@module-reload.html
Comment 11 Lakshmi 2019-02-12 07:54:10 UTC
(In reply to Anshuman Gupta from comment #9)
> This crash is different from earlier crashes, earlier crashes were in to
> acpi idle driver, but this is different one. Looks a different bug to  me.

Thanks for your comment here. I will create a new bug for this failure if I don't find a suitable existing bug for it.
Comment 12 CI Bug Log 2019-02-12 07:55:19 UTC
A CI Bug Log filter associated to this bug has been updated:

{- shard-iclb6  fi-icl-u3 shard-iclb1: igt@pm_rpm@* - incomplete -}
{+ fi-icl-y shard-iclb6  fi-icl-u3 shard-iclb1: igt@pm_rpm@* - incomplete +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5590/fi-icl-y/igt@pm_rpm@module-reload.html
Comment 13 Lakshmi 2019-02-12 07:57:16 UTC
Setting the priority to highest as the failure is seen in BAT.
Comment 14 Anshuman Gupta 2019-02-13 09:17:48 UTC
(In reply to Lakshmi from comment #13)
> Setting the priority as highest as the failure seen in BAT.

(In reply to CI Bug Log from comment #12)
> A CI Bug Log filter associated to this bug has been updated:
> 
> {- shard-iclb6  fi-icl-u3 shard-iclb1: igt@pm_rpm@* - incomplete -}
> {+ fi-icl-y shard-iclb6  fi-icl-u3 shard-iclb1: igt@pm_rpm@* - incomplete +}
> 
> New failures caught by the filter:
> 
> *
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5590/fi-icl-y/
> igt@pm_rpm@module-reload.html

No information is available in this log. Given that the result is incomplete, it is either a kernel crash or a hang, but no information about a panic or hang is available from the logs.
Comment 15 CI Bug Log 2019-03-04 07:30:25 UTC
A CI Bug Log filter associated to this bug has been updated:

{- fi-icl-y shard-iclb6  fi-icl-u3 shard-iclb1: igt@pm_rpm@* - incomplete -}
{+ ICL: igt@*pm_rpm@* - incomplete +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4868/shard-iclb4/igt@i915_pm_rpm@gem-execbuf-stress-extra-wait.html
Comment 16 Lakshmi 2019-03-22 08:39:18 UTC
Anshuman, are there any changes done recently to address this bug? Last seen in drmtip_244 (5 days, 11 hours / 87 runs ago) on fi-icl-y.
Apart from fi-icl-y, last seen on shards in IGT_4878_full (1 week, 6 days / 168 runs ago).
Comment 17 Lakshmi 2019-04-03 07:41:29 UTC
Update: Anshuman will try to reproduce this issue with the latest BIOS to get more information about it.
Comment 18 Martin Peres 2019-04-03 07:50:54 UTC
Seems like the common denominator is that i915_gem_wait_for_idle() hangs (or never completes), which triggers a hang, which reboots the machine because CI sets panic=1.

There is an assumption that a page fault happening in the ACPI idle driver might be responsible for this, but I'm CC:ing Francesco to verify that the logic is sound with his engineers, while waiting for the ACPI driver fixes to be ready and landed in Linux.
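For context, that reboot-on-hang behaviour maps onto the kernel's hung-task detector being configured to panic, plus a panic-triggered reboot; a sketch of the relevant knobs (the exact values CI uses are an assumption):

```shell
# Panic when the hung-task detector fires, and reboot shortly after a
# panic so an unattended machine recovers (values are illustrative).
sysctl -w kernel.hung_task_panic=1
sysctl -w kernel.hung_task_timeout_secs=120
sysctl -w kernel.panic=5   # reboot 5 s after a panic

# Equivalent kernel command line:
#   hung_task_panic=1 hung_task_timeout_secs=120 panic=5
```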
Comment 19 Martin Peres 2019-04-03 13:10:53 UTC
(In reply to Martin Peres from comment #18)
> Seems like the common denominator is that i915_gem_wait_for_idle() hangs (or
> never completes), which triggers a hang, which reboots the machine because
> CI sets panic=1.
> 
> There is an assumption that a page fault happening in the ACPI idle driver
> might be responsible for this, but I'm CC:ing Francesco to verify that the
> logic is sound with his engineers, while waiting for the ACPI driver fixes
> to be ready and landed in Linux.

Daniel Vetter suggests that we should write a patch for core-for-CI that disables the ACPI driver for ICL only. Who is a taker?
Comment 20 Lakshmi 2019-04-04 07:58:09 UTC
Anshuman, any updates here?
Comment 21 Anshuman Gupta 2019-04-04 08:48:27 UTC
(In reply to Martin Peres from comment #19)
> (In reply to Martin Peres from comment #18)
> > Seems like the common denominator is that i915_gem_wait_for_idle() hangs (or
> > never completes), which triggers a hang, which reboots the machine because
> > CI sets panic=1.
> > 
> > There is an assumption that a page fault happening in the ACPI idle driver
> > might be responsible for this, but I'm CC:ing Francesco to verify that the
> > logic is sound with his engineers, while waiting for the ACPI driver fixes
> > to be ready and landed in Linux.
> 
> Daniel Vetter suggests that we should write a patch for core-for-CI that
> disables the ACPI driver for ICL only. Who is a taker?

Do you mean to disable the ACPI idle driver, or to disable CPU idle completely?
Comment 22 Lakshmi 2019-04-04 12:46:15 UTC
(In reply to Anshuman Gupta from comment #21)
> (In reply to Martin Peres from comment #19)
> > (In reply to Martin Peres from comment #18)
> > > Seems like the common denominator is that i915_gem_wait_for_idle() hangs (or
> > > never completes), which triggers a hang, which reboots the machine because
> > > CI sets panic=1.
> > > 
> > > There is an assumption that a page fault happening in the ACPI idle driver
> > > might be responsible for this, but I'm CC:ing Francesco to verify that the
> > > logic is sound with his engineers, while waiting for the ACPI driver fixes
> > > to be ready and landed in Linux.
> > 
> > Daniel Vetter suggests that we should write a patch for core-for-CI that
> > disables the ACPI driver for ICL only. Who is a taker?
> 
> you mean to disable ACPI idle driver or disable CPU idle completely?

Anshuman, disable the ACPI idle driver.
Comment 23 Anshuman Gupta 2019-04-04 13:42:09 UTC
(In reply to Lakshmi from comment #20)
> Anshuman, any updates here?

We ran the test on our local setup with the latest drm-tip and the latest BIOS V3121.
i915_pm_rpm does not complete, and there is a hard lockup after running the IGT subtest "gem-execbuf-stress".
 
[ 1496.982609] e1000e 0000:ad:00.0 enp173s0: Detected Hardware Unit Hang:
                 TDH                  <0>
                 TDT                  <2>
                 next_to_use          <2>
                 next_to_clean        <0>
               buffer_info[next_to_clean]:
                 time_stamp           <100048cbb>
                 next_to_watch        <0>
                 jiffies              <1000490c0>
                 next_to_watch.status <0>
               MAC Status             <383>
               PHY Status             <792d>
               PHY 1000BASE-T Status  <3800>

This hang is believed to originate from the system-suspend and system-hibernate subtests. I wonder whether this is the same hang as in CI or a different one.
Comment 24 Anshuman Gupta 2019-04-04 13:46:21 UTC
Created attachment 143861 [details]
DMESG logs

BA setup logs
Comment 25 Chris Wilson 2019-04-04 14:10:35 UTC
(In reply to Anshuman Gupta from comment #23)
> (In reply to Lakshmi from comment #20)
> > Anshuman, any updates here?
> 
> We ran the test on our local setup with latest drm tip and latest BIOS V3121.
> i915_pm_rpm doesn't complete and there is hard lockup after running igt
> subtest "gem-execbuf-stress" there is hard lock up.
>  
> [ 1496.982609] e1000e 0000:ad:00.0 enp173s0: Detected Hardware Unit Hang:
>                  TDH                  <0>
>                  TDT                  <2>
>                  next_to_use          <2>
>                  next_to_clean        <0>
>                buffer_info[next_to_clean]:
>                  time_stamp           <100048cbb>
>                  next_to_watch        <0>
>                  jiffies              <1000490c0>
>                  next_to_watch.status <0>
>                MAC Status             <383>
>                PHY Status             <792d>
>                PHY 1000BASE-T Status  <3800>

Keep filing those to e1000e maintainers.

> This hang is believed to originated from system-suspend and system-hibernate
> sub test. I wonder if this is similar to CI hang or different hang.

We can't tell; if netconsole is unreliable due to e1000e, use a serial console.
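A serial console survives an e1000e hang where netconsole does not; a sketch of the usual setup (the port, baud rate, and capture device are assumptions for this board):

```shell
# Kernel command line: keep the VT console and mirror all output to ttyS0;
# the last console= entry becomes /dev/console.
#   console=tty0 console=ttyS0,115200n8 ignore_loglevel

# On the machine capturing the serial stream (device name is an assumption):
screen /dev/ttyUSB0 115200
```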
Comment 26 Anshuman Gupta 2019-04-08 10:09:56 UTC
(In reply to Martin Peres from comment #19)
> (In reply to Martin Peres from comment #18)
> > Seems like the common denominator is that i915_gem_wait_for_idle() hangs (or
> > never completes), which triggers a hang, which reboots the machine because
> > CI sets panic=1.
> > 
> > There is an assumption that a page fault happening in the ACPI idle driver
> > might be responsible for this, but I'm CC:ing Francesco to verify that the
> > logic is sound with his engineers, while waiting for the ACPI driver fixes
> > to be ready and landed in Linux.
> 
> Daniel Vetter suggests that we should write a patch for core-for-CI that
> disables the ACPI driver for ICL only. Who is a taker?

The patch is ready. If we disable the ACPI idle driver, there is no idle driver available for ICL, so as a consequence CPU idle also gets disabled completely.

As it is a hack, is it OK to send this patch to intel-gfx <intel-gfx@lists.freedesktop.org>?
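For a quick check without carrying a patch, cpuidle can also be kept out of the picture from the kernel command line; a sketch (these are standard kernel parameters, though whether they match what the core-for-CI patch does is an assumption):

```shell
# Boot-time options that keep the CPUs out of ACPI idle states:
#   cpuidle.off=1            # disable the cpuidle framework entirely
#   processor.max_cstate=1   # or: cap the ACPI idle driver at C1

# Verify after boot; this read fails when cpuidle is fully disabled.
cat /sys/devices/system/cpu/cpuidle/current_driver
```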
Comment 27 Jani Saarinen 2019-04-08 10:49:25 UTC
BIOS was updated on ICL-U3. 
See: 
<6>[    0.000000] DMI: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
Comment 28 Martin Peres 2019-04-08 11:08:19 UTC
(In reply to Anshuman Gupta from comment #26)
> (In reply to Martin Peres from comment #19)
> > (In reply to Martin Peres from comment #18)
> > > Seems like the common denominator is that i915_gem_wait_for_idle() hangs (or
> > > never completes), which triggers a hang, which reboots the machine because
> > > CI sets panic=1.
> > > 
> > > There is an assumption that a page fault happening in the ACPI idle driver
> > > might be responsible for this, but I'm CC:ing Francesco to verify that the
> > > logic is sound with his engineers, while waiting for the ACPI driver fixes
> > > to be ready and landed in Linux.
> > 
> > Daniel Vetter suggests that we should write a patch for core-for-CI that
> > disables the ACPI driver for ICL only. Who is a taker?
> 
> patch is ready, if we disable ACPI idle driver, there is no idle driver
> available for ICL, so as consequence of it cpu idle also gets disabled
> completely. 
> 
> As it is a hack, is it ok to send this patch to Intel-gfx
> <intelgfx-@lists.freedesktop.org>  ?

Yes, please explain why you are doing this, that it is for core-for-CI, and that it is ICL-only. This will give you some testing, and you can also get comments on it :)
Comment 29 Anshuman Gupta 2019-04-08 13:14:48 UTC
(In reply to Jani Saarinen from comment #27)
> BIOS was updated on ICL-U3. 
> See: 
> <6>[    0.000000] DMI: Intel Corporation Ice Lake Client Platform/IceLake U
> DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019

Issue is observed even with BIOS 3121 at BA.

[38218.404599] Sending NMI from CPU 0 to CPUs 1-7:
[38218.409133] NMI backtrace for cpu 4 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.409191] NMI backtrace for cpu 6 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.409192] NMI backtrace for cpu 2 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.409210] NMI backtrace for cpu 5 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.409211] NMI backtrace for cpu 1 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.409253] NMI backtrace for cpu 7 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.409254] NMI backtrace for cpu 3 skipped: idling at acpi_processor_ffh_cstate_enter+0x69/0xb0
[38218.410248] Kernel panic - not syncing: hung_task: blocked tasks
[38218.476356] Dumping ftrace buffer:
[38218.479748] ---------------------------------
[38218.484092] CPU:1 [LOST 5491 EVENTS]
[38218.484092] i915_pm_-18748   1d..1 31453892539us : 0xffffffffa0518b5d: rcs0 in[0]:  ctx=64.1, fence 8b9:44 (current 42), prio=6
[38218.499045] kworker/-19088   1.... 31570414263us : 0xffffffffa04f0ab8: rcs0 fence 8b9:80, current 80
[38218.508127] kworker/-19088   1.... 31570414276us : 0xffffffffa04f0d51: __retire_engine_request(rcs0) fence 8b9:80, current 80
[38218.519363] kworker/-19088   1.... 31570414278us : 0xffffffffa04d6095: 
[38218.525945] kworker/-19088   1.... 31570517192us : 0xffffffffa04b6765: awake?=yes
[38218.533388] kworker/-19088   1.... 31570517240us : 0xffffffffa04f3295: rcs0 fence 89c:96
[38218.541433] kworker/-19088   1d..1 31570517275us : 0xffffffffa0517826: rcs0 cs-irq head=1, tail=1
[38218.550259] kworker/-19088   1d..1 31570517277us : 0xffffffffa04f2fa2: rcs0 fence 89c:96 -> current 94
[38218.559512] kworker/-19088   1d..1 31570517294us : 0xffffffffa0518b5d: rcs0 in[0]:  ctx=0.1, fence 89c:96 (current 94), prio=-8186
[38218.571179] kworker/-19088   1.... 31570517300us : 0xffffffffa04da095: flags=12 (locked), timeout=200
[38218.580344] kworker/-19088   1.... 31570519413us : 0xffffffffa04f0ab8: rcs0 fence 89c:96, current 96
[38218.589426] kworker/-19088   1.... 31570519416us : 0xffffffffa04f0d51: __retire_engine_request(rcs0) fence 89c:96, current 96
[38218.600664] kworker/-19088   1.... 31570519445us : 0xffffffffa04d45e5: 
[38218.607245] i915_pm_-18748   1.... 31645033342us : 0xffffffffa04e27e5: 
[38218.613825] i915_pm_-18748   1.... 31645033352us : 0xffffffffa04b6765: awake?=no
[38218.621179] i915_pm_-18748   1.... 31645033354us : 0xffffffffa04da095: flags=12 (locked), timeout=200
[38218.630345] i915_pm_-18748   1.... 31645033358us : 0xffffffffa04da095: flags=12 (locked), timeout=9223372036854775807 (forever)
[38218.641756] i915_pm_-18748   1.... 31645688983us : 0xffffffffa04e2745: 
[38218.648339] i915_pm_-18748   1.... 31645689156us : 0xffffffffa050bc95: 
[38218.654917] i915_pm_-18748   1.... 31645689158us : 0xffffffffa047efeb: engine_mask=ff
[38218.662728] kworker/-19601   1.... 32641702411us : i915_gem_sanitize: 
[38218.669257] kworker/-19601   1.... 32641702533us : intel_engines_sanitize: 
[38218.676205] kworker/-19601   1.... 32641702535us : intel_gpu_reset: engine_mask=ff
[38218.683764] kworker/-19601   1.... 32641703919us : i915_gem_resume: 
[38218.690120] kworker/-19616   1.... 32648456199us : intel_engines_sanitize: 
[38218.697070] kworker/-19616   1.... 32648456562us : intel_gpu_reset: engine_mask=ff
[38218.704630]  rtcwake-19715   1.... 32656291853us : i915_gem_suspend: 
[38218.711065] kworker/-19587   1.... 32657423968us : i915_gem_sanitize: 
[38218.717591] kworker/-19587   1.... 32657424001us : intel_engines_sanitize: 
[38218.724541] kworker/-19587   1.... 32657424004us : intel_gpu_reset: engine_mask=ff
[38218.732098] kworker/-19587   1.... 32657426000us : i915_gem_resume:
Comment 30 Lakshmi 2019-04-09 07:32:17 UTC
(In reply to Anshuman Gupta from comment #29)
> (In reply to Jani Saarinen from comment #27)
> > BIOS was updated on ICL-U3. 
> > See: 
> > <6>[    0.000000] DMI: Intel Corporation Ice Lake Client Platform/IceLake U
> > DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
> 
> Issue is observed even with BIOS 3121 at BA.

@Anshuman/Jani/Francesco, What are the next steps?
Comment 31 Jani Saarinen 2019-04-09 11:23:58 UTC
Get this merged: https://patchwork.freedesktop.org/series/59170/
Comment 32 Lakshmi 2019-04-10 05:40:12 UTC
(In reply to Jani Saarinen from comment #31)
> Get this merged: https://patchwork.freedesktop.org/series/59170/

Review Comments from Rafael J. Wysocki:
This is fine only as long as it doesn't go anywhere close to the mainline.

If ACPI idle crashes on new Intel HW, it needs to be fixed to work with 
it instead of refusing to work on it.
Comment 33 Lakshmi 2019-04-10 20:09:39 UTC
How do we proceed with this bug?
Comment 34 Jani Saarinen 2019-04-11 04:01:42 UTC
I would resolve it now, as the patch is there: https://cgit.freedesktop.org/drm-intel/commit/?h=topic/core-for-CI&id=b573fba52f339dc4fadef7282af4a9413fd6173d
but we cannot close this until the real fix lands upstream from the core team.
Comment 35 Martin Peres 2019-04-11 11:42:39 UTC
*** Bug 108990 has been marked as a duplicate of this bug. ***
Comment 36 Lakshmi 2019-05-11 13:33:41 UTC
(In reply to Jani Saarinen from comment #34)
> I would resolve it now as patch there:
> https://cgit.freedesktop.org/drm-intel/commit/?h=topic/core-for-
> CI&id=b573fba52f339dc4fadef7282af4a9413fd6173d
> but we cannot close this until real fix in upstream from core team.

This failure is still happening on drmtip and shard runs. Reopening this bug.
Needs further investigation.
Comment 37 Anshuman Gupta 2019-05-13 04:31:16 UTC
I have checked the recent logs in CI Bug Log; none of the recent entries contain any pstore crash logs, so it is hard to say whether this is the same issue or a different one.

I can only see the same issue pattern (i915_gem_wait_for_idle timeout) in a 3-week-old result on CI_DRM_5960_full, with the details below from CI Bug Log.

IGT: igt@i915_pm_rpm@universal-planes-dpms 
shard-iclb5 
IGT: incomplete (external URL) 
CI_DRM_5960_full (3 weeks, 2 days old) 
None for the issue, None for the filter
Issue: fdo#108840 - [CI][BAT] igt@*pm_rpm@* - incomplete - ICL: igt@*pm_rpm@* - incomplete

@Lakshmi, please assign this to the GEM core side.
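On the pstore point: when a backend is available (the attached "pstore panic on ext4" log suggests one works at least locally), the last panic can be recovered after reboot; a minimal sketch:

```shell
# Mount the pstore filesystem (distros often mount it automatically)
# and dump any saved panic records.
mount -t pstore pstore /sys/fs/pstore 2>/dev/null
ls -l /sys/fs/pstore
cat /sys/fs/pstore/dmesg-* 2>/dev/null   # e.g. dmesg-efi-* or dmesg-erst-*
```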
Comment 38 Chris Wilson 2019-05-13 12:32:38 UTC
(In reply to Anshuman Gupta from comment #37)
> @Lakshmi please assign this to Gem Core side.

Why? There's nothing here to debug, and less to connect it to GEM itself.
Comment 39 Anshuman Gupta 2019-05-16 06:32:22 UTC
Tried a local reproduction (we found the same pattern: the i915_gem wait-for-idle path hung).
It all started with a CPU stall; possibly because of that, a kworker hung and the system panicked (see below: INFO: task kworker/2:1:62 blocked for more than 61 seconds.).


[ 1089.154465] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 1089.160267] rcu: 	1-....: (2565 ticks this GP) idle=74a/1/0x4000000000000004 softirq=5362/5363 fqs=9118 
[ 1089.169912] rcu: 	 (t=65015 jiffies g=18549 q=74)
[ 1089.174611] NMI backtrace for cpu 1
[ 1089.178090] CPU: 1 PID: 1337 Comm: i915_pm_rpm Not tainted 5.1.0-test+ #55
[ 1089.184937] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
[ 1089.198178] Call Trace:
[ 1089.200623]  <IRQ>
[ 1089.202638]  dump_stack+0x5e/0x8b
[ 1089.205949]  nmi_cpu_backtrace+0x8b/0x90
[ 1089.209863]  ? lapic_can_unplug_cpu+0xa0/0xa0
[ 1089.214203]  nmi_trigger_cpumask_backtrace+0xba/0x100
[ 1089.219242]  rcu_dump_cpu_stacks+0x94/0xc9
[ 1089.223326]  rcu_sched_clock_irq+0x7c1/0xa60
[ 1089.227584]  ? rcu_read_lock_sched_held+0x74/0x80
[ 1089.232277]  ? tick_sched_handle.isra.6+0x50/0x50
[ 1089.236963]  update_process_times+0x28/0x50
[ 1089.241136]  tick_sched_timer+0x3b/0x80
[ 1089.244961]  __hrtimer_run_queues+0x120/0x4b0
[ 1089.249307]  hrtimer_interrupt+0x12b/0x270
[ 1089.253396]  smp_apic_timer_interrupt+0x85/0x280
[ 1089.257997]  apic_timer_interrupt+0xf/0x20
[ 1089.262079] RIP: 0010:__do_softirq+0xc4/0x488
[ 1089.266422] Code: 89 44 24 08 48 89 44 24 18 48 c7 c7 54 dc 13 82 e8 e1 ed 8f ff 65 66 c7 05 c7 14 43 7e 00 00 e8 f2 44 5a ff fb b8 ff ff ff ff <48> c7 c3 00 51 20 82 41 0f bc c6 83 c0 01 89 44 24 04 0f 84 c8 00
[ 1089.285104] RSP: 0018:ffff8882a7e43f70 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff13
[ 1089.292641] RAX: 00000000ffffffff RBX: ffff8882a34e8040 RCX: 0000000000000001
[ 1089.299745] RDX: 0000000000000002 RSI: ffff8882a34e8908 RDI: ffffffff82130a81
[ 1089.306850] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ 1089.313959] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 1089.321067] R13: 0000000000000000 R14: 0000000000000082 R15: 0000000000000082
[ 1089.328174]  ? apic_timer_interrupt+0xa/0x20
[ 1089.332447]  irq_exit+0xb9/0xd0
[ 1089.335586]  smp_apic_timer_interrupt+0xab/0x280
[ 1089.340189]  apic_timer_interrupt+0xf/0x20
[ 1089.344272]  </IRQ>
[ 1089.346370] RIP: 0010:_raw_spin_unlock_irqrestore+0x53/0x60
[ 1089.351924] Code: c7 02 75 1f 53 9d e8 0c 57 7c ff bf 01 00 00 00 e8 92 49 70 ff 65 8b 05 ab 5d 63 7e 85 c0 74 0c 5b 5d c3 e8 ff 54 7c ff 53 9d <eb> df e8 1b 2a 62 ff 5b 5d c3 0f 1f 00 0f 1f 44 00 00 53 48 8b 54
[ 1089.370600] RSP: 0018:ffffc9000051b928 EFLAGS: 00000293 ORIG_RAX: ffffffffffffff13
[ 1089.378137] RAX: ffff8882a34e8040 RBX: 0000000000000293 RCX: 0000000000000001
[ 1089.385242] RDX: 0000000000000006 RSI: ffff8882a34e8908 RDI: ffffffff82130a81
[ 1089.392346] RBP: ffff88829e6702f0 R08: 000000001e282ec8 R09: 00000000d89164fe
[ 1089.399452] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 1089.406556] R13: ffff88829e6702f0 R14: 0000000000000293 R15: ffff88829e7aaea0
[ 1089.413675]  drm_vblank_get+0x67/0xa0
[ 1089.417380]  intel_pipe_update_start+0xf1/0x410 [i915]
[ 1089.422512]  ? wait_woken+0xb0/0xb0
[ 1089.426027]  intel_update_crtc+0xd2/0x430 [i915]
[ 1089.430668]  skl_update_crtcs+0x1ce/0x250 [i915]
[ 1089.435363]  intel_atomic_commit_tail+0x411/0x1420 [i915]
[ 1089.440754]  ? flush_workqueue+0x27e/0x540
[ 1089.444951]  intel_atomic_commit+0x25a/0x2f0 [i915]
[ 1089.449826]  drm_atomic_helper_set_config+0x80/0x90
[ 1089.454693]  drm_mode_setcrtc+0x3cf/0x630
[ 1089.458701]  ? drm_mode_getcrtc+0x180/0x180
[ 1089.462884]  drm_ioctl_kernel+0xb3/0xf0
[ 1089.466715]  drm_ioctl+0x2eb/0x3a0
[ 1089.470115]  ? drm_mode_getcrtc+0x180/0x180
[ 1089.474293]  do_vfs_ioctl+0xa3/0x6f0
[ 1089.477865]  ? vfs_read+0x12f/0x150
[ 1089.481347]  ksys_ioctl+0x36/0x60
[ 1089.484655]  __x64_sys_ioctl+0x16/0x20
[ 1089.488393]  do_syscall_64+0x5a/0x190
[ 1089.492043]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 1089.497075] RIP: 0033:0x7fb90c9e8f47
[ 1089.500641] Code: 00 00 00 48 8b 05 51 6f 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 21 6f 2c 00 f7 d8 64 89 01 48
[ 1089.519321] RSP: 002b:00007ffc3d668fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1089.526860] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007fb90c9e8f47
[ 1089.533971] RDX: 00007ffc3d669000 RSI: 00000000c06864a2 RDI: 0000000000000003
[ 1089.541079] RBP: 00007ffc3d669000 R08: 0000000000000000 R09: 0000000000663c84
[ 1089.548184] R10: 0000000001bda4e0 R11: 0000000000000246 R12: 00000000c06864a2
[ 1089.555291] R13: 0000000000000003 R14: 0000000000000000 R15: 0000000000000000
-------------------------------------------------------------------------------
[ 1109.391504]       Not tainted 5.1.0-test+ #55
[ 1109.402233] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,318)@ 1107.912523 -> 1107.909605 [e 0 us, 0 rep]
[ 1109.402236] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7292, diff=1, hw=5510 hw_last=5509
[ 1109.402295] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,325)@ 1107.912585 -> 1107.909602 [e 3 us, 0 rep]
[ 1109.412825] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1109.423563] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,826)@ 1107.933854 -> 1107.926273 [e 0 us, 0 rep]
[ 1109.423567] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7293, diff=1, hw=5511 hw_last=5510
[ 1109.423622] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,832)@ 1107.933913 -> 1107.926277 [e 3 us, 0 rep]
[ 1109.427934] kworker/2:1     D11168    62      2 0x80000000
[ 1109.438667] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,656)@ 1107.948958 -> 1107.942937 [e 0 us, 0 rep]
[ 1109.438670] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7294, diff=1, hw=5512 hw_last=5511
[ 1109.438729] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,662)@ 1107.949017 -> 1107.942941 [e 3 us, 0 rep]
[ 1109.449305] Workqueue: events i915_hpd_poll_init_work [i915]
[ 1109.459995] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1163)@ 1107.970285 -> 1107.959611 [e 0 us, 0 rep]
[ 1109.459999] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7295, diff=1, hw=5513 hw_last=5512
[ 1109.460050] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1169)@ 1107.970341 -> 1107.959612 [e 0 us, 0 rep]
[ 1109.467822] Call Trace:
[ 1109.478548] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1369)@ 1107.988839 -> 1107.976275 [e 0 us, 0 rep]
[ 1109.478552] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7296, diff=1, hw=5514 hw_last=5513
[ 1109.478607] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1375)@ 1107.988898 -> 1107.976278 [e 4 us, 0 rep]
[ 1109.489142]  ? __schedule+0x44f/0xb00
[ 1109.499872] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,60)@ 1108.010162 -> 1108.009611 [e 0 us, 0 rep]
[ 1109.499875] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7297, diff=2, hw=5516 hw_last=5514
[ 1109.499925] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,66)@ 1108.010216 -> 1108.009610 [e 3 us, 0 rep]
[ 1109.505366]  schedule+0x3c/0x80
[ 1109.516092] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,11)@ 1108.026383 -> 1108.026282 [e 0 us, 0 rep]
[ 1109.516096] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7299, diff=1, hw=5517 hw_last=5516
[ 1109.516148] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,18)@ 1108.026439 -> 1108.026273 [e 0 us, 0 rep]
[ 1109.526681]  schedule_preempt_disabled+0x14/0x20
[ 1109.537414] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,519)@ 1108.047705 -> 1108.042941 [e 0 us, 0 rep]
[ 1109.537417] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7300, diff=1, hw=5518 hw_last=5517
[ 1109.537472] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,525)@ 1108.047761 -> 1108.042943 [e 3 us, 0 rep]
[ 1109.543079]  __mutex_lock+0x298/0x990
[ 1109.553900] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,499)@ 1108.064191 -> 1108.059611 [e 0 us, 0 rep]
[ 1109.553904] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7301, diff=1, hw=5519 hw_last=5518
[ 1109.553951] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,505)@ 1108.064242 -> 1108.059607 [e 3 us, 0 rep]
[ 1109.564522]  ? i915_hpd_poll_init_work+0x3c/0x110 [i915]
[ 1109.575308] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1015)@ 1108.085599 -> 1108.076283 [e 0 us, 0 rep]
[ 1109.575312] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7302, diff=1, hw=5520 hw_last=5519
[ 1109.575368] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1022)@ 1108.085658 -> 1108.076279 [e 3 us, 0 rep]
[ 1109.577816]  ? i915_hpd_poll_init_work+0x3c/0x110 [i915]
[ 1109.588604] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,648)@ 1108.098894 -> 1108.092947 [e 0 us, 0 rep]
[ 1109.599208]  i915_hpd_poll_init_work+0x3c/0x110 [i915]
[ 1109.609999] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1163)@ 1108.120289 -> 1108.109615 [e 0 us, 0 rep]
[ 1109.610002] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7303, diff=2, hw=5522 hw_last=5520
[ 1109.610049] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1169)@ 1108.120340 -> 1108.109611 [e 3 us, 0 rep]
[ 1109.613677]  process_one_work+0x218/0x610
[ 1109.624319] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,907)@ 1108.134610 -> 1108.126286 [e 0 us, 0 rep]
[ 1109.624322] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7305, diff=1, hw=5523 hw_last=5522
[ 1109.624379] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,914)@ 1108.134670 -> 1108.126281 [e 0 us, 0 rep]
[ 1109.634916]  worker_thread+0x4f/0x3e0
[ 1109.645559] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1406)@ 1108.155850 -> 1108.142946 [e 0 us, 0 rep]
[ 1109.645562] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7306, diff=1, hw=5524 hw_last=5523
[ 1109.645619] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1412)@ 1108.155910 -> 1108.142951 [e 3 us, 0 rep]
[ 1109.648726]  kthread+0x103/0x140
[ 1109.659368] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1094)@ 1108.169658 -> 1108.159618 [e 0 us, 0 rep]
[ 1109.659371] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7307, diff=1, hw=5525 hw_last=5524
[ 1109.659427] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1101)@ 1108.169718 -> 1108.159613 [e 0 us, 0 rep]
[ 1109.669960]  ? process_one_work+0x610/0x610
[ 1109.680616] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1593)@ 1108.190907 -> 1108.176287 [e 0 us, 0 rep]
[ 1109.680619] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7308, diff=1, hw=5526 hw_last=5525
[ 1109.680674] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1600)@ 1108.190965 -> 1108.176280 [e 4 us, 0 rep]
[ 1109.685247]  ? kthread_park+0x90/0x90
[ 1109.695984] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1452)@ 1108.206275 -> 1108.192949 [e 0 us, 0 rep]
[ 1109.695988] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7309, diff=1, hw=5527 hw_last=5526
[ 1109.696038] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1458)@ 1108.206328 -> 1108.192947 [e 3 us, 0 rep]
[ 1109.706575]  ret_from_fork+0x24/0x50
[ 1109.717312] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,144)@ 1108.227603 -> 1108.226281 [e 0 us, 0 rep]
[ 1109.717316] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7310, diff=2, hw=5529 hw_last=5527
[ 1109.717370] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,150)@ 1108.227661 -> 1108.226284 [e 3 us, 0 rep]
[ 1109.721012] 
[ 1109.721012] Showing all locks held in the system:
[ 1109.731727] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7312, diff=0, hw=5529 hw_last=5529
[ 1109.731783] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1720)@ 1108.242073 -> 1108.226288 [e 3 us, 0 rep]
[ 1109.742331] 1 lock held by khungtaskd/52:
[ 1109.753065] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,407)@ 1108.263355 -> 1108.259620 [e 0 us, 0 rep]
[ 1109.753069] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7312, diff=2, hw=5531 hw_last=5529
[ 1109.753118] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,413)@ 1108.263409 -> 1108.259618 [e 3 us, 0 rep]
[ 1109.758384]  #0: 000000002cfb3a12 (rcu_read_lock){....}, at: debug_show_all_locks+0xe/0x190
[ 1109.769209] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,350)@ 1108.279499 -> 1108.276287 [e 0 us, 0 rep]
[ 1109.769212] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7314, diff=1, hw=5532 hw_last=5531
[ 1109.769266] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,356)@ 1108.279557 -> 1108.276290 [e 3 us, 0 rep]
[ 1109.779802] 3 locks held by kworker/2:1/62:
[ 1109.790627] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,868)@ 1108.300918 -> 1108.292951 [e 0 us, 0 rep]
[ 1109.790630] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7315, diff=1, hw=5533 hw_last=5532
[ 1109.790687] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,874)@ 1108.300978 -> 1108.292957 [e 3 us, 0 rep]
[ 1109.795947]  #0: 0000000017e11193 ((wq_completion)events){+.+.}, at: process_one_work+0x190/0x610
[ 1109.806687] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,801)@ 1108.316978 -> 1108.309626 [e 0 us, 0 rep]
[ 1109.806690] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7316, diff=1, hw=5534 hw_last=5533
[ 1109.806745] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,808)@ 1108.317036 -> 1108.309620 [e 3 us, 0 rep]
[ 1109.811836]  #1: 00000000a1280dae ((work_completion)(&dev_priv->hotplug.poll_init_work)){+.+.}, at: process_one_work+0x190/0x610
[ 1109.822659] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,726)@ 1108.332950 -> 1108.326287 [e 0 us, 0 rep]
[ 1109.822662] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7317, diff=1, hw=5535 hw_last=5534
[ 1109.822717] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,732)@ 1108.333008 -> 1108.326290 [e 3 us, 0 rep]
[ 1109.833253]  #2: 000000008a932623 (&dev->mode_config.mutex){+.+.}, at: i915_hpd_poll_init_work+0x3c/0x110 [i915]
[ 1109.844075] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1243)@ 1108.354365 -> 1108.342958 [e 0 us, 0 rep]
[ 1109.844079] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7318, diff=1, hw=5536 hw_last=5535
[ 1109.844129] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1249)@ 1108.354420 -> 1108.342957 [e 0 us, 0 rep]
[ 1109.848094] 1 lock held by in:imklog/787:
[ 1109.858832] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1035)@ 1108.369122 -> 1108.359624 [e 0 us, 0 rep]
[ 1109.858835] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7319, diff=1, hw=5537 hw_last=5536
[ 1109.858891] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1041)@ 1108.369182 -> 1108.359628 [e 3 us, 0 rep]
[ 1109.869426]  #0: 00000000d0e8ac94 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0x3f/0x50
[ 1109.880163] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1543)@ 1108.390453 -> 1108.376292 [e 0 us, 0 rep]
[ 1109.880166] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7320, diff=1, hw=5538 hw_last=5537
[ 1109.880221] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1549)@ 1108.390512 -> 1108.376295 [e 3 us, 0 rep]
[ 1109.883841] 2 locks held by bash/1155:
[ 1109.894666] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1307)@ 1108.404957 -> 1108.392962 [e 0 us, 0 rep]
[ 1109.894670] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7321, diff=1, hw=5539 hw_last=5538
[ 1109.894723] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1313)@ 1108.405014 -> 1108.392964 [e 3 us, 0 rep]
[ 1109.905261]  #0: 0000000040d44344 (&tty->ldisc_sem){++++}, at: tty_ldisc_ref_wait+0x24/0x50
[ 1109.916089] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,9)@ 1108.426379 -> 1108.426297 [e 0 us, 0 rep]
[ 1109.916093] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7322, diff=2, hw=5541 hw_last=5539
[ 1109.916143] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,15)@ 1108.426434 -> 1108.426296 [e 0 us, 0 rep]
[ 1109.919333]  #1: 000000006f4a683e (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0xd8/0x950
[ 1109.930154] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7324, diff=0, hw=5541 hw_last=5541
[ 1109.930204] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1547)@ 1108.440494 -> 1108.426297 [e 3 us, 0 rep]
[ 1109.940751] 8 locks held by i915_pm_rpm/1337:
[ 1109.951574] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,244)@ 1108.461865 -> 1108.459626 [e 0 us, 0 rep]
[ 1109.951578] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7324, diff=2, hw=5543 hw_last=5541
[ 1109.951628] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,250)@ 1108.461919 -> 1108.459625 [e 3 us, 0 rep]
[ 1109.955775] 
[ 1109.966597] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,65)@ 1108.476888 -> 1108.476291 [e 0 us, 0 rep]
[ 1109.966600] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7326, diff=1, hw=5544 hw_last=5543
[ 1109.966652] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,71)@ 1108.476943 -> 1108.476292 [e 3 us, 0 rep]
[ 1109.977190] =============================================
[ 1109.977190] 
[ 1109.988014] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,582)@ 1108.498305 -> 1108.492963 [e 0 us, 0 rep]
[ 1109.988018] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7327, diff=1, hw=5545 hw_last=5544
[ 1109.988068] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,588)@ 1108.498359 -> 1108.492963 [e 0 us, 0 rep]
[ 1109.991691] NMI backtrace for cpu 5
[ 1110.002512] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,346)@ 1108.512803 -> 1108.509628 [e 0 us, 0 rep]
[ 1110.002516] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7328, diff=1, hw=5546 hw_last=5545
[ 1110.002572] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,352)@ 1108.512862 -> 1108.509632 [e 4 us, 0 rep]
[ 1110.013112] CPU: 5 PID: 52 Comm: khungtaskd Not tainted 5.1.0-test+ #55
[ 1110.023942] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,865)@ 1108.534233 -> 1108.526294 [e 3 us, 0 rep]
[ 1110.027490] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
[ 1110.038244] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,607)@ 1108.548535 -> 1108.542964 [e 3 us, 0 rep]
[ 1110.048791] Call Trace:
[ 1110.048795]  dump_stack+0x5e/0x8b
[ 1110.059537] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7329, diff=2, hw=5548 hw_last=5546
[ 1110.065696]  nmi_cpu_backtrace+0x8b/0x90
[ 1110.076315] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1123)@ 1108.586606 -> 1108.576299 [e 4 us, 0 rep]
[ 1110.087091]  ? lapic_can_unplug_cpu+0xa0/0xa0
[ 1110.091103] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,918)@ 1108.601394 -> 1108.592969 [e 3 us, 0 rep]
[ 1110.101829]  nmi_trigger_cpumask_backtrace+0xba/0x100
[ 1110.101832]  watchdog+0x362/0x660
[ 1110.112417] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,1424)@ 1108.622708 -> 1108.609639 [e 4 us, 0 rep]
[ 1110.123144]  kthread+0x103/0x140
[ 1110.131462] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7331, diff=4, hw=5552 hw_last=5548
[ 1110.142199]  ? hungtask_pm_notify+0x40/0x40
[ 1110.142201]  ? kthread_park+0x90/0x90
[ 1110.152818] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,378)@ 1108.663109 -> 1108.659640 [e 4 us, 0 rep]
[ 1110.163507]  ret_from_fork+0x24/0x50
[ 1110.167695] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,183)@ 1108.677986 -> 1108.676306 [e 3 us, 0 rep]
[ 1110.178450] Sending NMI from CPU 5 to CPUs 0-4,6-7:
[ 1110.188991] [drm:drm_calc_vbltimestamp_from_scanoutpos] crtc 0 : v p(0,688)@ 1108.699281 -> 1108.692967 [e 0 us, 0 rep]
[ 1110.199785] NMI backtrace for cpu 4 skipped: idling at mwait_idle+0x6c/0x210
[ 1110.199786] NMI backtrace for cpu 7 skipped: idling at mwait_idle+0x6c/0x210
[ 1110.199788] NMI backtrace for cpu 6 skipped: idling at mwait_idle+0x6c/0x210
[ 1110.199789] NMI backtrace for cpu 3 skipped: idling at mwait_idle+0x6c/0x210
[ 1110.199790] NMI backtrace for cpu 2 skipped: idling at mwait_idle+0x6c/0x210
[ 1110.199791] NMI backtrace for cpu 0
[ 1110.199792] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.1.0-test+ #55
[ 1110.199793] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
[ 1110.199793] RIP: 0010:reschedule_interrupt+0x0/0x20
[ 1110.199794] Code: 66 2e 0f 1f 84 00 00 00 00 00 68 03 ff ff ff e8 16 ef ff ff e8 41 0c 00 00 e9 0b f0 ff ff 66 90 66 2e 0f 1f 84 00 00 00 00 00 <68> 02 ff ff ff e8 f6 ee ff ff e8 11 0a 00 00 e9 eb ef ff ff 66 90
[ 1110.199795] RSP: 0018:ffffffff82203e68 EFLAGS: 00000046
[ 1110.199796] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 1110.199797] RDX: 0000000000000000 RSI: 0000000000000006 RDI: ffffffff82130a81
[ 1110.199797] RBP: 0000000000000000 R08: 0000000000000001 R09: 00000132948aa3ca
[ 1110.199798] R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff8221b800
[ 1110.199798] R13: 0000000000000000 R14: 0000000000000000 R15: ffffffff8221b800
[ 1110.199799] FS:  0000000000000000(0000) GS:ffff8882a7e00000(0000) knlGS:0000000000000000
[ 1110.199799] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1110.199799] CR2: 00007f7273457008 CR3: 00000002a3740005 CR4: 0000000000760ef0
[ 1110.199800] PKRU: 55555554
[ 1110.199800] Call Trace:
[ 1110.199800] RIP: 0010:mwait_idle+0x6c/0x210
[ 1110.199801] Code: 31 d2 65 48 8b 04 25 c0 4e 01 00 48 89 d1 0f 01 c8 48 8b 00 a8 08 0f 85 8f 01 00 00 e8 8d 5e 7c ff 31 c0 48 89 c1 fb 0f 01 c9 <e8> 8f 06 b2 ff 89 c5 0f 1f 44 00 00 65 48 8b 04 25 c0 4e 01 00 f0
[ 1110.199801] RSP: 0018:ffffffff82203e90 EFLAGS: 00000246
[ 1110.199801]  do_idle+0x1bb/0x260
[ 1110.199802]  cpu_startup_entry+0x19/0x20
[ 1110.199802]  start_kernel+0x4c9/0x4eb
[ 1110.199803]  secondary_startup_64+0xa4/0xb0
[ 1110.200736] NMI backtrace for cpu 1
[ 1110.200737] CPU: 1 PID: 1337 Comm: i915_pm_rpm Not tainted 5.1.0-test+ #55
[ 1110.200737] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
[ 1110.200738] RIP: 0010:io_serial_out+0x16/0x20
[ 1110.200738] Code: 8b 57 38 d3 e6 01 f2 ec 0f b6 c0 c3 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 0f b6 8f e9 00 00 00 89 d0 8b 57 38 d3 e6 01 f2 ee <c3> 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 0f b6 87 ea 00 00 00
[ 1110.200739] RSP: 0018:ffff8882a7e43970 EFLAGS: 00000006
[ 1110.200740] RAX: 0000000000000000 RBX: ffffffff840b5ae0 RCX: 0000000000000000
[ 1110.200740] RDX: 00000000000003f9 RSI: 0000000000000001 RDI: ffffffff840b5ae0
[ 1110.200741] RBP: 0000000000000001 R08: 00000000c9b730e1 R09: 00000000014d2000
[ 1110.200741] R10: 0000000000000063 R11: 0000000000000000 R12: 0000000000000005
[ 1110.200741] R13: ffffffff8373b520 R14: 0000000000000065 R15: 0000000000000046
[ 1110.200742] FS:  00007fb90ece6a40(0000) GS:ffff8882a7e40000(0000) knlGS:0000000000000000
[ 1110.200742] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1110.200743] CR2: 0000556acaa3a478 CR3: 00000002a3740004 CR4: 0000000000760ee0
[ 1110.200743] PKRU: 55555554
[ 1110.200743] Call Trace:
[ 1110.200744]  <IRQ>
[ 1110.200744]  serial8250_console_write+0x15b/0x2b0
[ 1110.200745]  ? console_unlock+0x182/0x5c0
[ 1110.200745]  console_unlock+0x4fb/0x5c0
[ 1110.200745]  vprintk_emit+0x11b/0x330
[ 1110.200746]  printk+0x52/0x6e
[ 1110.200746]  ? drm_calc_vbltimestamp_from_scanoutpos+0x2cf/0x330
[ 1110.200746]  drm_dbg+0x84/0x90
[ 1110.200747]  drm_calc_vbltimestamp_from_scanoutpos+0x2cf/0x330
[ 1110.200747]  drm_get_last_vbltimestamp+0x4d/0x80
[ 1110.200748]  drm_update_vblank_count+0x89/0x260
[ 1110.200748]  drm_handle_vblank+0x95/0x370
[ 1110.200748]  ? __unclaimed_reg_debug+0x15/0x50 [i915]
[ 1110.200749]  gen8_de_irq_handler+0x26b/0x970 [i915]
[ 1110.200749]  gen11_irq_handler+0x146/0x3a0 [i915]
[ 1110.200750]  __handle_irq_event_percpu+0x5e/0x2e0
[ 1110.200750]  handle_irq_event_percpu+0x30/0x70
[ 1110.200750]  handle_irq_event+0x34/0x60
[ 1110.200751]  handle_edge_irq+0xfc/0x1d0
[ 1110.200751]  handle_irq+0x6c/0x170
[ 1110.200751]  do_IRQ+0x63/0x130
[ 1110.200752]  common_interrupt+0xf/0xf
[ 1110.200752] RIP: 0010:__do_softirq+0xc4/0x488
[ 1110.200753] Code: 89 44 24 08 48 89 44 24 18 48 c7 c7 54 dc 13 82 e8 e1 ed 8f ff 65 66 c7 05 c7 14 43 7e 00 00 e8 f2 44 5a ff fb b8 ff ff ff ff <48> c7 c3 00 51 20 82 41 0f bc c6 83 c0 01 89 44 24 04 0f 84 c8 00
[ 1110.200753] RSP: 0018:ffff8882a7e43f70 EFLAGS: 00000206 ORIG_RAX: ffffffffffffffdc
[ 1110.200754] RAX: 00000000ffffffff RBX: ffff8882a34e8040 RCX: 0000000000000001
[ 1110.200754] RDX: 0000000000000002 RSI: ffff8882a34e8908 RDI: ffffffff82130a81
[ 1110.200755] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ 1110.200755] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 1110.200756] R13: 0000000000000000 R14: 0000000000000082 R15: 0000000000000082
[ 1110.200756]  ? common_interrupt+0xa/0xf
[ 1110.200756]  irq_exit+0xb9/0xd0
[ 1110.200757]  smp_apic_timer_interrupt+0xab/0x280
[ 1110.200757]  apic_timer_interrupt+0xf/0x20
[ 1110.200758]  </IRQ>
[ 1110.200758] RIP: 0010:_raw_spin_unlock_irqrestore+0x53/0x60
[ 1110.200759] Code: c7 02 75 1f 53 9d e8 0c 57 7c ff bf 01 00 00 00 e8 92 49 70 ff 65 8b 05 ab 5d 63 7e 85 c0 74 0c 5b 5d c3 e8 ff 54 7c ff 53 9d <eb> df e8 1b 2a 62 ff 5b 5d c3 0f 1f 00 0f 1f 44 00 00 53 48 8b 54
[ 1110.200759] RSP: 0018:ffffc9000051b928 EFLAGS: 00000293 ORIG_RAX: ffffffffffffff13
[ 1110.200760] RAX: ffff8882a34e8040 RBX: 0000000000000293 RCX: 0000000000000001
[ 1110.200760] RDX: 0000000000000006 RSI: ffff8882a34e8908 RDI: ffffffff82130a81
[ 1110.200761] RBP: ffff88829e6702f0 R08: 000000001e282ec8 R09: 00000000d89164fe
[ 1110.200761] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 1110.200762] R13: ffff88829e6702f0 R14: 0000000000000293 R15: ffff88829e7aaea0
[ 1110.200762]  drm_vblank_get+0x67/0xa0
[ 1110.200762]  intel_pipe_update_start+0xf1/0x410 [i915]
[ 1110.200763]  ? wait_woken+0xb0/0xb0
[ 1110.200763]  intel_update_crtc+0xd2/0x430 [i915]
[ 1110.200763]  skl_update_crtcs+0x1ce/0x250 [i915]
[ 1110.200764]  intel_atomic_commit_tail+0x411/0x1420 [i915]
[ 1110.200764]  ? flush_workqueue+0x27e/0x540
[ 1110.200765]  intel_atomic_commit+0x25a/0x2f0 [i915]
[ 1110.200765]  drm_atomic_helper_set_config+0x80/0x90
[ 1110.200765]  drm_mode_setcrtc+0x3cf/0x630
[ 1110.200766]  ? drm_mode_getcrtc+0x180/0x180
[ 1110.200766]  drm_ioctl_kernel+0xb3/0xf0
[ 1110.200767]  drm_ioctl+0x2eb/0x3a0
[ 1110.200767]  ? drm_mode_getcrtc+0x180/0x180
[ 1110.200767]  do_vfs_ioctl+0xa3/0x6f0
[ 1110.200768]  ? vfs_read+0x12f/0x150
[ 1110.200768]  ksys_ioctl+0x36/0x60
[ 1110.200768]  __x64_sys_ioctl+0x16/0x20
[ 1110.200769]  do_syscall_64+0x5a/0x190
[ 1110.200769]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 1110.200770] RIP: 0033:0x7fb90c9e8f47
[ 1110.200770] Code: 00 00 00 48 8b 05 51 6f 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 21 6f 2c 00 f7 d8 64 89 01 48
[ 1110.200771] RSP: 002b:00007ffc3d668fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1110.200771] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007fb90c9e8f47
[ 1110.200772] RDX: 00007ffc3d669000 RSI: 00000000c06864a2 RDI: 0000000000000003
[ 1110.200772] RBP: 00007ffc3d669000 R08: 0000000000000000 R09: 0000000000663c84
[ 1110.200773] R10: 0000000001bda4e0 R11: 0000000000000246 R12: 00000000c06864a2
[ 1110.200773] R13: 0000000000000003 R14: 0000000000000000 R15: 0000000000000000
[ 1110.200778] Kernel panic - not syncing: hung_task: blocked tasks
[ 1110.208958] [drm:drm_update_vblank_count] updating vblank count on crtc 0: current=7335, diff=5, hw=5557 hw_last=5552
[ 1111.315107] Shutting down cpus with NMI
[ 1111.535592] Dumping ftrace buffer:
[ 1111.539043] ---------------------------------
[ 1111.543432] systemd--304     0.... 15054664us : i915_gem_sanitize: 
[ 1111.549715] systemd--304     0.... 15054698us : intel_gt_sanitize: 
[ 1111.555993] systemd--304     0.... 15054700us : intel_gpu_reset: engine_mask=ff
[ 1111.563307] systemd--304     2.... 19997221us : intel_gt_unpark: 
[ 1111.569413] systemd--304     2.... 19997447us : __engine_unpark: rcs0
[ 1111.575868] systemd--304     2.... 19997639us : __i915_request_commit: rcs0 fence a:2
[ 1111.583701] systemd--304     2d..1 19997702us : process_csb: rcs0 cs-irq head=11, tail=11
[ 1111.591886] systemd--304     2d..1 19997704us : __i915_request_submit: rcs0 fence a:2 -> current 0
[ 1111.600844] systemd--304     2d..1 19997731us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=0.1, fence a:2 (current 0), prio=-8186
[ 1111.613000] systemd--304     2.... 19997748us : __engine_unpark: bcs0
[ 1111.619449]   <idle>-0       7..s1 19997790us : execlists_submission_tasklet: rcs0 awake?=1, active=1
[ 1111.628664]   <idle>-0       7d.s2 19997794us : process_csb: rcs0 cs-irq head=11, tail=1
[ 1111.636753]   <idle>-0       7d.s2 19997796us : process_csb: rcs0 csb[0]: status=0x10000001:0x00000000, active=0x1
[ 1111.647088]   <idle>-0       7d.s2 19997797us : process_csb: rcs0 csb[1]: status=0x10000018:0x00000000, active=0x5
[ 1111.657421]   <idle>-0       7d.s2 19997800us : process_csb: rcs0 out[0]: ctx=0.1, fence a:2 (current 2), prio=-8186
[ 1111.667928]   <idle>-0       7d.s2 19997803us : process_csb: rcs0 completed ctx=0
[ 1111.675412]   <idle>-0       7..s1 19997832us : execlists_submission_tasklet: rcs0 awake?=1, active=0
[ 1111.684623]   <idle>-0       7d.s2 19997832us : process_csb: rcs0 cs-irq head=1, tail=1
[ 1111.692631] systemd--304     2.... 19997842us : __i915_request_commit: bcs0 fence d:2
[ 1111.700459] systemd--304     2d..1 19997850us : process_csb: bcs0 cs-irq head=11, tail=11
[ 1111.708638] systemd--304     2d..1 19997851us : __i915_request_submit: bcs0 fence d:2 -> current 0
[ 1111.717589] systemd--304     2d..1 19997856us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=1610612736.1, fence d:2 (current 0), prio=-8186
[ 1111.730517] systemd--304     2.... 19997860us : __engine_unpark: vcs0
[ 1111.736964]   <idle>-0       7..s1 19997942us : execlists_submission_tasklet: bcs0 awake?=1, active=1
[ 1111.746174]   <idle>-0       7d.s2 19997943us : process_csb: bcs0 cs-irq head=11, tail=1
[ 1111.754262]   <idle>-0       7d.s2 19997943us : process_csb: bcs0 csb[0]: status=0x00000001:0x60000000, active=0x1
[ 1111.764596]   <idle>-0       7d.s2 19997943us : process_csb: bcs0 csb[1]: status=0x00000018:0x60000000, active=0x5
[ 1111.774932]   <idle>-0       7d.s2 19997944us : process_csb: bcs0 out[0]: ctx=1610612736.1, fence d:2 (current 2), prio=-8186
[ 1111.786217]   <idle>-0       7d.s2 19997945us : process_csb: bcs0 completed ctx=1610612736
[ 1111.794485] systemd--304     2.... 19997953us : __i915_request_commit: vcs0 fence 10:2
[ 1111.802401] systemd--304     2d..1 19997960us : process_csb: vcs0 cs-irq head=11, tail=11
[ 1111.810583] systemd--304     2d..1 19997961us : __i915_request_submit: vcs0 fence 10:2 -> current 0
[ 1111.819626] systemd--304     2d..1 19997965us : __execlists_submission_tasklet: vcs0 in[0]:  ctx=536870912.1, fence 10:2 (current 0), prio=-8186
[ 1111.832556] systemd--304     2.... 19997969us : __engine_unpark: vcs2
[ 1111.839006]   <idle>-0       7..s1 19998053us : execlists_submission_tasklet: vcs0 awake?=1, active=1
[ 1111.848218]   <idle>-0       7d.s2 19998053us : process_csb: vcs0 cs-irq head=11, tail=1
[ 1111.856309]   <idle>-0       7d.s2 19998053us : process_csb: vcs0 csb[0]: status=0x00000001:0x20000000, active=0x1
[ 1111.866645]   <idle>-0       7d.s2 19998053us : process_csb: vcs0 csb[1]: status=0x00000018:0x20000000, active=0x5
[ 1111.876978]   <idle>-0       7d.s2 19998054us : process_csb: vcs0 out[0]: ctx=536870912.1, fence 10:2 (current 2), prio=-8186
[ 1111.888261]   <idle>-0       7d.s2 19998055us : process_csb: vcs0 completed ctx=536870912
[ 1111.896443] systemd--304     2.... 19998063us : __i915_request_commit: vcs2 fence 13:2
[ 1111.904359] systemd--304     2d..1 19998070us : process_csb: vcs2 cs-irq head=11, tail=11
[ 1111.912537] systemd--304     2d..1 19998071us : __i915_request_submit: vcs2 fence 13:2 -> current 0
[ 1111.921576] systemd--304     2d..1 19998075us : __execlists_submission_tasklet: vcs2 in[0]:  ctx=537001984.1, fence 13:2 (current 0), prio=-8186
[ 1111.934501] systemd--304     2.... 19998079us : __engine_unpark: vecs0
[ 1111.941036]   <idle>-0       7..s1 19998162us : execlists_submission_tasklet: vcs2 awake?=1, active=1
[ 1111.950246]   <idle>-0       7d.s2 19998162us : process_csb: vcs2 cs-irq head=11, tail=1
[ 1111.958333]   <idle>-0       7d.s2 19998163us : process_csb: vcs2 csb[0]: status=0x00000001:0x20020000, active=0x1
[ 1111.968668]   <idle>-0       7d.s2 19998163us : process_csb: vcs2 csb[1]: status=0x00000018:0x20020000, active=0x5
[ 1111.979000]   <idle>-0       7d.s2 19998163us : process_csb: vcs2 out[0]: ctx=537001984.1, fence 13:2 (current 2), prio=-8186
[ 1111.990285]   <idle>-0       7d.s2 19998164us : process_csb: vcs2 completed ctx=537001984
[ 1111.998469] systemd--304     2.... 19998172us : __i915_request_commit: vecs0 fence 16:2
[ 1112.006469] systemd--304     2d..1 19998179us : process_csb: vecs0 cs-irq head=11, tail=11
[ 1112.014731] systemd--304     2d..1 19998180us : __i915_request_submit: vecs0 fence 16:2 -> current 0
[ 1112.023855] systemd--304     2d..1 19998184us : __execlists_submission_tasklet: vecs0 in[0]:  ctx=1073741824.1, fence 16:2 (current 0), prio=-8186
[ 1112.036952]   <idle>-0       7..s1 19998202us : execlists_submission_tasklet: vecs0 awake?=1, active=1
[ 1112.046247]   <idle>-0       7d.s2 19998203us : process_csb: vecs0 cs-irq head=11, tail=0
[ 1112.054423]   <idle>-0       7d.s2 19998205us : process_csb: vecs0 csb[0]: status=0x00000001:0x40000000, active=0x1
[ 1112.064844]   <idle>-0       7..s1 19998221us : execlists_submission_tasklet: vecs0 awake?=1, active=5
[ 1112.074142]   <idle>-0       7d.s2 19998222us : process_csb: vecs0 cs-irq head=0, tail=1
[ 1112.082231]   <idle>-0       7d.s2 19998222us : process_csb: vecs0 csb[1]: status=0x00000018:0x40000000, active=0x5
[ 1112.092654]   <idle>-0       7d.s2 19998223us : process_csb: vecs0 out[0]: ctx=1073741824.1, fence 16:2 (current 2), prio=-8186
[ 1112.104111]   <idle>-0       7d.s2 19998223us : process_csb: vecs0 completed ctx=1073741824
[ 1112.112465] systemd--304     2.... 19998285us : __i915_request_commit: rcs0 fence a:4
[ 1112.120295] systemd--304     2d..1 19998304us : process_csb: rcs0 cs-irq head=1, tail=1
[ 1112.128303] systemd--304     2d..1 19998304us : __i915_request_submit: rcs0 fence a:4 -> current 2
[ 1112.137254] systemd--304     2d..1 19998311us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=0.1, fence a:4 (current 2), prio=-8186
[ 1112.149406] systemd--304     2.... 19998315us : i915_request_retire_upto: rcs0 fence a:2, current 2
[ 1112.158447] systemd--304     2.... 19998316us : i915_request_retire: rcs0 fence a:2, current 2
[ 1112.167059] systemd--304     2.... 19998317us : i915_request_retire: __retire_engine_request(rcs0) fence a:2, current 2
[ 1112.177827]   <idle>-0       7..s1 19998341us : execlists_submission_tasklet: rcs0 awake?=1, active=1
[ 1112.187036]   <idle>-0       7d.s2 19998342us : process_csb: rcs0 cs-irq head=1, tail=2
[ 1112.195035]   <idle>-0       7d.s2 19998342us : process_csb: rcs0 csb[2]: status=0x10000001:0x00000000, active=0x1
[ 1112.205370]   <idle>-0       7..s1 19998362us : execlists_submission_tasklet: rcs0 awake?=1, active=5
[ 1112.214579]   <idle>-0       7d.s2 19998363us : process_csb: rcs0 cs-irq head=2, tail=3
[ 1112.222579]   <idle>-0       7d.s2 19998363us : process_csb: rcs0 csb[3]: status=0x10000018:0x00000000, active=0x5
[ 1112.232913]   <idle>-0       7d.s2 19998364us : process_csb: rcs0 out[0]: ctx=0.1, fence a:4 (current 4), prio=-8186
[ 1112.243423]   <idle>-0       7d.s2 19998364us : process_csb: rcs0 completed ctx=0
[ 1112.250909] systemd--304     2.... 19998705us : __i915_request_commit: rcs0 fence 19:2
[ 1112.258824] systemd--304     2d..1 19998713us : process_csb: rcs0 cs-irq head=3, tail=3
[ 1112.266829] systemd--304     2d..1 19998714us : __i915_request_submit: rcs0 fence 19:2 -> current 0
[ 1112.275871] systemd--304     2d..1 19998717us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=64.1, fence 19:2 (current 0), prio=6
[ 1112.287847]   <idle>-0       7..s1 19998730us : execlists_submission_tasklet: rcs0 awake?=1, active=1
[ 1112.297057]   <idle>-0       7d.s2 19998731us : process_csb: rcs0 cs-irq head=3, tail=4
[ 1112.305058]   <idle>-0       7d.s2 19998731us : process_csb: rcs0 csb[4]: status=0x10000001:0x00000000, active=0x1
[ 1112.315392]   <idle>-0       7..s1 19998774us : execlists_submission_tasklet: rcs0 awake?=1, active=5
[ 1112.324603]   <idle>-0       7d.s2 19998775us : process_csb: rcs0 cs-irq head=4, tail=5
[ 1112.332607]   <idle>-0       7d.s2 19998775us : process_csb: rcs0 csb[5]: status=0x10000018:0x00000040, active=0x5
[ 1112.342943]   <idle>-0       7d.s2 19998776us : process_csb: rcs0 out[0]: ctx=64.1, fence 19:2 (current 2), prio=6
[ 1112.353279]   <idle>-0       7d.s2 19998777us : process_csb: rcs0 completed ctx=64
[ 1112.360853] systemd--304     2.... 19998839us : __i915_request_commit: bcs0 fence 1a:2
[ 1112.368773] systemd--304     2d..1 19998843us : process_csb: bcs0 cs-irq head=1, tail=1
[ 1112.376779] systemd--304     2d..1 19998844us : __i915_request_submit: bcs0 fence 1a:2 -> current 0
[ 1112.385819] systemd--304     2d..1 19998845us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=1610612800.1, fence 1a:2 (current 0), prio=6
[ 1112.398486]   <idle>-0       7..s1 19998858us : execlists_submission_tasklet: bcs0 awake?=1, active=1
[ 1112.407699]   <idle>-0       7d.s2 19998859us : process_csb: bcs0 cs-irq head=1, tail=3
[ 1112.415702]   <idle>-0       7d.s2 19998859us : process_csb: bcs0 csb[2]: status=0x00000001:0x60000000, active=0x1
[ 1112.426039]   <idle>-0       7d.s2 19998859us : process_csb: bcs0 csb[3]: status=0x00000018:0x60000040, active=0x5
[ 1112.436374]   <idle>-0       7d.s2 19998860us : process_csb: bcs0 out[0]: ctx=1610612800.1, fence 1a:2 (current 2), prio=6
[ 1112.447402]   <idle>-0       7d.s2 19998861us : process_csb: bcs0 completed ctx=1610612800
[ 1112.455661]   <idle>-0       7..s1 19998867us : execlists_submission_tasklet: bcs0 awake?=1, active=0
[ 1112.464870]   <idle>-0       7d.s2 19998868us : process_csb: bcs0 cs-irq head=3, tail=3
[ 1112.472875] systemd--304     2.... 19998964us : __i915_request_commit: vcs0 fence 1b:2
[ 1112.480791] systemd--304     2d..1 19998969us : process_csb: vcs0 cs-irq head=1, tail=1
[ 1112.488796] systemd--304     2d..1 19998969us : __i915_request_submit: vcs0 fence 1b:2 -> current 0
[ 1112.497835] systemd--304     2d..1 19998970us : __execlists_submission_tasklet: vcs0 in[0]:  ctx=536870976.1, fence 1b:2 (current 0), prio=6
[ 1112.510414]   <idle>-0       7..s1 19998990us : execlists_submission_tasklet: vcs0 awake?=1, active=1
[ 1112.519622]   <idle>-0       7d.s2 19998993us : process_csb: vcs0 cs-irq head=1, tail=2
[ 1112.527619]   <idle>-0       7d.s2 19998993us : process_csb: vcs0 csb[2]: status=0x00000001:0x20000000, active=0x1
[ 1112.537952]   <idle>-0       7..s1 19999006us : execlists_submission_tasklet: vcs0 awake?=1, active=5
[ 1112.547161]   <idle>-0       7d.s2 19999008us : process_csb: vcs0 cs-irq head=2, tail=3
[ 1112.555159]   <idle>-0       7d.s2 19999009us : process_csb: vcs0 csb[3]: status=0x00000018:0x20000040, active=0x5
[ 1112.565493]   <idle>-0       7d.s2 19999010us : process_csb: vcs0 out[0]: ctx=536870976.1, fence 1b:2 (current 2), prio=6
[ 1112.576432]   <idle>-0       7d.s2 19999010us : process_csb: vcs0 completed ctx=536870976
[ 1112.584613] systemd--304     2.... 19999099us : __i915_request_commit: vcs2 fence 1c:2
[ 1112.592529] systemd--304     2d..1 19999103us : process_csb: vcs2 cs-irq head=1, tail=1
[ 1112.600533] systemd--304     2d..1 19999104us : __i915_request_submit: vcs2 fence 1c:2 -> current 0
[ 1112.609570] systemd--304     2d..1 19999105us : __execlists_submission_tasklet: vcs2 in[0]:  ctx=537002048.1, fence 1c:2 (current 0), prio=6
[ 1112.622151]   <idle>-0       7..s1 19999119us : execlists_submission_tasklet: vcs2 awake?=1, active=1
[ 1112.631360]   <idle>-0       7d.s2 19999119us : process_csb: vcs2 cs-irq head=1, tail=3
[ 1112.639362]   <idle>-0       7d.s2 19999120us : process_csb: vcs2 csb[2]: status=0x00000001:0x20020000, active=0x1
[ 1112.649696]   <idle>-0       7d.s2 19999120us : process_csb: vcs2 csb[3]: status=0x00000018:0x20020040, active=0x5
[ 1112.660030]   <idle>-0       7d.s2 19999121us : process_csb: vcs2 out[0]: ctx=537002048.1, fence 1c:2 (current 2), prio=6
[ 1112.670970]   <idle>-0       7d.s2 19999121us : process_csb: vcs2 completed ctx=537002048
[ 1112.679143]   <idle>-0       7..s1 19999125us : execlists_submission_tasklet: vcs2 awake?=1, active=0
[ 1112.688353]   <idle>-0       7d.s2 19999125us : process_csb: vcs2 cs-irq head=3, tail=3
[ 1112.696360] systemd--304     2.... 19999214us : __i915_request_commit: vecs0 fence 1d:2
[ 1112.704361] systemd--304     2d..1 19999218us : process_csb: vecs0 cs-irq head=1, tail=1
[ 1112.712452] systemd--304     2d..1 19999218us : __i915_request_submit: vecs0 fence 1d:2 -> current 0
[ 1112.721577] systemd--304     2d..1 19999220us : __execlists_submission_tasklet: vecs0 in[0]:  ctx=1073741888.1, fence 1d:2 (current 0), prio=6
[ 1112.734336] systemd--304     2.... 19999222us : i915_gem_wait_for_idle: flags=12 (locked), timeout=200, awake?=yes
[ 1112.744670]   <idle>-0       7..s1 19999233us : execlists_submission_tasklet: vecs0 awake?=1, active=1
[ 1112.753966]   <idle>-0       7d.s2 19999234us : process_csb: vecs0 cs-irq head=1, tail=3
[ 1112.762053]   <idle>-0       7d.s2 19999234us : process_csb: vecs0 csb[2]: status=0x00000001:0x40000000, active=0x1
[ 1112.772472]   <idle>-0       7d.s2 19999234us : process_csb: vecs0 csb[3]: status=0x00000018:0x40000040, active=0x5
[ 1112.782889]   <idle>-0       7d.s2 19999235us : process_csb: vecs0 out[0]: ctx=1073741888.1, fence 1d:2 (current 2), prio=6
[ 1112.794005]   <idle>-0       7d.s2 19999235us : process_csb: vecs0 completed ctx=1073741888
[ 1112.802352]   <idle>-0       7..s1 19999239us : execlists_submission_tasklet: vecs0 awake?=1, active=0
[ 1112.811649]   <idle>-0       7d.s2 19999239us : process_csb: vecs0 cs-irq head=3, tail=3
[ 1112.819746] systemd--304     2.... 19999374us : i915_request_retire: vecs0 fence 1d:2, current 2
[ 1112.828530] systemd--304     2.... 19999378us : i915_request_retire: __retire_engine_request(vecs0) fence 16:2, current 2
[ 1112.839473] systemd--304     2.... 19999382us : i915_request_retire: __retire_engine_request(vecs0) fence 1d:2, current 2
[ 1112.850415] systemd--304     2.... 19999394us : i915_request_retire: vcs2 fence 1c:2, current 2
[ 1112.859112] systemd--304     2.... 19999394us : i915_request_retire: __retire_engine_request(vcs2) fence 13:2, current 2
[ 1112.869970] systemd--304     2.... 19999395us : i915_request_retire: __retire_engine_request(vcs2) fence 1c:2, current 2
[ 1112.880828] systemd--304     2.... 19999401us : i915_request_retire: vcs0 fence 1b:2, current 2
[ 1112.889521] systemd--304     2.... 19999401us : i915_request_retire: __retire_engine_request(vcs0) fence 10:2, current 2
[ 1112.900376] systemd--304     2.... 19999402us : i915_request_retire: __retire_engine_request(vcs0) fence 1b:2, current 2
[ 1112.911232] systemd--304     2.... 19999407us : i915_request_retire: bcs0 fence 1a:2, current 2
[ 1112.919927] systemd--304     2.... 19999408us : i915_request_retire: __retire_engine_request(bcs0) fence d:2, current 2
[ 1112.930695] systemd--304     2.... 19999409us : i915_request_retire: __retire_engine_request(bcs0) fence 1a:2, current 2
[ 1112.941549] systemd--304     2.... 19999414us : i915_request_retire: rcs0 fence 19:2, current 2
[ 1112.950248] systemd--304     2.... 19999414us : i915_request_retire: __retire_engine_request(rcs0) fence a:4, current 4
[ 1112.961016] systemd--304     2.... 19999416us : i915_request_retire: __retire_engine_request(rcs0) fence 19:2, current 2
[ 1112.971870] systemd--304     2.... 19999421us : i915_request_retire: vecs0 fence 16:2, current 2
[ 1112.980654] systemd--304     2.... 19999460us : __i915_request_commit: vecs0 fence 16:4
[ 1112.988656] systemd--304     2d..1 19999473us : process_csb: vecs0 cs-irq head=3, tail=3
[ 1112.996747] systemd--304     2d..1 19999474us : __i915_request_submit: vecs0 fence 16:4 -> current 2
[ 1113.005870] systemd--304     2d..1 19999481us : __execlists_submission_tasklet: vecs0 in[0]:  ctx=1073741824.1, fence 16:4 (current 2), prio=-8186
[ 1113.018969] systemd--304     2.... 19999489us : i915_request_retire: vcs2 fence 13:2, current 2
[ 1113.027664]   <idle>-0       7..s1 19999493us : execlists_submission_tasklet: vecs0 awake?=1, active=1
[ 1113.036965]   <idle>-0       7d.s2 19999494us : process_csb: vecs0 cs-irq head=3, tail=5
[ 1113.045051]   <idle>-0       7d.s2 19999494us : process_csb: vecs0 csb[4]: status=0x00000001:0x40000000, active=0x1
[ 1113.055468]   <idle>-0       7d.s2 19999494us : process_csb: vecs0 csb[5]: status=0x00000018:0x40000000, active=0x5
[ 1113.065895] systemd--304     2.... 19999494us : __i915_request_commit: vcs2 fence 13:4
[ 1113.073810]   <idle>-0       7d.s2 19999495us : process_csb: vecs0 out[0]: ctx=1073741824.1, fence 16:4 (current 4), prio=-8186
[ 1113.085269]   <idle>-0       7d.s2 19999495us : process_csb: vecs0 completed ctx=1073741824
[ 1113.093618]   <idle>-0       7..s1 19999498us : execlists_submission_tasklet: vecs0 awake?=1, active=0
[ 1113.102915]   <idle>-0       7d.s2 19999498us : process_csb: vecs0 cs-irq head=5, tail=5
[ 1113.111000] systemd--304     2d..1 19999503us : process_csb: vcs2 cs-irq head=3, tail=3
[ 1113.119005] systemd--304     2d..1 19999503us : __i915_request_submit: vcs2 fence 13:4 -> current 2
[ 1113.128045] systemd--304     2d..1 19999508us : __execlists_submission_tasklet: vcs2 in[0]:  ctx=537001984.1, fence 13:4 (current 2), prio=-8186
[ 1113.140978] systemd--304     2.... 19999514us : i915_request_retire: vcs0 fence 10:2, current 2
[ 1113.149674] systemd--304     2.... 19999519us : __i915_request_commit: vcs0 fence 10:4
[ 1113.157587]   <idle>-0       7..s1 19999519us : execlists_submission_tasklet: vcs2 awake?=1, active=1
[ 1113.166797]   <idle>-0       7d.s2 19999520us : process_csb: vcs2 cs-irq head=3, tail=5
[ 1113.174799]   <idle>-0       7d.s2 19999520us : process_csb: vcs2 csb[4]: status=0x00000001:0x20020000, active=0x1
[ 1113.185133]   <idle>-0       7d.s2 19999520us : process_csb: vcs2 csb[5]: status=0x00000018:0x20020000, active=0x5
[ 1113.195466]   <idle>-0       7d.s2 19999521us : process_csb: vcs2 out[0]: ctx=537001984.1, fence 13:4 (current 4), prio=-8186
[ 1113.206748]   <idle>-0       7d.s2 19999521us : process_csb: vcs2 completed ctx=537001984
[ 1113.214921]   <idle>-0       7..s1 19999524us : execlists_submission_tasklet: vcs2 awake?=1, active=0
[ 1113.224130]   <idle>-0       7d.s2 19999524us : process_csb: vcs2 cs-irq head=5, tail=5
[ 1113.232130] systemd--304     2d..1 19999527us : process_csb: vcs0 cs-irq head=3, tail=3
[ 1113.240137] systemd--304     2d..1 19999527us : __i915_request_submit: vcs0 fence 10:4 -> current 2
[ 1113.249177] systemd--304     2d..1 19999532us : __execlists_submission_tasklet: vcs0 in[0]:  ctx=536870912.1, fence 10:4 (current 2), prio=-8186
[ 1113.262110] systemd--304     2.... 19999537us : i915_request_retire: bcs0 fence d:2, current 2
[ 1113.270725] systemd--304     2.... 19999542us : __i915_request_commit: bcs0 fence d:4
[ 1113.278559]   <idle>-0       7..s1 19999544us : execlists_submission_tasklet: vcs0 awake?=1, active=1
[ 1113.287773]   <idle>-0       7d.s2 19999544us : process_csb: vcs0 cs-irq head=3, tail=5
[ 1113.295777]   <idle>-0       7d.s2 19999545us : process_csb: vcs0 csb[4]: status=0x00000001:0x20000000, active=0x1
[ 1113.306112]   <idle>-0       7d.s2 19999545us : process_csb: vcs0 csb[5]: status=0x00000018:0x20000000, active=0x5
[ 1113.316450]   <idle>-0       7d.s2 19999545us : process_csb: vcs0 out[0]: ctx=536870912.1, fence 10:4 (current 4), prio=-8186
[ 1113.327738]   <idle>-0       7d.s2 19999546us : process_csb: vcs0 completed ctx=536870912
[ 1113.335915]   <idle>-0       7..s1 19999549us : execlists_submission_tasklet: vcs0 awake?=1, active=0
[ 1113.345128]   <idle>-0       7d.s2 19999549us : process_csb: vcs0 cs-irq head=5, tail=5
[ 1113.353131] systemd--304     2d..1 19999550us : process_csb: bcs0 cs-irq head=3, tail=3
[ 1113.361138] systemd--304     2d..1 19999551us : __i915_request_submit: bcs0 fence d:4 -> current 2
[ 1113.370093] systemd--304     2d..1 19999556us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=1610612736.1, fence d:4 (current 2), prio=-8186
[ 1113.383022]   <idle>-0       7..s1 19999567us : execlists_submission_tasklet: bcs0 awake?=1, active=1
[ 1113.392235]   <idle>-0       7d.s2 19999568us : process_csb: bcs0 cs-irq head=3, tail=5
[ 1113.400238]   <idle>-0       7d.s2 19999568us : process_csb: bcs0 csb[4]: status=0x00000001:0x60000000, active=0x1
[ 1113.410574]   <idle>-0       7d.s2 19999568us : process_csb: bcs0 csb[5]: status=0x00000018:0x60000000, active=0x5
[ 1113.420911]   <idle>-0       7d.s2 19999569us : process_csb: bcs0 out[0]: ctx=1610612736.1, fence d:4 (current 4), prio=-8186
[ 1113.432197]   <idle>-0       7d.s2 19999569us : process_csb: bcs0 completed ctx=1610612736
[ 1113.440466] systemd--304     2.... 19999572us : i915_request_retire: rcs0 fence a:4, current 4
[ 1113.449073]   <idle>-0       7..s1 19999572us : execlists_submission_tasklet: bcs0 awake?=1, active=0
[ 1113.458286]   <idle>-0       7d.s2 19999572us : process_csb: bcs0 cs-irq head=5, tail=5
[ 1113.466292] systemd--304     2.... 19999577us : __i915_request_commit: rcs0 fence a:6
[ 1113.474121] systemd--304     2d..1 19999585us : process_csb: rcs0 cs-irq head=5, tail=5
[ 1113.482127] systemd--304     2d..1 19999585us : __i915_request_submit: rcs0 fence a:6 -> current 4
[ 1113.491082] systemd--304     2d..1 19999590us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=0.1, fence a:6 (current 4), prio=-8186
[ 1113.503235] systemd--304     2.... 19999596us : i915_request_retire: bcs0 fence d:4, current 4
[ 1113.511847] systemd--304     2.... 19999596us : i915_request_retire: __retire_engine_request(bcs0) fence d:4, current 4
[ 1113.522612]   <idle>-0       7..s1 19999608us : execlists_submission_tasklet: rcs0 awake?=1, active=1
[ 1113.531820]   <idle>-0       7d.s2 19999608us : process_csb: rcs0 cs-irq head=5, tail=6
[ 1113.539820]   <idle>-0       7d.s2 19999609us : process_csb: rcs0 csb[6]: status=0x10000001:0x00000000, active=0x1
[ 1113.550154]   <idle>-0       7..s1 19999616us : execlists_submission_tasklet: rcs0 awake?=1, active=5
[ 1113.559367]   <idle>-0       7d.s2 19999617us : process_csb: rcs0 cs-irq head=6, tail=7
[ 1113.567370]   <idle>-0       7d.s2 19999617us : process_csb: rcs0 csb[7]: status=0x10000018:0x00000000, active=0x5
[ 1113.577708]   <idle>-0       7d.s2 19999618us : process_csb: rcs0 out[0]: ctx=0.1, fence a:6 (current 6), prio=-8186
[ 1113.588216]   <idle>-0       7d.s2 19999618us : process_csb: rcs0 completed ctx=0
[ 1113.595702] systemd--304     2.... 19999642us : __engine_park: bcs0
[ 1113.601981] systemd--304     2.... 19999670us : i915_request_retire: vcs0 fence 10:4, current 4
[ 1113.610681] systemd--304     2.... 19999670us : i915_request_retire: __retire_engine_request(vcs0) fence 10:4, current 4
[ 1113.621535] systemd--304     2.... 19999677us : __engine_park: vcs0
[ 1113.627813] systemd--304     2.... 19999689us : i915_request_retire: vcs2 fence 13:4, current 4
[ 1113.636507] systemd--304     2.... 19999690us : i915_request_retire: __retire_engine_request(vcs2) fence 13:4, current 4
[ 1113.647362] systemd--304     2.... 19999696us : __engine_park: vcs2
[ 1113.653639] systemd--304     2.... 19999708us : i915_request_retire: vecs0 fence 16:4, current 4
[ 1113.662422] systemd--304     2.... 19999709us : i915_request_retire: __retire_engine_request(vecs0) fence 16:4, current 4
[ 1113.673360] systemd--304     2.... 19999714us : __engine_park: vecs0
[ 1113.679723] systemd--304     2.... 19999726us : i915_gem_wait_for_idle: flags=12 (locked), timeout=200, awake?=yes
[ 1113.690060] systemd--304     2.... 19999737us : i915_request_retire: rcs0 fence a:6, current 6
[ 1113.698665] systemd--304     2.... 19999738us : i915_request_retire: __retire_engine_request(rcs0) fence a:6, current 6
[ 1113.709429] systemd--304     2.... 19999744us : __engine_park: rcs0
[ 1113.715705] systemd--304     2.... 19999769us : intel_gt_park: 
[ 1113.721612] ---------------------------------
[ 1113.725952] Kernel Offset: disabled
[ 1113.729428] CPU: 5 PID: 52 Comm: khungtaskd Not tainted 5.1.0-test+ #55
[ 1113.736016] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
[ 1113.749254] Call Trace:
[ 1113.751699]  dump_stack+0x5e/0x8b
[ 1113.755005]  panic+0x130/0x2a8
[ 1113.758052]  ? nmi_trigger_cpumask_backtrace+0x96/0x100
[ 1113.763260]  watchdog+0x36e/0x660
[ 1113.766568]  kthread+0x103/0x140
[ 1113.769785]  ? hungtask_pm_notify+0x40/0x40
[ 1113.773956]  ? kthread_park+0x90/0x90
[ 1113.777608]  ret_from_fork+0x24/0x50
[ 1114.161221] ---[ end Kernel panic - not syncing: hung_task: blocked tasks ]---
[ 1114.168449] ------------[ cut here ]------------
[ 1114.173057] sched: Unexpected reschedule of offline CPU#0!
[ 1114.178529] WARNING: CPU: 5 PID: 52 at /home/anshuma1/drm-intel/drm-tip/arch/x86/kernel/smp.c:128 native_smp_send_reschedule+0x39/0x40
[ 1114.190569] Modules linked in: snd_hda_codec_hdmi x86_pkg_temp_thermal snd_hda_codec_realtek coretemp snd_hda_codec_generic snd_hda_intel snd_hda_codec crct10dif_pclmul snd_hwdep crc32_pclmul mei_hdcp snd_hda_core mei_me ghash_clmulni_intel snd_pcm mei ptp pps_core i915 prime_numbers [last unloaded: e1000e]
[ 1114.217648] CPU: 5 PID: 52 Comm: khungtaskd Not tainted 5.1.0-test+ #55
[ 1114.224236] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3121.A00.1903190527 03/19/2019
[ 1114.237476] RIP: 0010:native_smp_send_reschedule+0x39/0x40
[ 1114.242941] Code: 0f 92 c0 84 c0 74 15 48 8b 05 f3 c0 11 01 be fd 00 00 00 48 8b 40 30 e9 15 17 bc 00 89 fe 48 c7 c7 68 b6 07 82 e8 47 2d 07 00 <0f> 0b c3 0f 1f 40 00 0f 1f 44 00 00 8b 05 15 06 85 01 85 c0 75 30
[ 1114.261622] RSP: 0018:ffff8882a7f43ef0 EFLAGS: 00010082
[ 1114.266826] RAX: 0000000000000000 RBX: ffff8882a5ca0040 RCX: 0000000000000001
[ 1114.273928] RDX: 0000000000000007 RSI: 0000000000000001 RDI: 0000000000000001
[ 1114.281036] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ 1114.288144] R10: 0000000000000000 R11: 0000000000000000 R12: 0000010310d1e50d
[ 1114.295248] R13: ffffffff8115e020 R14: 0000000000000002 R15: 0000000000000001
[ 1114.302356] FS:  0000000000000000(0000) GS:ffff8882a7f40000(0000) knlGS:0000000000000000
[ 1114.310415] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1114.316137] CR2: 00007f7272c92000 CR3: 00000002a111e002 CR4: 0000000000760ee0
[ 1114.323246] PKRU: 55555554
[ 1114.325943] Call Trace:
[ 1114.328382]  <IRQ>
[ 1114.330393]  update_process_times+0x40/0x50
[ 1114.334565]  tick_sched_timer+0x3b/0x80
[ 1114.338389]  __hrtimer_run_queues+0x120/0x4b0
[ 1114.342735]  hrtimer_interrupt+0x12b/0x270
[ 1114.346820]  ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 1114.351508]  smp_apic_timer_interrupt+0x85/0x280
[ 1114.356113]  apic_timer_interrupt+0xf/0x20
[ 1114.360199]  </IRQ>
[ 1114.362295] RIP: 0010:panic+0x25a/0x2a8
[ 1114.366117] Code: a6 83 3d 60 c2 7d 01 00 74 05 e8 d9 9e 02 00 48 c7 c6 60 f1 88 82 48 c7 c7 28 26 08 82 e8 f4 ac 07 00 e8 c1 16 0f 00 fb 31 db <e8> e9 08 0d 00 4c 39 eb 7c 1d 41 83 f4 01 48 8b 05 19 c2 7d 01 44
[ 1114.384802] RSP: 0018:ffffc9000027be40 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
[ 1114.392343] RAX: 0000000080000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 1114.399449] RDX: 0000000000000000 RSI: ffffffff8112d32f RDI: ffffffff810b2eef
[ 1114.406553] RBP: ffffc9000027beb8 R08: 0000000000000000 R09: 0000000000000000
[ 1114.413660] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 1114.420767] R13: 0000000000000000 R14: 00000000003fff63 R15: ffffffff8221bbd8
[ 1114.427876]  ? wake_up_klogd+0x4f/0x70
[ 1114.431618]  ? panic+0x257/0x2a8
[ 1114.434838]  ? nmi_trigger_cpumask_backtrace+0x96/0x100
[ 1114.440045]  watchdog+0x36e/0x660
[ 1114.443352]  kthread+0x103/0x140
[ 1114.446569]  ? hungtask_pm_notify+0x40/0x40
[ 1114.450739]  ? kthread_park+0x90/0x90
[ 1114.454389]  ret_from_fork+0x24/0x50
[ 1114.457955] irq event stamp: 1184
[ 1114.461260] hardirqs last  enabled at (1183): [<ffffffff819df0b1>] _raw_spin_unlock_irqrestore+0x51/0x60
[ 1114.470699] hardirqs last disabled at (1184): [<ffffffff810b2ce7>] panic+0x4f/0x2a8
[ 1114.478325] softirqs last  enabled at (1150): [<ffffffff81c00239>] __do_softirq+0x239/0x488
[ 1114.486641] softirqs last disabled at (1143): [<ffffffff810ba639>] irq_exit+0xb9/0xd0
[ 1114.494441] WARNING: CPU: 5 PID: 52 at /home/anshuma1/drm-intel/drm-tip/arch/x86/kernel/smp.c:128 native_smp_send_reschedule+0x39/0x40
[ 1114.506471] ---[ end trace bfd39e23785aebc5 ]---



This crash is reproduced with the ACPI idle driver disabled, so we may need to revisit the bug assessment.
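For reference, a minimal sketch of inspecting the hung-task watchdog configuration that turns a blocked kworker into the "hung_task: blocked tasks" panic seen above. The sysctl names are the standard khungtaskd knobs; whether they exist depends on the kernel being built with CONFIG_DETECT_HUNG_TASK.

```shell
# Sketch: print the hung-task watchdog knobs (khungtaskd). hung_task_panic=1
# is what makes a blocked task panic the machine, as in the log above.
# Falls back gracefully when the kernel lacks CONFIG_DETECT_HUNG_TASK.
for k in hung_task_timeout_secs hung_task_panic; do
    f=/proc/sys/kernel/$k
    if [ -r "$f" ]; then
        echo "$k=$(cat "$f")"
    else
        echo "$k: not available on this kernel"
    fi
done
```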
Comment 40 Jani Nikula 2019-05-16 13:03:45 UTC
(In reply to Anshuman Gupta from comment #39)
> Tried a local reproduction (we found the same pattern: i915_gem_idle_thread
> hung).
> And it all started from a CPU stall; possibly because of that, a kworker
> hung and the system panicked (see below: INFO: task kworker/2:1:62 blocked
> for more than 61 seconds.).
> 
> 
> [ 1089.154465] rcu: INFO: rcu_preempt self-detected stall on CPU
...
> [ 1114.506471] ---[ end trace bfd39e23785aebc5 ]---

Please *attach* long logs, half the bug is now this one comment!

> This crash is reproduced with the ACPI idle driver disabled, so we may need
> to revisit the bug assessment.

Jani S. asked me to drop 99b69db57544 ("ICL HACK: Disable ACPI idle driver") from topic/core-for-CI. But there seem to be some conflicting views here.

Chris?
Comment 41 Chris Wilson 2019-05-21 12:08:56 UTC
(In reply to Jani Nikula from comment #40)
> Chris?

It's getting stuck inside the vblank interrupt handling code. So either we are not switching off the hw correctly for rpm, or we are not waking it up correctly?
Comment 42 Ville Syrjala 2019-05-21 12:15:24 UTC
I'm not sure what's going on here, but at least the log above is not particularly useful since it has vblank debugs on (meaning a lot of printks) and it seems to also have serial console enabled (which is slow). That particular combination is not a great idea.
Comment 43 Anshuman Gupta 2019-05-23 13:49:49 UTC
http://gfx-ci.fi.intel.com/cibuglog-ng/issue/929
The CI bug log indicates new failures of this bug, but most of the time there are no pstore logs.
This makes me suspect it could be the e1000e hang I mentioned earlier in comment 23: we see a hang in the e1000e module on our DUT after running the i915_pm_rpm hibernation and system-suspend tests.
Due to the e1000e hang we lost the network ssh console, but the DUT was actually still running; it was not a crash, the DUT just stopped responding on the network.

Can we try running the i915_pm_rpm tests after blacklisting the e1000e module?
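For the record, a sketch of how that blacklisting could be done, assuming a standard modprobe.d setup. On a real DUT the file would go in /etc/modprobe.d/; a temp path (via the hypothetical MODPROBE_D variable) is used here so the snippet runs unprivileged.

```shell
# Sketch: prevent e1000e from being autoloaded. "blacklist" stops alias-based
# autoloading; the "install ... /bin/false" line also blocks explicit modprobe.
conf_dir="${MODPROBE_D:-/tmp/modprobe.d}"   # /etc/modprobe.d on a real system
mkdir -p "$conf_dir"
printf 'blacklist e1000e\ninstall e1000e /bin/false\n' \
    > "$conf_dir/blacklist-e1000e.conf"
cat "$conf_dir/blacklist-e1000e.conf"
```

Alternatively, `modprobe.blacklist=e1000e` on the kernel command line achieves the same without touching the filesystem.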
Comment 44 Martin Peres 2019-05-24 07:44:27 UTC
(In reply to Anshuman Gupta from comment #43)
> http://gfx-ci.fi.intel.com/cibuglog-ng/issue/929
> The CI bug log indicates new failures of this bug, but most of the time
> there are no pstore logs.
> This makes me suspect it could be the e1000e hang I mentioned earlier in
> comment 23: we see a hang in the e1000e module on our DUT after running the
> i915_pm_rpm hibernation and system-suspend tests.
> Due to the e1000e hang we lost the network ssh console, but the DUT was
> actually still running; it was not a crash, the DUT just stopped responding
> on the network.
> 
> Can we try running the i915_pm_rpm tests after blacklisting the e1000e
> module?

We cannot blacklist the module of our network card :D But I guess we'll need to try using some USB network cards and see if this decreases the reproduction rate.

Tomi will be servicing the shards next week, we'll make sure he uses the USB adapters as much as possible!
Comment 45 Jani Saarinen 2019-05-24 07:49:05 UTC
Actually, aren't we always using USB network dongles on ICL? I think we are.
Comment 46 Tomi Sarvela 2019-05-24 07:53:10 UTC
Yes.

RVPs generally have e1000e NICs with identical, unflashable MAC addresses (88:87:...), so it's least hassle to use USB dongles for them. This also helps with the HW issues of e1000e.

The e1000e driver doesn't need to be loaded on RVPs, but no extra blacklisting is done either, to keep the "user" defaults.
Comment 47 Jani Saarinen 2019-06-12 14:00:18 UTC
Dropping priority as the developer cannot reproduce.
Comment 48 Juha-Pekka Heikkilä 2019-06-18 09:59:08 UTC
Created attachment 144585 [details]
pstore panic on ext4

The pstore comes from here: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6277/shard-iclb2/igt@i915_pm_rpm@universal-planes-dpms.html

It ends with problems on ext4:

<0>[  322.774197] ---------------------------------
<0>[  322.774198] Kernel Offset: disabled
<4>[  322.774201] CPU: 3 PID: 491 Comm: rs:main Q:Reg Tainted: G     U            5.2.0-rc4-CI-CI_DRM_6277+ #1
<4>[  322.774202] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3183.A00.1905020411 05/02/2019
<4>[  322.774202] Call Trace:
<4>[  322.774207]  dump_stack+0x67/0x9b
<4>[  322.774210]  panic+0x12b/0x2cd
<4>[  322.774216]  __ext4_abort+0x1ac/0x1c0
<4>[  322.774223]  ? ext4_dirty_inode+0x28/0x60
<4>[  322.774225]  ext4_journal_check_start+0x75/0x90
<4>[  322.774227]  __ext4_journal_start_sb+0x46/0x1a0
<4>[  322.774230]  ext4_dirty_inode+0x28/0x60
<4>[  322.774232]  __mark_inode_dirty+0x6a/0x5f0
<4>[  322.774236]  generic_update_time+0x97/0xc0
<4>[  322.774238]  file_update_time+0xdb/0x120
<4>[  322.774243]  __generic_file_write_iter+0x94/0x190
<4>[  322.774246]  ext4_file_write_iter+0x149/0x390
<4>[  322.774251]  new_sync_write+0x10d/0x1b0
<4>[  322.774257]  vfs_write+0xbd/0x1b0
<4>[  322.774259]  ksys_write+0x8f/0xe0
<4>[  322.774263]  do_syscall_64+0x55/0x1c0
<4>[  322.774266]  entry_SYSCALL_64_after_hwframe+0x49/0xbe

Maybe this problem is related to power management?
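Since pstore records are consumed once read (and the comment above notes they are often missing), a sketch of archiving them right after a reboot. It assumes the standard /sys/fs/pstore mount; the archive path (PSTORE_ARCHIVE) is made up for illustration.

```shell
# Sketch: copy any pstore crash records out before they are rotated/cleared.
pstore_dir=/sys/fs/pstore
out_dir="${PSTORE_ARCHIVE:-/tmp/pstore-archive}"   # hypothetical archive path
mkdir -p "$out_dir"
if [ -d "$pstore_dir" ] && [ -n "$(ls -A "$pstore_dir" 2>/dev/null)" ]; then
    cp "$pstore_dir"/* "$out_dir"/
    echo "archived $(ls "$out_dir" | wc -l) record(s)"
else
    echo "no pstore records found"
fi
```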
Comment 49 Martin Peres 2019-11-29 18:00:18 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/drm/intel/issues/189.


Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. How we collect and use information is described in our Privacy Policy.