Bug 108972 - [CI][DRMTIP] [SKL only] igt@* - incomplete - random tests
Summary: [CI][DRMTIP] [SKL only] igt@* - incomplete - random tests
Status: RESOLVED MOVED
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other All
Priority: high   Severity: normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Depends on:
Blocks:
 
Reported: 2018-12-07 17:49 UTC by Lakshmi
Modified: 2019-11-29 18:01 UTC
CC: 1 user

See Also:
i915 platform: SKL
i915 features:


Attachments: none

Comment 1 Lakshmi 2018-12-07 17:50:40 UTC
pstore - panic logs

<0>[  101.099541] gem_exec-1139    7.... 69390932us : i915_request_retire: marking gem_exec_schedu[1139]/3 as inactive
<0>[  101.100847] gem_exec-1139    7.... 69390937us : i915_request_retire: __retire_engine_request(rcs0) fence 1d:5, global=8, current 9
<0>[  101.102163] gem_exec-1139    7.... 69390939us : i915_request_retire: __retire_engine_request(rcs0) fence 21:4, global=9, current 9
<0>[  101.103491]   <idle>-0       6..s1 69390985us : execlists_submission_tasklet: rcs0 awake?=1, active=5
<0>[  101.104808]   <idle>-0       6d.s2 69390986us : process_csb: rcs0 cs-irq head=3, tail=3
<0>[  101.106119]   <idle>-0       6..s1 69391047us : execlists_submission_tasklet: rcs0 awake?=1, active=5
<0>[  101.107448]   <idle>-0       6d.s2 69391048us : process_csb: rcs0 cs-irq head=3, tail=4
<0>[  101.108762]   <idle>-0       6d.s2 69391049us : process_csb: rcs0 csb[4]: status=0x00000018:0x00000002, active=0x5
<0>[  101.110098]   <idle>-0       6d.s2 69391050us : process_csb: rcs0 out[0]: ctx=2.1, global=10 (fence 19:1) (current 10), prio=2
<0>[  101.111441]   <idle>-0       6d.s2 69391050us : process_csb: rcs0 completed ctx=2
<0>[  101.112769]   <idle>-0       6..s1 69391051us : execlists_submission_tasklet: bcs0 awake?=1, active=5
<0>[  101.114104]   <idle>-0       6d.s2 69391052us : process_csb: bcs0 cs-irq head=1, tail=2
<0>[  101.115428]   <idle>-0       6d.s2 69391052us : process_csb: bcs0 csb[2]: status=0x00000018:0x00000002, active=0x5
<0>[  101.116763]   <idle>-0       6d.s2 69391053us : process_csb: bcs0 out[0]: ctx=2.1, global=6 (fence 1b:1) (current 6), prio=2
<0>[  101.118077]   <idle>-0       6d.s2 69391053us : process_csb: bcs0 completed ctx=2
<0>[  101.119379]   <idle>-0       6..s1 69391054us : execlists_submission_tasklet: vcs0 awake?=1, active=5
<0>[  101.120699]   <idle>-0       6d.s2 69391054us : process_csb: vcs0 cs-irq head=1, tail=2
<0>[  101.122011]   <idle>-0       6d.s2 69391055us : process_csb: vcs0 csb[2]: status=0x00000018:0x00000002, active=0x5
<0>[  101.123334]   <idle>-0       6d.s2 69391056us : process_csb: vcs0 out[0]: ctx=2.1, global=6 (fence 1a:1) (current 6), prio=2
<0>[  101.124656]   <idle>-0       6d.s2 69391056us : process_csb: vcs0 completed ctx=2
<0>[  101.125976]   <idle>-0       6..s1 69391057us : execlists_submission_tasklet: vecs0 awake?=1, active=5
<0>[  101.127301]   <idle>-0       6d.s2 69391057us : process_csb: vecs0 cs-irq head=1, tail=2
<0>[  101.128629]   <idle>-0       6d.s2 69391058us : process_csb: vecs0 csb[2]: status=0x00000018:0x00000002, active=0x5
<0>[  101.129961]   <idle>-0       6d.s2 69391059us : process_csb: vecs0 out[0]: ctx=2.1, global=6 (fence 1c:1) (current 6), prio=2
<0>[  101.131280]   <idle>-0       6d.s2 69391059us : process_csb: vecs0 completed ctx=2
<0>[  101.132587] gem_exec-1139    7.... 69394672us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.133916] gem_exec-1139    7.... 69394872us : i915_request_retire: vecs0 fence 20:4, global=4, current 6
<0>[  101.135238] gem_exec-1139    7.... 69394878us : i915_request_retire: __retire_engine_request(vecs0) fence 20:4, global=4, current 6
<0>[  101.136579] gem_exec-1139    7.... 69394916us : i915_request_retire: vecs0 fence 20:5, global=5, current 6
<0>[  101.137917] gem_exec-1139    7.... 69394916us : i915_request_retire: marking gem_exec_schedu[1139]/1 as inactive
<0>[  101.139247] gem_exec-1139    7.... 69394920us : i915_request_retire: __retire_engine_request(vecs0) fence 20:5, global=5, current 6
<0>[  101.140594] gem_exec-1139    7.... 69394931us : i915_request_retire: bcs0 fence 1f:5, global=5, current 6
<0>[  101.141941] gem_exec-1139    7.... 69394931us : i915_request_retire: marking gem_exec_schedu[1139]/1 as inactive
<0>[  101.143282] gem_exec-1139    7.... 69394935us : i915_request_retire: __retire_engine_request(bcs0) fence 1f:5, global=5, current 6
<0>[  101.144641] gem_exec-1139    7.... 69394945us : i915_request_retire: vcs0 fence 1e:5, global=5, current 6
<0>[  101.145992] gem_exec-1139    7.... 69394945us : i915_request_retire: marking gem_exec_schedu[1139]/1 as inactive
<0>[  101.147337] gem_exec-1139    7.... 69394948us : i915_request_retire: __retire_engine_request(vcs0) fence 1e:5, global=5, current 6
<0>[  101.148690] gem_exec-1139    7.... 69394959us : i915_request_retire: rcs0 fence 1d:5, global=8, current 10
<0>[  101.150011] gem_exec-1139    7.... 69394959us : i915_request_retire: marking gem_exec_schedu[1139]/1 as inactive
<0>[  101.151340] gem_exec-1139    7.... 69395052us : i915_request_retire: vecs0 fence 1c:1, global=6, current 6
<0>[  101.152645] gem_exec-1139    7.... 69395053us : i915_request_retire: marking gem_exec_schedu[1139]/2 as inactive
<0>[  101.153953] gem_exec-1139    7.... 69395054us : i915_request_retire: __retire_engine_request(vecs0) fence 1c:1, global=6, current 6
<0>[  101.155284] gem_exec-1139    7.... 69395061us : i915_request_retire: bcs0 fence 1b:1, global=6, current 6
<0>[  101.156602] gem_exec-1139    7.... 69395062us : i915_request_retire: marking gem_exec_schedu[1139]/2 as inactive
<0>[  101.157903] gem_exec-1139    7.... 69395063us : i915_request_retire: __retire_engine_request(bcs0) fence 1b:1, global=6, current 6
<0>[  101.159236] gem_exec-1139    7.... 69395065us : i915_request_retire: vcs0 fence 1a:1, global=6, current 6
<0>[  101.160567] gem_exec-1139    7.... 69395065us : i915_request_retire: marking gem_exec_schedu[1139]/2 as inactive
<0>[  101.161889] gem_exec-1139    7.... 69395066us : i915_request_retire: __retire_engine_request(vcs0) fence 1a:1, global=6, current 6
<0>[  101.163216] gem_exec-1139    7.... 69395075us : i915_request_retire: rcs0 fence 19:1, global=10, current 10
<0>[  101.164542] gem_exec-1139    7.... 69395075us : i915_request_retire: marking gem_exec_schedu[1139]/2 as inactive
<0>[  101.165856] gem_exec-1139    7.... 69395101us : i915_request_retire: __retire_engine_request(rcs0) fence 19:1, global=10, current 10
<0>[  101.167192] gem_exec-1139    7.... 69395102us : i915_gem_park: 
<0>[  101.168508] gem_exec-1139    7.... 69395139us : reset_all_global_seqno.part.5: rcs0 seqno 10 (current 10) -> 0
<0>[  101.169839] gem_exec-1139    7.... 69395211us : reset_all_global_seqno.part.5: bcs0 seqno 6 (current 6) -> 0
<0>[  101.171170] gem_exec-1139    7.... 69395264us : reset_all_global_seqno.part.5: vcs0 seqno 6 (current 6) -> 0
<0>[  101.172486] gem_exec-1139    7.... 69395395us : reset_all_global_seqno.part.5: vecs0 seqno 6 (current 6) -> 0
<0>[  101.173812] gem_exec-1139    7.... 69403568us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.175131] gem_exec-1139    7.... 69403776us : reset_all_global_seqno.part.5: rcs0 seqno 0 (current 0) -> 0
<0>[  101.176458] gem_exec-1139    7.... 69403912us : reset_all_global_seqno.part.5: bcs0 seqno 0 (current 0) -> 0
<0>[  101.177755] gem_exec-1139    7.... 69403981us : reset_all_global_seqno.part.5: vcs0 seqno 0 (current 0) -> 0
<0>[  101.179027] gem_exec-1139    7.... 69404005us : reset_all_global_seqno.part.5: vecs0 seqno 0 (current 0) -> 0
<0>[  101.180294] gem_exec-1139    7.... 69411814us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.181581] gem_exec-1139    7.... 69412024us : reset_all_global_seqno.part.5: rcs0 seqno 0 (current 0) -> 0
<0>[  101.182875] gem_exec-1139    7.... 69412079us : reset_all_global_seqno.part.5: bcs0 seqno 0 (current 0) -> 0
<0>[  101.184149] gem_exec-1139    7.... 69412186us : reset_all_global_seqno.part.5: vcs0 seqno 0 (current 0) -> 0
<0>[  101.185403] gem_exec-1139    7.... 69412346us : reset_all_global_seqno.part.5: vecs0 seqno 0 (current 0) -> 0
<0>[  101.186642] gem_exec-1139    7.... 69419870us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.187895] gem_exec-1139    7.... 69420077us : reset_all_global_seqno.part.5: rcs0 seqno 0 (current 0) -> 0
<0>[  101.189148] gem_exec-1139    7.... 69420224us : reset_all_global_seqno.part.5: bcs0 seqno 0 (current 0) -> 0
<0>[  101.190390] gem_exec-1139    7.... 69420386us : reset_all_global_seqno.part.5: vcs0 seqno 0 (current 0) -> 0
<0>[  101.191605] gem_exec-1139    7.... 69420544us : reset_all_global_seqno.part.5: vecs0 seqno 0 (current 0) -> 0
<0>[  101.192810] gem_exec-1139    7.... 69428583us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.194030] gem_exec-1139    7.... 69428598us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.195233] gem_exec-1139    7.... 69428612us : reset_all_global_seqno.part.5: rcs0 seqno 0 (current 0) -> 0
<0>[  101.196437] gem_exec-1139    7.... 69428774us : reset_all_global_seqno.part.5: bcs0 seqno 0 (current 0) -> 0
<0>[  101.197615] gem_exec-1139    7.... 69428913us : reset_all_global_seqno.part.5: vcs0 seqno 0 (current 0) -> 0
<0>[  101.198799] gem_exec-1139    7.... 69428968us : reset_all_global_seqno.part.5: vecs0 seqno 0 (current 0) -> 0
<0>[  101.199968] kworker/-185     3.... 69429035us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.201132] kworker/-185     3.... 69429037us : i915_gem_switch_to_kernel_context: emit barrier on rcs0
<0>[  101.202288] kworker/-185     3.... 69429038us : i915_gem_unpark: 
<0>[  101.203440] kworker/-185     3.... 69429051us : i915_request_add: rcs0 fence 5:4
<0>[  101.204586] kworker/-185     3.... 69429053us : i915_request_add: marking (null) as active
<0>[  101.205723] kworker/-185     3d..1 69429065us : process_csb: rcs0 cs-irq head=4, tail=4
<0>[  101.206852] kworker/-185     3d..1 69429066us : __i915_request_submit: rcs0 fence 5:4 -> global=1, current 0
<0>[  101.208011] kworker/-185     3d..1 69429075us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=0.1, global=1 (fence 5:4) (current 0), prio=-4094
<0>[  101.209193] kworker/-185     3.... 69429079us : i915_gem_switch_to_kernel_context: emit barrier on bcs0
<0>[  101.210361] kworker/-185     3.... 69429086us : i915_request_add: bcs0 fence 8:3
<0>[  101.211512] kworker/-185     3.... 69429086us : i915_request_add: marking (null) as active
<0>[  101.212673] kworker/-185     3d..1 69429094us : process_csb: bcs0 cs-irq head=2, tail=2
<0>[  101.213848] kworker/-185     3d..1 69429094us : __i915_request_submit: bcs0 fence 8:3 -> global=1, current 0
<0>[  101.215011] kworker/-185     3d..1 69429118us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=0.1, global=1 (fence 8:3) (current 0), prio=-4094
<0>[  101.216197] kworker/-185     3.... 69429138us : i915_gem_switch_to_kernel_context: emit barrier on vcs0
<0>[  101.217363] kworker/-185     3.... 69429147us : i915_request_add: vcs0 fence b:2
<0>[  101.218525] kworker/-185     3.... 69429149us : i915_request_add: marking (null) as active
<0>[  101.219679] kworker/-185     3d..1 69429160us : process_csb: vcs0 cs-irq head=2, tail=2
<0>[  101.220843] kworker/-185     3d..1 69429161us : __i915_request_submit: vcs0 fence b:2 -> global=1, current 0
<0>[  101.222012] kworker/-185     3d..1 69429170us : __execlists_submission_tasklet: vcs0 in[0]:  ctx=0.1, global=1 (fence b:2) (current 0), prio=-4094
<0>[  101.223201] kworker/-185     3.... 69429173us : i915_gem_switch_to_kernel_context: emit barrier on vecs0
<0>[  101.224384] kworker/-185     3.... 69429180us : i915_request_add: vecs0 fence e:2
<0>[  101.225568] kworker/-185     3.... 69429180us : i915_request_add: marking (null) as active
<0>[  101.226744]   <idle>-0       6..s1 69429184us : execlists_submission_tasklet: rcs0 awake?=1, active=1
<0>[  101.227920]   <idle>-0       6d.s2 69429186us : process_csb: rcs0 cs-irq head=4, tail=5
<0>[  101.229106]   <idle>-0       6d.s2 69429187us : process_csb: rcs0 csb[5]: status=0x00000001:0x00000000, active=0x1
<0>[  101.230289] kworker/-185     3d..1 69429187us : process_csb: vecs0 cs-irq head=2, tail=2
<0>[  101.231469] kworker/-185     3d..1 69429188us : __i915_request_submit: vecs0 fence e:2 -> global=1, current 0
<0>[  101.232670]   <idle>-0       6..s1 69429188us : execlists_submission_tasklet: bcs0 awake?=1, active=1
<0>[  101.233875]   <idle>-0       6d.s2 69429189us : process_csb: bcs0 cs-irq head=2, tail=4
<0>[  101.235060]   <idle>-0       6d.s2 69429189us : process_csb: bcs0 csb[3]: status=0x00000001:0x00000000, active=0x1
<0>[  101.236260]   <idle>-0       6d.s2 69429190us : process_csb: bcs0 csb[4]: status=0x00000018:0x00000000, active=0x5
<0>[  101.237445]   <idle>-0       6d.s2 69429192us : process_csb: bcs0 out[0]: ctx=0.1, global=1 (fence 8:3) (current 1), prio=-4094
<0>[  101.238648]   <idle>-0       6d.s2 69429194us : process_csb: bcs0 completed ctx=0
<0>[  101.239830] kworker/-185     3d..1 69429195us : __execlists_submission_tasklet: vecs0 in[0]:  ctx=0.1, global=1 (fence e:2) (current 0), prio=-4094
<0>[  101.241056]   <idle>-0       6..s1 69429196us : execlists_submission_tasklet: vcs0 awake?=1, active=1
<0>[  101.242277]   <idle>-0       6d.s2 69429198us : process_csb: vcs0 cs-irq head=2, tail=4
<0>[  101.243506]   <idle>-0       6d.s2 69429199us : process_csb: vcs0 csb[3]: status=0x00000001:0x00000000, active=0x1
<0>[  101.244722] kworker/-185     3.... 69429199us : i915_gem_idle_work_handler: active_requests=4 (after switch-to-kernel-context)
<0>[  101.245968]   <idle>-0       6d.s2 69429199us : process_csb: vcs0 csb[4]: status=0x00000018:0x00000000, active=0x5
<0>[  101.247222]   <idle>-0       6d.s2 69429200us : process_csb: vcs0 out[0]: ctx=0.1, global=1 (fence b:2) (current 1), prio=-4094
<0>[  101.248458]   <idle>-0       6d.s2 69429201us : process_csb: vcs0 completed ctx=0
<0>[  101.249682]   <idle>-0       6..s1 69429216us : execlists_submission_tasklet: rcs0 awake?=1, active=5
<0>[  101.250927]   <idle>-0       6d.s2 69429217us : process_csb: rcs0 cs-irq head=5, tail=0
<0>[  101.252188]   <idle>-0       6d.s2 69429218us : process_csb: rcs0 csb[0]: status=0x00000018:0x00000000, active=0x5
<0>[  101.253451]   <idle>-0       6d.s2 69429219us : process_csb: rcs0 out[0]: ctx=0.1, global=1 (fence 5:4) (current 1), prio=-4094
<0>[  101.254730]   <idle>-0       6d.s2 69429220us : process_csb: rcs0 completed ctx=0
<0>[  101.255979]   <idle>-0       6..s1 69429220us : execlists_submission_tasklet: vecs0 awake?=1, active=1
<0>[  101.257250]   <idle>-0       6d.s2 69429221us : process_csb: vecs0 cs-irq head=2, tail=4
<0>[  101.258536]   <idle>-0       6d.s2 69429222us : process_csb: vecs0 csb[3]: status=0x00000001:0x00000000, active=0x1
<0>[  101.259822]   <idle>-0       6d.s2 69429223us : process_csb: vecs0 csb[4]: status=0x00000018:0x00000000, active=0x5
<0>[  101.261094]   <idle>-0       6d.s2 69429224us : process_csb: vecs0 out[0]: ctx=0.1, global=1 (fence e:2) (current 1), prio=-4094
<0>[  101.262377]   <idle>-0       6d.s2 69429225us : process_csb: vecs0 completed ctx=0
<0>[  101.263662] kworker/-185     3.... 69429230us : i915_request_retire: vecs0 fence e:2, global=1, current 1
<0>[  101.264971] kworker/-185     3.... 69429231us : i915_request_retire: marking (null) as inactive
<0>[  101.266274] kworker/-185     3.... 69429233us : i915_request_retire: __retire_engine_request(vecs0) fence e:2, global=1, current 1
<0>[  101.267588] kworker/-185     3.... 69429243us : i915_request_retire: vcs0 fence b:2, global=1, current 1
<0>[  101.268890] kworker/-185     3.... 69429243us : i915_request_retire: marking (null) as inactive
<0>[  101.270188] kworker/-185     3.... 69429244us : i915_request_retire: __retire_engine_request(vcs0) fence b:2, global=1, current 1
<0>[  101.271511] kworker/-185     3.... 69429250us : i915_request_retire: bcs0 fence 8:3, global=1, current 1
<0>[  101.272850] kworker/-185     3.... 69429250us : i915_request_retire: marking (null) as inactive
<0>[  101.274170] kworker/-185     3.... 69429251us : i915_request_retire: __retire_engine_request(bcs0) fence 8:3, global=1, current 1
<0>[  101.275515] kworker/-185     3.... 69429256us : i915_request_retire: rcs0 fence 5:4, global=1, current 1
<0>[  101.276848] kworker/-185     3.... 69429256us : i915_request_retire: marking (null) as inactive
<0>[  101.278182] kworker/-185     3.... 69429257us : i915_request_retire: __retire_engine_request(rcs0) fence 5:4, global=1, current 1
<0>[  101.279543] kworker/-185     3.... 69429263us : i915_gem_park: 
<0>[  101.280884] kworker/-62      0.... 69429380us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.282245] kworker/-62      0.... 69429382us : i915_gem_idle_work_handler: active_requests=0 (after switch-to-kernel-context)
<0>[  101.283615] kworker/-62      0.... 69429396us : __i915_gem_park: 
<0>[  101.284962] kms_fron-1145    4.... 69583424us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.286363] kms_fron-1145    4.... 69643856us : i915_gem_unpark: 
<0>[  101.287762] kms_fron-1145    4.... 69643970us : i915_request_add: bcs0 fence 22:1
<0>[  101.289162] kms_fron-1145    4.... 69643972us : i915_request_add: marking kms_frontbuffer[1145]/0 as active
<0>[  101.290566] kms_fron-1145    4d..1 69643976us : process_csb: bcs0 cs-irq head=4, tail=4
<0>[  101.291956] kms_fron-1145    4d..1 69643977us : __i915_request_submit: bcs0 fence 22:1 -> global=2, current 1
<0>[  101.293345] kms_fron-1145    4d..1 69643982us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=2.1, global=2 (fence 22:1) (current 1), prio=2
<0>[  101.294755]   <idle>-0       6..s1 69644022us : execlists_submission_tasklet: bcs0 awake?=1, active=1
<0>[  101.296173]   <idle>-0       6d.s2 69644024us : process_csb: bcs0 cs-irq head=4, tail=5
<0>[  101.297580]   <idle>-0       6d.s2 69644025us : process_csb: bcs0 csb[5]: status=0x00000001:0x00000000, active=0x1
<0>[  101.299004]   <idle>-0       6..s1 69644953us : execlists_submission_tasklet: bcs0 awake?=1, active=5
<0>[  101.300406]   <idle>-0       6d.s2 69644955us : process_csb: bcs0 cs-irq head=5, tail=0
<0>[  101.301800]   <idle>-0       6d.s2 69644956us : process_csb: bcs0 csb[0]: status=0x00000018:0x00000002, active=0x5
<0>[  101.303203]   <idle>-0       6d.s2 69644957us : process_csb: bcs0 out[0]: ctx=2.1, global=2 (fence 22:1) (current 2), prio=2
<0>[  101.304612]   <idle>-0       6d.s2 69644958us : process_csb: bcs0 completed ctx=2
<0>[  101.306016] kworker/-62      3.... 71305358us : i915_request_retire: bcs0 fence 22:1, global=2, current 2
<0>[  101.307437] kworker/-62      3.... 71305362us : i915_request_retire: marking kms_frontbuffer[1145]/0 as inactive
<0>[  101.308838] kworker/-62      3.... 71306213us : i915_request_retire: __retire_engine_request(bcs0) fence 22:1, global=2, current 2
<0>[  101.310240] kworker/-62      3.... 71306217us : i915_gem_park: 
<0>[  101.311612] kworker/-62      3.... 71409253us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.313007] kworker/-62      3.... 71409257us : i915_gem_switch_to_kernel_context: emit barrier on bcs0
<0>[  101.314394] kworker/-62      3.... 71409259us : i915_gem_unpark: 
<0>[  101.315767] kworker/-62      3.... 71409296us : i915_request_add: bcs0 fence 8:4
<0>[  101.317135] kworker/-62      3.... 71409300us : i915_request_add: marking (null) as active
<0>[  101.318501] kworker/-62      3d..1 71409340us : process_csb: bcs0 cs-irq head=0, tail=0
<0>[  101.319852] kworker/-62      3d..1 71409342us : __i915_request_submit: bcs0 fence 8:4 -> global=3, current 2
<0>[  101.321221] kworker/-62      3d..1 71409366us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=0.1, global=3 (fence 8:4) (current 2), prio=-4094
<0>[  101.322616] kworker/-62      3.... 71409375us : i915_gem_idle_work_handler: active_requests=1 (after switch-to-kernel-context)
<0>[  101.324011]   <idle>-0       6..s1 71409546us : execlists_submission_tasklet: bcs0 awake?=1, active=1
<0>[  101.325418]   <idle>-0       6d.s2 71409550us : process_csb: bcs0 cs-irq head=0, tail=2
<0>[  101.326819]   <idle>-0       6d.s2 71409552us : process_csb: bcs0 csb[1]: status=0x00000001:0x00000000, active=0x1
<0>[  101.328236]   <idle>-0       6d.s2 71409553us : process_csb: bcs0 csb[2]: status=0x00000018:0x00000000, active=0x5
<0>[  101.329616]   <idle>-0       6d.s2 71409556us : process_csb: bcs0 out[0]: ctx=0.1, global=3 (fence 8:4) (current 3), prio=-4094
<0>[  101.330985]   <idle>-0       6d.s2 71409558us : process_csb: bcs0 completed ctx=0
<0>[  101.332342] kms_fron-1145    2.... 72609771us : i915_request_add: bcs0 fence 22:2
<0>[  101.333706] kms_fron-1145    2.... 72609774us : i915_request_add: marking kms_frontbuffer[1145]/0 as active
<0>[  101.335060] kms_fron-1145    2d..1 72609778us : process_csb: bcs0 cs-irq head=2, tail=2
<0>[  101.336390] kms_fron-1145    2d..1 72609779us : __i915_request_submit: bcs0 fence 22:2 -> global=4, current 3
<0>[  101.337745] kms_fron-1145    2d..1 72609783us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=2.1, global=4 (fence 22:2) (current 3), prio=2
<0>[  101.339127]   <idle>-0       6..s1 72609867us : execlists_submission_tasklet: bcs0 awake?=1, active=1
<0>[  101.340503]   <idle>-0       6d.s2 72609869us : process_csb: bcs0 cs-irq head=2, tail=3
<0>[  101.341871]   <idle>-0       6d.s2 72609870us : process_csb: bcs0 csb[3]: status=0x00000001:0x00000000, active=0x1
<0>[  101.343240]   <idle>-0       6..s1 72611744us : execlists_submission_tasklet: bcs0 awake?=1, active=5
<0>[  101.344624]   <idle>-0       6d.s2 72611745us : process_csb: bcs0 cs-irq head=3, tail=4
<0>[  101.345987]   <idle>-0       6d.s2 72611746us : process_csb: bcs0 csb[4]: status=0x00000018:0x00000002, active=0x5
<0>[  101.347363]   <idle>-0       6d.s2 72611747us : process_csb: bcs0 out[0]: ctx=2.1, global=4 (fence 22:2) (current 4), prio=2
<0>[  101.348736]   <idle>-0       6d.s2 72611748us : process_csb: bcs0 completed ctx=2
<0>[  101.350095] kworker/-62      3.... 73289135us : i915_request_retire: bcs0 fence 22:2, global=4, current 4
<0>[  101.351490] kworker/-62      3.... 73289137us : i915_request_retire: marking kms_frontbuffer[1145]/0 as inactive
<0>[  101.352871] kworker/-62      3.... 73289146us : i915_request_retire: __retire_engine_request(bcs0) fence 8:4, global=3, current 4
<0>[  101.354263] kworker/-62      3.... 73289149us : i915_request_retire: __retire_engine_request(bcs0) fence 22:2, global=4, current 4
<0>[  101.355649] kworker/-62      3.... 73289161us : i915_request_retire: bcs0 fence 8:4, global=3, current 4
<0>[  101.357020] kworker/-62      3.... 73289161us : i915_request_retire: marking (null) as inactive
<0>[  101.358373] kworker/-62      3.... 73289162us : i915_gem_park: 
<0>[  101.359719] kworker/-62      3.... 73394129us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.361069] kworker/-62      3.... 73394130us : i915_gem_switch_to_kernel_context: emit barrier on bcs0
<0>[  101.362439] kworker/-62      3.... 73394131us : i915_gem_unpark: 
<0>[  101.363762] kworker/-62      3.... 73394144us : i915_request_add: bcs0 fence 8:5
<0>[  101.365098] kworker/-62      3.... 73394147us : i915_request_add: marking (null) as active
<0>[  101.366432] kworker/-62      3d..1 73394163us : process_csb: bcs0 cs-irq head=4, tail=4
<0>[  101.367756] kworker/-62      3d..1 73394164us : __i915_request_submit: bcs0 fence 8:5 -> global=5, current 4
<0>[  101.369096] kworker/-62      3d..1 73394177us : __execlists_submission_tasklet: bcs0 in[0]:  ctx=0.1, global=5 (fence 8:5) (current 4), prio=-4094
<0>[  101.370454] kworker/-62      3.... 73394182us : i915_gem_idle_work_handler: active_requests=1 (after switch-to-kernel-context)
<0>[  101.371815]   <idle>-0       6..s1 73394273us : execlists_submission_tasklet: bcs0 awake?=1, active=1
<0>[  101.373180]   <idle>-0       6d.s2 73394275us : process_csb: bcs0 cs-irq head=4, tail=0
<0>[  101.374537]   <idle>-0       6d.s2 73394276us : process_csb: bcs0 csb[5]: status=0x00000001:0x00000000, active=0x1
<0>[  101.375901]   <idle>-0       6d.s2 73394276us : process_csb: bcs0 csb[0]: status=0x00000018:0x00000000, active=0x5
<0>[  101.377241]   <idle>-0       6d.s2 73394278us : process_csb: bcs0 out[0]: ctx=0.1, global=5 (fence 8:5) (current 5), prio=-4094
<0>[  101.378587]   <idle>-0       6d.s2 73394278us : process_csb: bcs0 completed ctx=0
<0>[  101.379900] kms_fron-1145    6.... 73556022us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.381255] kms_fron-1145    6.... 73556038us : i915_request_retire: bcs0 fence 8:5, global=5, current 5
<0>[  101.382616] kms_fron-1145    6.... 73556039us : i915_request_retire: marking (null) as inactive
<0>[  101.383961] kms_fron-1145    6.... 73556040us : i915_request_retire: __retire_engine_request(bcs0) fence 8:5, global=5, current 5
<0>[  101.385313] kms_fron-1145    6.... 73556050us : i915_gem_park: 
<0>[  101.386622] kms_fron-1145    6.... 73556062us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.387957] kms_fron-1145    6.... 73556075us : reset_all_global_seqno.part.5: rcs0 seqno 1 (current 1) -> 0
<0>[  101.389293] kms_fron-1145    6.... 73556098us : reset_all_global_seqno.part.5: bcs0 seqno 5 (current 5) -> 0
<0>[  101.390605] kms_fron-1145    6.... 73556153us : reset_all_global_seqno.part.5: vcs0 seqno 1 (current 1) -> 0
<0>[  101.391912] kms_fron-1145    6.... 73556183us : reset_all_global_seqno.part.5: vecs0 seqno 1 (current 1) -> 0
<0>[  101.393215] kworker/-62      3.... 73556315us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.394525] kworker/-62      3.... 73556317us : i915_gem_idle_work_handler: active_requests=0 (after switch-to-kernel-context)
<0>[  101.395836] kworker/-62      3.... 73556331us : __i915_gem_park: 
<0>[  101.397119] gem_exec-1146    1.... 73729897us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.398438] gem_exec-1146    1.... 73730561us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.399747] gem_exec-1146    1.... 73730564us : reset_all_global_seqno.part.5: rcs0 seqno 0 (current 0) -> 0
<0>[  101.401047] gem_exec-1146    1.... 73730860us : reset_all_global_seqno.part.5: bcs0 seqno 0 (current 0) -> 0
<0>[  101.402349] gem_exec-1146    1.... 73730925us : reset_all_global_seqno.part.5: vcs0 seqno 0 (current 0) -> 0
<0>[  101.403643] gem_exec-1146    1.... 73731017us : reset_all_global_seqno.part.5: vecs0 seqno 0 (current 0) -> 0
<0>[  101.404936] gem_exec-1146    1.... 73740812us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.406240] gem_exec-1146    1.... 73747275us : i915_gem_unpark: 
<0>[  101.407533] gem_exec-1146    1.... 73747556us : i915_request_add: rcs0 fence 23:1
<0>[  101.408825] gem_exec-1146    1.... 73747560us : i915_request_add: marking gem_exec_store[1146]/0 as active
<0>[  101.410121] gem_exec-1146    1d..1 73747567us : process_csb: rcs0 cs-irq head=0, tail=0
<0>[  101.411403] gem_exec-1146    1d..1 73747568us : __i915_request_submit: rcs0 fence 23:1 -> global=1, current 0
<0>[  101.412699] gem_exec-1146    1d..1 73747575us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=2.1, global=1 (fence 23:1) (current 0), prio=2
<0>[  101.414013]     java-944     6..s. 73747786us : execlists_submission_tasklet: rcs0 awake?=1, active=1
<0>[  101.415310]     java-944     6d.s1 73747788us : process_csb: rcs0 cs-irq head=0, tail=1
<0>[  101.416616]     java-944     6d.s1 73747790us : process_csb: rcs0 csb[1]: status=0x00000001:0x00000000, active=0x1
<0>[  101.417932] gem_exec-1146    1.... 73747831us : i915_request_retire_upto: rcs0 fence 23:1, global=1, current 1
<0>[  101.419247] gem_exec-1146    1.... 73747832us : i915_request_retire: rcs0 fence 23:1, global=1, current 1
<0>[  101.420574]     java-944     6..s. 73747833us : execlists_submission_tasklet: rcs0 awake?=1, active=5
<0>[  101.421904] gem_exec-1146    1.... 73747834us : i915_request_retire: marking gem_exec_store[1146]/0 as inactive
<0>[  101.423232]     java-944     6d.s1 73747835us : process_csb: rcs0 cs-irq head=1, tail=2
<0>[  101.424559]     java-944     6d.s1 73747836us : process_csb: rcs0 csb[2]: status=0x00000018:0x00000002, active=0x5
<0>[  101.425902]     java-944     6d.s1 73747838us : process_csb: rcs0 out[0]: ctx=2.1, global=1 (fence 23:1) (current 1), prio=2
<0>[  101.427230]     java-944     6d.s1 73747839us : process_csb: rcs0 completed ctx=2
<0>[  101.428530] gem_exec-1146    1.... 73747964us : i915_request_retire: __retire_engine_request(rcs0) fence 23:1, global=1, current 1
<0>[  101.429851] gem_exec-1146    1.... 73747966us : i915_gem_park: 
<0>[  101.431151] gem_exec-1146    1.... 73749309us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.432470] kworker/-185     7.... 73749571us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.433789] kworker/-185     7.... 73749573us : i915_gem_switch_to_kernel_context: emit barrier on rcs0
<0>[  101.435111] kworker/-185     7.... 73749574us : i915_gem_unpark: 
<0>[  101.436418] kworker/-185     7.... 73749592us : i915_request_add: rcs0 fence 5:5
<0>[  101.437713] kworker/-185     7.... 73749594us : i915_request_add: marking (null) as active
<0>[  101.439008] kworker/-185     7d..1 73749615us : process_csb: rcs0 cs-irq head=2, tail=2
<0>[  101.440297] kworker/-185     7d..1 73749617us : __i915_request_submit: rcs0 fence 5:5 -> global=2, current 1
<0>[  101.441606] kworker/-185     7d..1 73749630us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=0.1, global=2 (fence 5:5) (current 1), prio=-4094
<0>[  101.442944] kworker/-185     7.... 73749636us : i915_gem_idle_work_handler: active_requests=1 (after switch-to-kernel-context)
<0>[  101.444289]     java-944     6..s. 73749729us : execlists_submission_tasklet: rcs0 awake?=1, active=1
<0>[  101.445629]     java-944     6d.s1 73749730us : process_csb: rcs0 cs-irq head=2, tail=3
<0>[  101.446974]     java-944     6d.s1 73749731us : process_csb: rcs0 csb[3]: status=0x00000001:0x00000000, active=0x1
<0>[  101.448330]     java-944     6..s. 73749756us : execlists_submission_tasklet: rcs0 awake?=1, active=5
<0>[  101.449686]     java-944     6d.s1 73749757us : process_csb: rcs0 cs-irq head=3, tail=4
<0>[  101.451053]     java-944     6d.s1 73749758us : process_csb: rcs0 csb[4]: status=0x00000018:0x00000000, active=0x5
<0>[  101.452448]     java-944     6d.s1 73749759us : process_csb: rcs0 out[0]: ctx=0.1, global=2 (fence 5:5) (current 2), prio=-4094
<0>[  101.453845]     java-944     6d.s1 73749760us : process_csb: rcs0 completed ctx=0
<0>[  101.455220] kworker/-185     7.... 73749809us : i915_request_retire: rcs0 fence 5:5, global=2, current 2
<0>[  101.456616] kworker/-185     7.... 73749811us : i915_request_retire: marking (null) as inactive
<0>[  101.457978] kworker/-185     7.... 73749812us : i915_request_retire: __retire_engine_request(rcs0) fence 5:5, global=2, current 2
<0>[  101.459347] kworker/-185     7.... 73749816us : i915_gem_park: 
<0>[  101.460706] kworker/-185     7.... 73749919us : i915_gem_switch_to_kernel_context: awake?=yes
<0>[  101.462060] kworker/-185     7.... 73749921us : i915_gem_idle_work_handler: active_requests=0 (after switch-to-kernel-context)
<0>[  101.463436] kworker/-185     7.... 73749941us : __i915_gem_park: 
<0>[  101.464788] gem_exec-1146    1.... 73771423us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.466181] gem_exec-1146    1.... 73771425us : i915_gem_wait_for_idle: flags=3 (locked), timeout=9223372036854775807 (forever)
<0>[  101.467572] gem_exec-1146    1.... 73771426us : reset_all_global_seqno.part.5: rcs0 seqno 2 (current 2) -> 0
<0>[  101.468944] gem_exec-1146    1.... 73771629us : reset_all_global_seqno.part.5: bcs0 seqno 0 (current 0) -> 0
<0>[  101.470289] gem_exec-1146    1.... 73771653us : reset_all_global_seqno.part.5: vcs0 seqno 0 (current 0) -> 0
<0>[  101.471627] gem_exec-1146    1.... 73771679us : reset_all_global_seqno.part.5: vecs0 seqno 0 (current 0) -> 0
<0>[  101.472938] ---------------------------------
<0>[  101.474217] Kernel Offset: 0x13000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
<4>[  101.475525] CPU: 6 PID: 944 Comm: java Tainted: G     U       L    4.20.0-rc3-gd63a7489bf1d-drmtip_148+ #1
<4>[  101.476821] Hardware name: TOSHIBA SATELLITE P50-C/06F4                            , BIOS 1.50 07/05/2017
<4>[  101.478147] Call Trace:
<4>[  101.479454]  <IRQ>
<4>[  101.480756]  dump_stack+0x67/0x9b
<4>[  101.482041]  panic+0x12b/0x24d
<4>[  101.483324]  watchdog_timer_fn+0x2e2/0x2f0
<4>[  101.484593]  __hrtimer_run_queues+0x11e/0x4a0
<4>[  101.485846]  hrtimer_interrupt+0xea/0x250
<4>[  101.487104]  smp_apic_timer_interrupt+0x7b/0x250
<4>[  101.488349]  apic_timer_interrupt+0xf/0x20
<4>[  101.489602] RIP: 0010:clocksource_watchdog+0x20/0x310
<4>[  101.490857] Code: c3 66 0f 1f 84 00 00 00 00 00 41 57 41 56 48 c7 c7 60 af 24 95 41 55 41 54 55 53 48 83 ec 10 e8 26 58 85 00 8b 35 dc bd 59 02 <85> f6 0f 84 f6 01 00 00 48 8b 05 21 7e 12 01 44 8b 25 c2 bd 59 02
<4>[  101.492228] RSP: 0000:ffffa125b9b83e30 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
<4>[  101.493601] RAX: ffffa125b6ed0040 RBX: 0000000000000100 RCX: a3d06af600000000
<4>[  101.494966] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff9524af60
<4>[  101.496335] RBP: ffffa125b9b83ec0 R08: 00000000ba319f7c R09: 0000000000000000
<4>[  101.497715] R10: ffffa125b9b83db0 R11: ffffffff9524af78 R12: ffffa125b9b83f08
<4>[  101.499081] R13: ffffffff94123190 R14: ffffffff966befa0 R15: ffffffff94123190
<4>[  101.500439]  ? apic_timer_interrupt+0xa/0x20
<4>[  101.501779]  ? __clocksource_unstable+0x60/0x60
<4>[  101.503091]  ? __clocksource_unstable+0x60/0x60
<4>[  101.504384]  ? __clocksource_unstable+0x60/0x60
<4>[  101.505661]  ? __clocksource_unstable+0x60/0x60
<4>[  101.506928]  ? __clocksource_unstable+0x60/0x60
<4>[  101.508170]  call_timer_fn+0x93/0x2e0
<4>[  101.509397]  expire_timers+0xc1/0x190
<4>[  101.510604]  run_timer_softirq+0xc7/0x170
<4>[  101.511795]  __do_softirq+0xd8/0x4b9
<4>[  101.512979]  irq_exit+0xa9/0xc0
<4>[  101.514143]  smp_apic_timer_interrupt+0x9c/0x250
<4>[  101.515308]  apic_timer_interrupt+0xf/0x20
<4>[  101.516465]  </IRQ>
<4>[  101.517587] RIP: 0033:0x7f992a3148d9
<4>[  101.518706] Code: 4b 28 89 c8 c1 e8 05 41 3b 47 10 0f 83 53 01 00 00 49 8b 77 18 83 e1 1f ba 01 00 00 00 48 d3 e2 48 8d 34 86 8b 06 89 c7 21 d0 <09> d7 89 3e 8b 4b 28 41 3b 4d 28 72 82 48 8b 55 a0 4d 89 fc 49 89
<4>[  101.519935] RSP: 002b:00007f9908f26eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
<4>[  101.521167] RAX: 0000000000000000 RBX: 00007f98c408c978 RCX: 0000000000000009
<4>[  101.522394] RDX: 0000000000000200 RSI: 00007f98c4bbeea0 RDI: 00000000fffffc00
<4>[  101.523623] RBP: 00007f9908f26f20 R08: 0000000000000010 R09: 0000000000000000
<4>[  101.524865] R10: 0000000000000000 R11: 00007f98c4bc0610 R12: 0000000000000000
<4>[  101.526094] R13: 00007f9908f273e0 R14: 00007f98c408cb40 R15: 00007f9908f26fa0
Comment 3 CI Bug Log 2019-01-15 16:37:32 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4762/shard-skl5/igt@gem_wait@wait-blt.html
* https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5364/shard-skl7/igt@gem_exec_async@concurrent-writes-vebox.html
Comment 4 CI Bug Log 2019-02-15 17:41:55 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

* https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_5605/shard-skl2/igt@kms_rotation_crc@sprite-rotation-270.html
Comment 5 CI Bug Log 2019-04-26 06:24:43 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-skl1/igt@gem_mocs_settings@mocs-isolation-render.html
Comment 7 Lakshmi 2019-04-29 08:22:28 UTC
Bumping the priority to high as it is seen on Shards.
Comment 8 CI Bug Log 2019-04-29 08:24:20 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4968/shard-skl2/igt@gem_pipe_control_store_loop@reused-buffer.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6005/shard-skl2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-mmap-cpu.html
Comment 9 CI Bug Log 2019-04-29 11:01:35 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4968/shard-skl4/igt@gem_exec_schedule@preemptive-hang-blt.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6007/shard-skl6/igt@i915_selftest@mock_contexts.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_267/fi-skl-6700k2/igt@kms_busy@basic-modeset-c.html
Comment 12 CI Bug Log 2019-06-11 06:56:45 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6204/shard-skl6/igt@kms_psr@suspend.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5050/shard-skl9/igt@kms_psr@suspend.html
Comment 13 CI Bug Log 2019-06-11 07:52:36 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6208/shard-skl7/igt@gem_eio@hibernate.html
Comment 14 CI Bug Log 2019-06-24 10:28:19 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6317/shard-skl8/igt@kms_flip@plain-flip-interruptible.html
Comment 15 CI Bug Log 2019-06-24 11:07:28 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}


  No new failures caught with the new filter
Comment 16 CI Bug Log 2019-06-24 11:07:42 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6320/shard-skl4/igt@kms_big_fb@yf-tiled-16bpp-rotate-90.html
Comment 17 CI Bug Log 2019-07-29 07:30:40 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6555/shard-skl3/igt@gem_softpin@evict-active.html
Comment 18 CI Bug Log 2019-07-29 09:44:14 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_336/fi-skl-6260u/igt@drm_import_export@prime.html
Comment 19 CI Bug Log 2019-08-05 06:04:51 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6603/shard-skl4/igt@prime_nv_pcopy@test3_2.html
Comment 20 CI Bug Log 2019-08-19 09:18:50 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_346/fi-skl-6700k2/igt@gem_exec_schedule@smoketest-all.html
Comment 21 CI Bug Log 2019-08-19 14:49:10 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6714/shard-skl3/igt@kms_vblank@pipe-a-wait-forked-busy-hang.html
Comment 22 CI Bug Log 2019-08-19 14:49:13 UTC
The CI Bug Log issue associated to this bug has been updated.

### Removed filters

* SKL: random tests - incomplete (added 8 seconds ago)

### New filters associated

* SKL: random tests - incomplete
  (No new failures associated)
Comment 23 CI Bug Log 2019-09-05 07:45:08 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6834/shard-skl5/igt@gem_userptr_blits@unsync-unmap-cycles.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5169/shard-skl3/igt@kms_vblank@pipe-c-query-forked.html
Comment 24 CI Bug Log 2019-09-24 11:20:06 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6938/shard-skl5/igt@kms_ccs@pipe-a-missing-ccs-buffer.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6940/shard-skl6/igt@i915_pm_dc@dc6-psr.html
Comment 25 CI Bug Log 2019-10-01 08:25:53 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6974/shard-skl7/igt@kms_cursor_crc@pipe-c-cursor-64x21-sliding.html
Comment 27 CI Bug Log 2019-10-14 13:39:14 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: igt@i915_pm_dc@dc[56]-(psr|dpms) - system hang - incomplete - No relevant logs related to the failure -}
{+ SKL: igt@* - system hang - incomplete - No relevant logs related to the failure +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5216/shard-skl5/igt@drm_import_export@import-close-race-prime.html
  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7079/shard-skl5/igt@gem_ctx_switch@legacy-bsd1-heavy-queue.html
Comment 28 CI Bug Log 2019-10-18 13:42:13 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: igt@* - system hang - incomplete - No relevant logs related to the failure -}
{+ SKL: igt@* - system hang - incomplete - No relevant logs related to the failure +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7117/shard-skl10/igt@i915_pm_dc@dc5-dpms.html
Comment 29 CI Bug Log 2019-10-22 09:37:14 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5234/shard-skl9/igt@gem_eio@kms.html
Comment 30 CI Bug Log 2019-10-23 12:24:00 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_391/fi-skl-guc/igt@kms_flip@2x-flip-vs-suspend-interruptible.html
Comment 31 CI Bug Log 2019-10-23 13:48:32 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7158/fi-skl-6260u/igt@kms_busy@basic-flip-b.html
Comment 32 CI Bug Log 2019-10-23 16:31:03 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7159/fi-skl-6260u/igt@kms_busy@basic-flip-a.html
Comment 33 CI Bug Log 2019-10-31 10:46:00 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7218/shard-skl3/igt@perf_pmu@enable-race-vcs0.html
Comment 34 CI Bug Log 2019-11-04 12:16:17 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7233/shard-skl1/igt@kms_universal_plane@universal-plane-pipe-c-sanity.html
Comment 35 CI Bug Log 2019-11-04 12:56:29 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7242/shard-skl8/igt@i915_selftest@mock_requests.html
Comment 36 CI Bug Log 2019-11-05 14:37:34 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: igt@* - system hang - incomplete - No relevant logs related to the failure -}
{+ SKL: igt@* - system hang - incomplete - No relevant logs related to the failure +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7252/shard-skl5/igt@gem_cpu_reloc@full.html
Comment 37 CI Bug Log 2019-11-08 09:19:29 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7278/shard-skl3/igt@gem_exec_parallel@vcs0.html
Comment 38 CI Bug Log 2019-11-12 07:32:25 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7294/shard-skl5/igt@kms_plane_cursor@pipe-b-primary-size-64.html
Comment 39 CI Bug Log 2019-11-13 11:37:15 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7316/shard-skl6/igt@gem_ctx_switch@queue-heavy.html
Comment 40 CI Bug Log 2019-11-14 16:33:11 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_403/fi-skl-6770hq/igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw.html
Comment 41 CI Bug Log 2019-11-21 07:24:30 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: random tests - incomplete - system hang -}
{+ SKL: random tests - incomplete - system hang +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7379/shard-skl3/igt@kms_flip@flip-vs-modeset-vs-hang-interruptible.html
Comment 42 CI Bug Log 2019-11-21 10:16:39 UTC
A CI Bug Log filter associated to this bug has been updated:

{- SKL: igt@* - system hang - incomplete - No relevant logs related to the failure -}
{+ SKL: igt@* - system hang - incomplete - No relevant logs related to the failure +}

New failures caught by the filter:

  * https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_405/fi-skl-6770hq/igt@gem_exec_blt@normal-max.html
Comment 44 Martin Peres 2019-11-29 18:01:29 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/drm/intel/issues/198.
