Starting subtest: basic-parallel
(gem_exec_nop:1208) igt_aux-CRITICAL: Test assertion failure function sig_abort, file ../lib/igt_aux.c:500:
(gem_exec_nop:1208) igt_aux-CRITICAL: Failed assertion: !"GPU hung"
Subtest basic-parallel failed.

<6> [97.627476] [IGT] gem_exec_nop: starting subtest basic-parallel
<7> [106.387503] hangcheck vecs0
<7> [106.387539] hangcheck 	current seqno 72fb, last 72fb, hangcheck 72fb [4032 ms]
<7> [106.387574] hangcheck 	Reset count: 0 (global 0)
<7> [106.387598] hangcheck 	Requests:
<7> [106.387624] hangcheck 	RING_START: 0x00800000
<7> [106.387650] hangcheck 	RING_HEAD:  0x00000fb8
<7> [106.387735] hangcheck 	RING_TAIL:  0x00000fb8
<7> [106.387764] hangcheck 	RING_CTL:   0x00003000
<7> [106.387793] hangcheck 	RING_MODE:  0x00000200 [idle]
<7> [106.387830] hangcheck 	RING_IMR:   00000000
<7> [106.387865] hangcheck 	ACTHD:      0x00000000_13c00fb8
<7> [106.387909] hangcheck 	BBADDR:     0x00000000_00000004
<7> [106.387947] hangcheck 	DMA_FADDR:  0x00000000_00000000
<7> [106.387975] hangcheck 	IPEIR:      0x00000000
<7> [106.387999] hangcheck 	IPEHR:      0x00000000
<7> [106.388026] hangcheck 	Execlist status: 0x00018001 00000000
<7> [106.388058] hangcheck 	Execlist CSB read 1, write 1 [mmio:1], tasklet queued? no (enabled)
<7> [106.388108] hangcheck 		ELSP[0] count=1, ring->start=00800000, rq: 72fb! [618a:72fa] prio=2 @ 4799ms: signaled
<7> [106.388156] hangcheck 		ELSP[1] idle
<7> [106.388182] hangcheck 		HW active? 0x1
<7> [106.388204] hangcheck 		Queue priority: 2
<7> [106.388236] hangcheck 		Q 0 [618a:72fb] prio=2 @ 4799ms: gem_exec_nop[1208]/0
<7> [106.388271] hangcheck 		Q 0 [618a:72fc] prio=0 @ 4799ms: gem_exec_nop[1208]/0
<7> [106.388307] hangcheck 		Q 0 [618a:72fd] prio=0 @ 4799ms: gem_exec_nop[1208]/0
<7> [106.388343] hangcheck 		Q 0 [618a:72fe] prio=0 @ 4799ms: gem_exec_nop[1208]/0
<7> [106.388381] hangcheck 		Q 0 [618a:72ff] prio=0 @ 4799ms: gem_exec_nop[1208]/0
<7> [106.388416] hangcheck 		Q 0 [618a:7300] prio=0 @ 4799ms: gem_exec_nop[1208]/0
<7> [106.388451] hangcheck 		Q 0 [618a:7301] prio=0 @ 4798ms: gem_exec_nop[1208]/0
<7> [106.388525] hangcheck 		...skipping 173 queued requests...
<7> [106.388556] hangcheck 		Q 0 [618a:73af] prio=0 @ 4796ms: gem_exec_nop[1208]/0
<7> [106.388589] hangcheck IRQ? 0x0 (breadcrumbs? no)
<7> [106.388616] hangcheck HWSP:
<7> [106.388647] hangcheck [0000] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [106.388725] hangcheck *
<7> [106.388761] hangcheck [0040] 00000001 40000000 00000018 40000040 00008002 40000040 00000018 40000040
<7> [106.388830] hangcheck [0060] 00000001 40000000 00000018 40000040 00000000 00000000 00000000 00000000
<7> [106.388882] hangcheck [0080] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [106.388927] hangcheck [00a0] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001
<7> [106.388972] hangcheck [00c0] 000072fb 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [106.389016] hangcheck [00e0] 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
<7> [106.389061] hangcheck *
<7> [106.389087] hangcheck Idle? no
<6> [106.395482] [drm] GPU HANG: ecode 11:6:0xfffffffe, reason: hang on vecs0, action: reset
<7> [106.395772] [drm:i915_reset_device [i915]] resetting chip
<5> [106.395952] i915 0000:00:02.0: Resetting chip for hang on vecs0
<6> [106.614200] [IGT] gem_exec_nop: exiting, ret=99
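For context on the failure banner: the `!"GPU hung"` expression is a common C idiom for an assertion that always fails with a readable message, since a string literal is a non-null pointer and negating it yields 0. Below is a minimal sketch of that idiom, assuming a signal-driven hang notification; only the handler name sig_abort comes from the log above, and everything else (the signal choice, the main() wiring) is hypothetical and not the actual lib/igt_aux.c code.

/*
 * Sketch of the !"message" assertion idiom. Not the real IGT source;
 * sig_abort is taken from the failure message, the rest is assumed.
 */
#include <assert.h>
#include <signal.h>

static void sig_abort(int sig)
{
	(void)sig;
	/* A string literal is never NULL, so !"GPU hung" is always 0:
	 * the assertion fires unconditionally and embeds the message
	 * in the failure report. */
	assert(!"GPU hung");
}

int main(void)
{
	/* Hypothetical wiring: pretend a hang notification arrives as
	 * SIGUSR1 and aborts the test from the handler. */
	signal(SIGUSR1, sig_abort);
	raise(SIGUSR1);
	return 0;
}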
icl is already suspect due to bug 108315. In this case there was no CS event.
Last seen in CI_DRM_5090_8 (2 weeks, 2 days / 511 runs ago); it has only happened once, though.
Still only one occurrence so far. Can we close it?
*** This bug has been marked as a duplicate of bug 108315 ***
The CI Bug Log issue associated with this bug has been archived. New failures matching the above filters will no longer be associated with this bug.