Bug 97789 - [SKL] GPU HANG: ecode 9:0:0x85dffffb, in blender [2329], reason: Engine(s) hung, action: reset
Summary: [SKL] GPU HANG: ecode 9:0:0x85dffffb, in blender [2329], reason: Engine(s) hung, action: reset
Status: RESOLVED INVALID
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/DRI/i965
Version: 12.0
Hardware: x86-64 (AMD64) Linux (All)
Importance: medium major
Assignee: Ian Romanick
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-09-13 16:06 UTC by marco.grimaldi
Modified: 2017-02-10 22:39 UTC
CC List: 1 user

See Also:
i915 platform: SKL
i915 features: GPU hang


Attachments
cat /sys/class/drm/card0/error | gzip > error_blender.gz (75.11 KB, application/octet-stream)
2016-09-13 16:07 UTC, marco.grimaldi

Description marco.grimaldi 2016-09-13 16:06:50 UTC
The GPU freezes for about 5 seconds while working in Blender (mesh creation, editing, low poly).
The application does not crash.
Errors appear in dmesg (see below).

i6600k
i530

Blender 2.77a

uname -a
Linux moby 4.7.3-1-MANJARO #1 SMP PREEMPT Wed Sep 7 20:03:03 UTC 2016 x86_64 GNU/Linux

Xorg 1.18.4 (stock manjaro)
Mesa 12.0.2 (stock manjaro)

dmesg:
[12030.711106] [drm] stuck on render ring
[12030.711547] [drm] GPU HANG: ecode 9:0:0x85dffffb, in blender [2329], reason: Engine(s) hung, action: reset
[12030.711551] [drm] GPU hangs can indicate a bug anywhere in the entire gfx stack, including userspace.
[12030.711553] [drm] Please file a _new_ bug report on bugs.freedesktop.org against DRI -> DRM/Intel
[12030.711555] [drm] drm/i915 developers can then reassign to the right component if it's not a kernel issue.
[12030.711558] [drm] The gpu crash dump is required to analyze gpu hangs, so please always attach it.
[12030.711560] [drm] GPU crash dump saved to /sys/class/drm/card0/error
[12030.714518] drm/i915: Resetting chip after gpu hang
Comment 1 marco.grimaldi 2016-09-13 16:07:28 UTC
Created attachment 126485 [details]
cat /sys/class/drm/card0/error | gzip > error_blender.gz
Comment 2 yann 2016-09-13 16:51:21 UTC
Assigning to the Mesa product (please let me know if I am mistaken about this GPU hang).

From this error dump, the hang is happening in a render ring batch with the active head at 0xfdd77dfc and 0x7a000004 (PIPE_CONTROL) as the IPEHR.

Batch extract (around 0xfdd77dfc):

0xfdd77dd4:      0x784e0002: 3D UNKNOWN: 3d_965 opcode = 0x784e
0xfdd77dd8:      0x00000000: MI_NOOP
0xfdd77ddc:      0x00000000: MI_NOOP
0xfdd77de0:      0x00000000: MI_NOOP
Bad count in PIPE_CONTROL
0xfdd77de4:      0x7a000004: PIPE_CONTROL: no write, no depth stall, no RC write flush, no inst flush
0xfdd77de8:      0x00101001:    destination address
0xfdd77dec:      0x00000000:    immediate dword low
0xfdd77df0:      0x00000000:    immediate dword high
Bad count in PIPE_CONTROL
0xfdd77dfc:      0x7a000004: PIPE_CONTROL: no write, no depth stall, no RC write flush, no inst flush
0xfdd77e00:      0x00000408:    destination address
0xfdd77e04:      0x00000000:    immediate dword low
0xfdd77e08:      0x00000000:    immediate dword high
Bad length 8 in 3DSTATE_URB, expected 3-3
0xfdd77e14:      0x78050006: 3DSTATE_URB
0xfdd77e18:      0x304c1dff:    VS entries 7679, alloc size 77 (1024bit row)
0xfdd77e1c:      0xfcb64000:    GS entries 576, alloc size 1 (1024bit row)
0xfdd77e34:      0x78070003: 3D UNKNOWN: 3d_965 opcode = 0x7807
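
For reference, a decode like the extract above can typically be reproduced from the attached dump with intel_error_decode from intel-gpu-tools; the exact tool and its output format are an assumption based on common i915 debugging practice, not something stated in this report:

zcat error_blender.gz > error_blender    # decompress the attached /sys/class/drm/card0/error dump
intel_error_decode error_blender         # decode the ring/batch contents, including the PIPE_CONTROL packets above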
Comment 3 yann 2016-11-04 14:54:27 UTC
Please test a newer version of Mesa (12 or 13) and mark this bug as REOPENED
if you can reproduce the hang, or RESOLVED/* if you cannot.

If you can reproduce it, please capture and upload an apitrace (https://github.com/apitrace/apitrace) so that we can easily reproduce it as well.
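
A minimal capture sketch, assuming apitrace is installed and Blender is launched from a terminal (the trace file name follows apitrace's default naming for the blender binary and may differ):

apitrace trace blender           # run Blender under apitrace; a blender.trace file is written on exit
apitrace replay blender.trace    # confirm the hang reproduces on replay before uploading
gzip blender.trace               # traces can be large; compress the file before attaching it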
Comment 4 Annie 2017-02-10 22:39:20 UTC
Dear Reporter,

This Mesa bug has been in the NEEDINFO status for over 60 days. I am closing it due to the lack of a response, but feel free to reopen it if a resolution is still needed, making sure to supply the information requested above.

Thank you.

