Forwarding this bug from Ubuntu reporter PSN:
Yet another GPU lockup, this one with GNOME Shell, and with an unusual error code:
Only one we've seen reported with this error code, and so far the user has only experienced it once, on a fresh install.
Happened after closing gl-117, which was running at 1024x768 while the default resolution is 1366x768.
The resolution failed to switch back to the default and the system hung; I had to hard reset. After rebooting, GNOME Shell started in fallback mode.
DistroRelease: Ubuntu 11.10
Package: xserver-xorg-video-intel 2:2.15.901-1ubuntu2
ProcVersionSignature: Ubuntu 3.0.0-10.16-generic 3.0.4
Uname: Linux 3.0.0-10-generic i686
Date: Fri Sep 9 11:23:01 2011
DistUpgraded: Fresh install
DuplicateSignature: [arrandale] GPU lockup render.IPEHR: 0xff4c4c4c Ubuntu 11.10
GpuHangFrequency: This is the first time
GpuHangReproducibility: I don't know
Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 18) (prog-if 00 [VGA controller])
Subsystem: Acer Incorporated [ALI] Device [1025:0482]
InstallationMedia: Ubuntu 11.10 "Oneiric Ocelot" - Beta i386 (20110901)
MachineType: Acer Aspire 4738
ProcCmdline: /usr/bin/python /usr/share/apport/apport-gpu-error-intel.py
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.0.0-10-generic root=UUID=87efc057-537b-4925-ad54-7972f88dfef5 ro quiet splash vt.handoff=7
Title: [arrandale] GPU lockup render.IPEHR: 0xff4c4c4c
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.board.asset.tag: Base Board Asset Tag
dmi.board.version: Base Board Version
dmi.chassis.vendor: Chassis Manufacturer
dmi.chassis.version: Chassis Version
dmi.product.name: Aspire 4738
version.compiz: compiz 1:0.9.5.92+bzr2791-0ubuntu2
version.libdrm2: libdrm2 2.4.26-1ubuntu1
version.libgl1-mesa-dri: libgl1-mesa-dri 7.11-0ubuntu3
version.libgl1-mesa-dri-experimental: libgl1-mesa-dri-experimental N/A
version.libgl1-mesa-glx: libgl1-mesa-glx 7.11-0ubuntu3
version.xserver-xorg: xserver-xorg 1:7.6+7ubuntu6
version.xserver-xorg-input-evdev: xserver-xorg-input-evdev 1:2.6.0-1ubuntu13
version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:6.14.99~git20110811.g93fc084-0ubuntu1
version.xserver-xorg-video-intel: xserver-xorg-video-intel 2:2.15.901-1ubuntu2
version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:0.0.16+git20110411+8378443-1
Created attachment 51483
Created attachment 51484
Created attachment 51485
Created attachment 51486
Well, that's not a batchbuffer the GPU is trying to execute, that's just a pile of RGBA pixels. Hence the strange IPEHR code.
No idea where these pixels are coming from. We've only seen this on SNB and blamed it on semaphores not correctly syncing the batches.
Well, it was a batchbuffer. It has the cache domains to prove that the last time the CPU saw it, it was inactive and ready to execute...
With the split rings on SNB, it is much easier to trick the GPU into overwriting memory queued for execution on another ring. On ILK it requires the GPU to overwrite a bo that has already been flushed. One way would be userspace having used an absolute relocation, which is very unlikely.
I have seen such corrupt batches with very short-lived ddx bugs (e.g. marking cache domains incorrectly or not clipping drawing commands correctly) or tiling issues with pipelined fencing and map_and_fenceable. All of which do not seem to apply here.
There's something pretty odd with this error_state. The batchbuffer containing the rgba values has read_domains = GTT | INSTRUCTION | COMMAND | SAMPLER, write_domains = 0.
The first three read domains are fine (and should be set like that), but SAMPLER makes no sense. We should never want to sample a batchbuffer, and the write that turned this bo into something worth sampling (or executing, depending upon the ordering) should have invalidated the other domains. This is fishy.
SAMPLER is (or was) used for the surface binding table in Mesa. And the surface binding table is embedded at the tail of the batchbuffer, so yes, it could legitimately be moved to the SAMPLER domain following the batchbuffer pwrite.
My current favourite hypothesis is a wild write from one of the intervening batches, and we have a mix of ddx/dri suspects.
Bug 41102 is similar. That looks like a batch buffer clobbered by BLT?
I believe this is related to:
Author: Chris Wilson <email@example.com>
Date: Wed Dec 14 13:57:23 2011 +0100
drm/i915: Only clear the GPU domains upon a successful finish
By clearing the GPU read domains before waiting upon the buffer, we run
the risk of the wait being interrupted and the domains prematurely
cleared. The next time we attempt to wait upon the buffer (after
userspace handles the signal), we believe that the buffer is idle and so
skip the wait.
There are a number of bugs across all generations which show signs of an
overly hasty reuse of active buffers.
A couple of those pre-date i915_gem_object_finish_gpu(), so may be
unrelated (such as a wild write from a userspace command buffer), but
this does look like a convincing cause for most of those bugs.
Signed-off-by: Chris Wilson <firstname.lastname@example.org>
Reviewed-by: Daniel Vetter <email@example.com>
Reviewed-by: Eugeni Dodonov <firstname.lastname@example.org>
Signed-off-by: Daniel Vetter <email@example.com>
Marking as a duplicate to show the relationship.
*** This bug has been marked as a duplicate of bug 29046 ***
Closing resolved+duplicate as duplicate of closed+fixed.