I have an old PC with a biw1b motherboard (i810 chipset) running the 2.6.26-2-486 Linux kernel. When playing the crack-attack game, X _sometimes_ (i.e., it's hard to reproduce this behaviour) freezes: first it stops responding to the keyboard and mouse, then (after approx. 3 seconds) the screen goes blank. Nothing except ctrl-alt-del works (in X), and restarting gdm (over the network) does not help.

Both Xorg and crack-attack use /dev/dri/card0:

host:~# lsof | grep dri
Xorg       2336 root   8u  CHR 226,0  7091 /dev/dri/card0
Xorg       2336 root  10u  CHR 226,0  7091 /dev/dri/card0    (why twice?)
crack-att  3131 user   4u  CHR 226,0  7091 /dev/dri/card0

So I suspect a race condition somewhere in drm.ko. dmesg shows this (seen over an ssh session):

[ 4326.132052] [drm:i810_wait_ring] *ERROR* space: 64588 wanted 65528
[ 4326.132074] [drm:i810_wait_ring] *ERROR* lockup

...repeated several times. This happens in linux/drivers/char/drm/i810_dma.c:i810_wait_ring() (note end = jiffies + (HZ * 3) there: those are the three seconds).

Besides, each time Xorg starts, linux/drivers/char/drm/drm.ko complains in its 'release' method like this:

[  271.220061] [drm:drm_release] *ERROR* reclaim_buffers_locked() deadlock. Please rework this
[  271.220078] driver to use reclaim_buffers_idlelocked() instead.
[  271.220087] I will go on reclaiming the buffers anyway.
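To show what I think the wait loop is doing, here is a rough, self-contained user-space model of i810_wait_ring(). The HZ value, the jiffies counter and the read_hw_head() helper are simulated stand-ins for illustration, not taken from the driver. With a head pointer that never advances, the model gives up after the 3-second window and prints the same "space ... wanted" / "lockup" pair as above:

--
/*
 * Rough user-space model of the wait loop in i810_dma.c:i810_wait_ring().
 * NOT the real driver code: jiffies/HZ are simulated and read_hw_head()
 * stands in for the hardware ring HEAD register read.
 */
#include <stdio.h>

#define HZ        250          /* assumed tick rate, for illustration only */
#define RING_SIZE 65536        /* 64 KiB ring; a fully idle ring has Size - 8 free */

static unsigned long jiffies;  /* simulated tick counter */

/* Pretend the engine is wedged: the head register never moves again. */
static int read_hw_head(void)
{
    return 940;                /* arbitrary stuck value */
}

static int wait_ring(int tail, int n)
{
    int head, last_head = read_hw_head();
    unsigned long end = jiffies + (HZ * 3);   /* the 3-second window */

    for (;;) {
        head = read_hw_head();
        int space = head - (tail + 8);
        if (space < 0)
            space += RING_SIZE;               /* wrap-around */
        if (space >= n)
            return 0;                         /* enough room, done */

        if (head != last_head) {              /* head moved: engine alive, push deadline out */
            end = jiffies + (HZ * 3);
            last_head = head;
        }
        if (jiffies > end) {                  /* no progress for 3 s: declare a lockup */
            printf("[drm:i810_wait_ring] *ERROR* space: %d wanted %d\n", space, n);
            printf("[drm:i810_wait_ring] *ERROR* lockup\n");
            return -1;
        }
        jiffies++;                            /* stand-in for udelay(1) + time passing */
    }
}

int main(void)
{
    /* Ask for a completely drained ring, as the flush path does. */
    return wait_ring(/*tail=*/1880, /*n=*/RING_SIZE - 8) ? 1 : 0;
}
--

In the real driver the head value comes from a hardware register read, so an engine that has stopped processing the ring (or a stale register) would presumably produce exactly this pattern.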
Any chance you could try newer bits? 2.6.26 is pretty ancient. Not that anyone really works on i810 anymore, but it's worth a try.
(In reply to comment #1)
> 2.6.26 is pretty ancient.

Really? It comes with the Debian distro, which I installed only 3 months ago (last time).

> Not that anyone really works on i810 anymore, but it's worth a try.

Well... the box itself is of course really ancient. But, as far as I can see, linux/drivers/char/drm/drm_fops.c is not hardware-specific. Which driver does drm_release() complain about ("please rework this driver" in the source)? About itself or about i810?
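Reading drm_fops.c a bit further myself, the message seems to refer to whichever hardware driver registered a reclaim_buffers_locked hook (i.e. i810 here), not to the DRM core. Here is a stripped-down user-space sketch of the dispatch; the struct and field names are paraphrased from my reading of the 2.6.26 source and may not match the kernel exactly:

--
/*
 * Stripped-down user-space sketch of how the DRM core dispatches the
 * buffer-reclaim hook at release time.  Names are paraphrased from my
 * reading of 2.6.26 drm_fops.c / i810_drv.c and may not match exactly.
 */
#include <stdio.h>

struct drm_device;

/* Per-hardware driver vtable: each driver (i810, radeon, ...) fills this in. */
struct drm_driver_ops {
    const char *name;
    void (*reclaim_buffers_locked)(struct drm_device *dev);      /* old-style hook */
    void (*reclaim_buffers_idlelocked)(struct drm_device *dev);  /* preferred hook */
};

struct drm_device {
    const struct drm_driver_ops *driver;
};

static void i810_reclaim_buffers_locked(struct drm_device *dev)
{
    printf("%s: reclaiming buffers while holding the HW lock\n", dev->driver->name);
}

/* The core release path: hardware-agnostic, it only calls through dev->driver,
 * so "this driver" in the warning means the hardware driver, not the core. */
static void drm_core_release(struct drm_device *dev, int have_hw_lock)
{
    if (dev->driver->reclaim_buffers_locked && !have_hw_lock)
        printf("core: %s still uses reclaim_buffers_locked(); this is the case "
               "the deadlock warning is about\n", dev->driver->name);
    dev->driver->reclaim_buffers_locked(dev);
}

int main(void)
{
    const struct drm_driver_ops i810_ops = {
        .name = "i810",
        .reclaim_buffers_locked = i810_reclaim_buffers_locked,
        /* no reclaim_buffers_idlelocked: exactly what the core complains about */
    };
    struct drm_device dev = { .driver = &i810_ops };

    drm_core_release(&dev, /*have_hw_lock=*/0);
    return 0;
}
--

So, to answer my own question: the core seems to be complaining about i810, which still implements only the old hook.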
I tried the 2.6.33.2 kernel (latest stable from kernel.org): same story. There is nothing surprising here, since the i810 DRM driver is exactly the same in this version. So the bug is likely not in the DRM core but in the i810 DRM driver.

Also, I found how to reproduce the 'lockup' without playing crack-attack: it occurs every time gdm is stopped. dmesg output after '/etc/init.d/gdm stop' issued on tty1:

--
[  952.132097] [drm:drm_release] *ERROR* reclaim_buffers_locked() deadlock. Please rework this
[  952.132113] driver to use reclaim_buffers_idlelocked() instead.
[  952.132121] I will go on reclaiming the buffers anyway.
[  955.136045] [drm:i810_wait_ring] *ERROR* space: 65520 wanted 65528
[  955.136136] [drm:i810_wait_ring] *ERROR* lockup
[  955.148005] [drm] DMA Cleanup
--

Xorg log:

--
(II) AIGLX: Suspending AIGLX clients for VT switch   // switching from tty7 to tty1
(II) intel(0): [drm] removed 1 reserved context for kernel
(II) intel(0): [drm] unmapping 8192 bytes of SAREA 0xd8af1000 at 0xb7b61000
(II) intel(0): [drm] Closed DRM master.
(WW) intel(0): xf86UnMapVidMem: cannot find region for [0xb39fa000,0x3000000]
--

Well, it's clear that nobody has any desire to mess about with very, very old hardware... %) If there were complete documentation on writing DRM drivers I would try to fix the bug myself, but there is none, unfortunately.
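To connect the two error groups above: closing the last DRM file descriptor (which is what stopping gdm does) runs the release path, the release path reclaims buffers, and the reclaim/flush path waits for the ring to be completely idle. The toy timeline below (simulated timing, paraphrased function names, not the real kernel code) shows why the lockup message lands almost exactly 3 seconds after the reclaim warning, matching the 952.13 -> 955.13 jump in the dmesg output:

--
/*
 * Toy timeline of what I believe happens when gdm is stopped and the DRM
 * file descriptor is closed.  User-space illustration only: function names
 * are paraphrased from drm_fops.c / i810_dma.c and the timing is simulated.
 */
#include <stdio.h>

#define RING_SIZE         65536
#define DRAIN_TIMEOUT_SEC 3            /* the HZ * 3 window in i810_wait_ring() */

static double now;                     /* simulated "seconds since boot" */

/* Pretend the engine is wedged: free space frozen just short of "fully idle". */
static int ring_space(void)
{
    return 65520;                      /* the value from the log; never reaches 65528 */
}

static int wait_for_idle_ring(void)
{
    int wanted = RING_SIZE - 8;        /* 65528: a completely drained ring */
    double deadline = now + DRAIN_TIMEOUT_SEC;

    while (ring_space() < wanted) {
        now += 0.001;                  /* stand-in for the driver's polling delay */
        if (now > deadline) {
            printf("[%8.3f] ring never drained: space %d, wanted %d -> lockup\n",
                   now, ring_space(), wanted);
            return -1;
        }
    }
    return 0;
}

/* Closing /dev/dri/card0 (what 'gdm stop' ends up doing) reaches this path. */
static void release_and_reclaim(void)
{
    printf("[%8.3f] release: reclaiming buffers via the old locked hook\n", now);
    wait_for_idle_ring();              /* this is where the 3 seconds go */
    printf("[%8.3f] DMA cleanup\n", now);
}

int main(void)
{
    now = 952.132;                     /* timestamp of the first error group */
    release_and_reclaim();
    return 0;
}
--

If that reading is right, the remaining question is why the ring still reports 8 bytes in flight at that point: either the engine genuinely hung, or the head/tail bookkeeping was already torn down before the reclaim ran (which would fit the race I suspected in the original report).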
Yeah, you're currently stuck with reading the source; the git logs may have some useful info on converting things...
This issue affects a hardware component that is no longer being actively worked on. Moving the assignee to the dri-devel list as contact, to give this issue better coverage.
I've looked into the i810 drm kernel driver a few times while cleaning up various things across drm, and the code in there is horrible. It is so horrible that you stop caring about the race conditions you've noticed after reading just a few functions ... :( Given that, I think the only way to fix this disaster is to rewrite the i810 support as a new, clean kernel modesetting driver. That needs a volunteer with too much time on their hands. Hence I'll close this as wontfix. Sorry.