Bug 53122

Summary: X lockups
Product: DRI
Reporter: #Paul <201208bugzillaz>
Component: DRM/Radeon
Assignee: Default DRI bug account <dri-devel>
Status: RESOLVED MOVED
QA Contact:
Severity: normal
Priority: medium
Version: XOrg git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Whiteboard:

Attachments:
X log file (flags: none)
/var/log/messages (dmesg) (flags: none)

Description #Paul 2012-08-04 14:46:22 UTC
Created attachment 65122 [details]
X log file

Upon swapping video cards, I have started getting intermittent freezes of X
on my Slackware64 13.37 AMD Opteron box (kernel 3.4.4).

The display freezes up totally, including the mouse pointer. Occasionally there are short (~10 s) freezes which might be related. It usually happens while
web browsing (mostly when there is embedded Flash on the bbc.co.uk Olympics pages).

I hope the below is sufficient info; I need to swap the video cards back because I can't afford such lockups at the moment.


Here are the relevant lspci lines:
0a:00.0 VGA compatible controller: ATI Technologies Inc RV710 [Radeon HD 4550] (prog-if 00 [VGA controller])
        Subsystem: Giga-byte Technology Device 21ae
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 74
        Region 0: Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Region 2: Memory at f0500000 (64-bit, non-prefetchable) [size=64K]
        Region 4: I/O ports at 4000 [size=256]
        [virtual] Expansion ROM at f0520000 [disabled] [size=128K]
        Capabilities: <access denied>
        Kernel driver in use: radeon
        Kernel modules: radeon
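For reference, per-device output like the above comes from lspci's verbose mode. A minimal sketch, assuming the slot address 0a:00.0 from this report (the fallbacks are only there for machines without the tool or the device):

```shell
# -vv prints the Control/Status/BAR detail quoted above; without root,
# the Capabilities section shows "<access denied>" exactly as in this report.
if command -v lspci >/dev/null 2>&1; then
    info=$(lspci -vv -s 0a:00.0 2>/dev/null)
    echo "${info:-no device at 0a:00.0 on this machine}"
else
    info=""
    echo "lspci not installed (it ships in the pciutils package)"
fi
```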


This appears in /var/log/messages (if I wait 20+ minutes, it seems to unfreeze fairly reliably):

Aug  3 17:28:11 fishpond acpid: client 2126[0:0] has disconnected
Aug  3 17:28:11 fishpond acpid: client connected from 2126[0:0]
Aug  3 17:28:11 fishpond acpid: 1 client rule loaded
Aug  3 17:48:19 fishpond kernel: [68040.346054] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 17:50:19 fishpond kernel: [68160.346048] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 17:52:19 fishpond kernel: [68280.346054] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 17:54:19 fishpond kernel: [68400.346054] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 17:56:19 fishpond kernel: [68520.346048] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 17:58:19 fishpond kernel: [68640.346053] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 18:00:19 fishpond kernel: [68760.346054] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 18:02:19 fishpond kernel: [68880.346047] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 18:04:19 fishpond kernel: [69000.346052] X               D ffffffff816065c0     0  2111   2098 0x00400004
Aug  3 18:06:19 fishpond kernel: [69120.346052] X               D ffffffff816065c0     0  2111   2098 0x00400004

And instances of this appear in /var/log/syslog:

[69120.346044] INFO: task X:2111 blocked for more than 120 seconds.
[69120.346049] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[69120.346052] X               D ffffffff816065c0     0  2111   2098 0x00400004
[69120.346058]  ffff8801743d9b58 0000000000000082 ffff8801743d9ae8 0000000000011dc0
[69120.346062]  0000000000011dc0 ffff880179dd06d0 0000000000011dc0 ffff8801743d9fd8
[69120.346066]  ffff8801743d8000 0000000000011dc0 ffff8801743d9fd8 0000000000011dc0
[69120.346071] Call Trace:
[69120.346083]  [<ffffffff81562859>] schedule+0x29/0x70
[69120.346087]  [<ffffffff81562ade>] schedule_preempt_disabled+0xe/0x10
[69120.346091]  [<ffffffff815613b9>] __mutex_lock_slowpath+0xd9/0x150
[69120.346095]  [<ffffffff815610fb>] mutex_lock+0x2b/0x50
[69120.346125]  [<ffffffffa0222820>] radeon_bo_create+0x150/0x2a0 [radeon]
[69120.346141]  [<ffffffffa0233e6a>] radeon_gem_object_create+0x5a/0x100 [radeon]
[69120.346155]  [<ffffffffa0234254>] radeon_gem_create_ioctl+0x54/0xe0 [radeon]
[69120.346160]  [<ffffffff815610ee>] ? mutex_lock+0x1e/0x50
[69120.346174]  [<ffffffffa023470c>] ? radeon_gem_get_tiling_ioctl+0xbc/0xf0 [radeon]
[69120.346189]  [<ffffffffa013405f>] drm_ioctl+0x2cf/0x520 [drm]
[69120.346204]  [<ffffffffa0234200>] ? radeon_gem_pwrite_ioctl+0x30/0x30 [radeon]
[69120.346210]  [<ffffffff81142ad7>] do_vfs_ioctl+0x97/0x540
[69120.346213]  [<ffffffff81143011>] sys_ioctl+0x91/0xa0
[69120.346217]  [<ffffffff815640d2>] system_call_fastpath+0x16/0x1b
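The watchdog producing these reports is the kernel's hung task detector: it flags tasks that stay in uninterruptible sleep ("D" state, as X does above) for longer than its timeout, 120 seconds by default. A minimal sketch of inspecting the knob that the log message itself mentions (reading works unprivileged where the detector is built in; writing it, e.g. to disable the warning, needs root):

```shell
# CONFIG_DETECT_HUNG_TASK exposes this sysctl; writing 0 disables the
# warning, which is what the "echo 0 > ..." hint in the log refers to.
knob=/proc/sys/kernel/hung_task_timeout_secs
if [ -r "$knob" ]; then
    echo "hung task timeout: $(cat "$knob")s"
else
    echo "hung task detector not enabled in this kernel"
fi
```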


There is no error logged by X, but I've attached a log file anyway (note that I run 6 simultaneous X servers on vt7-vt12).
Comment 1 Alex Deucher 2012-08-04 15:33:44 UTC
Please attach your dmesg output.
Comment 2 #Paul 2012-08-05 15:41:41 UTC
(In reply to comment #1)
> Please attach your dmesg output.

Isn't that what appears in the syslog (which I already quoted)? I've kept the old syslog, but I've since rebooted and dmesg now reports something else.
Comment 3 Alex Deucher 2012-08-05 23:11:37 UTC
(In reply to comment #2)
> Isn't that what appears in the syslog? (which I already quoted). I've got the
> old syslog, but I've since rebooted and dmesg now reports something else.

Please attach your full dmesg so we can see more details about your hw configuration.
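As a general note, the full kernel ring buffer being asked for here can be captured with something like the following (the output path is just an example; dmesg contents are lost on reboot, so they should be saved while the problem state is still in the log):

```shell
# Save the kernel ring buffer to a file suitable for attaching to a bug.
out=/tmp/dmesg-capture.txt
if dmesg > "$out" 2>/dev/null && [ -s "$out" ]; then
    echo "kernel log saved to $out"
else
    # Unprivileged reads may be blocked by the kernel.dmesg_restrict sysctl.
    rm -f "$out"
    echo "dmesg restricted here; run as root or pull /var/log/messages instead"
fi
```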
Comment 4 #Paul 2012-08-06 09:07:01 UTC
Created attachment 65154 [details]
/var/log/messages (dmesg)
Comment 5 Martin Peres 2019-11-19 08:28:35 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/drm/amd/issues/291.

Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. How we collect and use information is described in our Privacy Policy.