Bug 19267 - EXAVsync kills Xorg with Radeon 9550
Summary: EXAVsync kills Xorg with Radeon 9550
Status: RESOLVED FIXED
Alias: None
Product: xorg
Classification: Unclassified
Component: Driver/Radeon
Version: git
Hardware: x86 (IA32) Linux (All)
Importance: medium normal
Assignee: xf86-video-ati maintainers
QA Contact: Xorg Project Team
URL:
Whiteboard:
Keywords:
Duplicates: 20351
Depends on:
Blocks:
 
Reported: 2008-12-23 16:46 UTC by Chris Rankin
Modified: 2009-10-21 13:40 UTC
CC List: 2 users

See Also:


Attachments
Xorg log file after the process has frozen. (33.69 KB, text/plain)
2008-12-23 16:50 UTC, Chris Rankin
no flags

Description Chris Rankin 2008-12-23 16:46:22 UTC
I have a Radeon 9550 in a dual P4 Xeon machine (HT enabled, to provide 4 logical CPUs). I am using Fedora 9 with the radeon xorg driver from git:

Section "Device"
	Identifier  "Radeon 9550"
	Driver      "radeon"
	Option      "AGPMode" "8"
	Option      "GARTSize" "128"
	Option      "AccelMethod" "EXA"
	Option      "AccelDFS" "on"
	Option      "RenderAccel" "on"
	Option      "EnablePageFlip" "on"
	Option      "MigrationHeuristic" "greedy"
EndSection

This section works fine, but if I add:

	Option      "EXAVSync" "on"

then the Xorg server immediately starts spinning on a CPU (100% usage) when I try to login, and I am forced to ssh in from a remote machine in order to reboot. (Killing the Xorg process makes the machine responsive again, but doesn't seem to clear Xorg's process slot).
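The remote-recovery step described above can be sketched roughly as follows. This is an editorial sketch, not part of the original report: the process name `Xorg`, the `pgrep`/`kill` tooling, and the SIGTERM-then-SIGKILL escalation are all assumptions (a server spinning at 100% CPU in a tight loop may well ignore SIGTERM, which matches the reporter's observation that the process slot is not cleared).

```shell
#!/bin/sh
# Sketch: from an ssh session, locate and kill a runaway X server.
# The process name ("Xorg") is an assumption; some setups run it as "X".
kill_runaway() {
    name="$1"
    pid=$(pgrep -x "$name" | head -n 1)
    if [ -z "$pid" ]; then
        echo "no '$name' process found" >&2
        return 1
    fi
    # Ask politely first; a server stuck spinning may ignore SIGTERM,
    # so fall back to SIGKILL after a short grace period.
    kill "$pid" 2>/dev/null
    sleep 2
    if kill -0 "$pid" 2>/dev/null; then
        kill -9 "$pid"
    fi
}

# usage (from the ssh session): kill_runaway Xorg
```

Even after a successful SIGKILL, a process stuck in an uninterruptible kernel wait can remain in the process table, which would explain the slot not being cleared; only a reboot recovers that.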

This is my graphics card's PCI information:

01:00.0 VGA compatible controller: ATI Technologies Inc RV350 AS [Radeon 9550] (prog-if 00 [VGA controller])
	Subsystem: C.P. Technology Co. Ltd Unknown device 2084
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64 (2000ns min), Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 16
	Region 0: Memory at e0000000 (32-bit, prefetchable) [size=256M]
	Region 1: I/O ports at ec00 [size=256]
	Region 2: Memory at ff8f0000 (32-bit, non-prefetchable) [size=64K]
	Expansion ROM at ff800000 [disabled] [size=128K]
	Capabilities: [58] AGP version 3.0
		Status: RQ=256 Iso- ArqSz=0 Cal=0 SBA+ ITACoh- GART64- HTrans- 64bit- FW+ AGP3+ Rate=x4,x8
		Command: RQ=32 ArqSz=2 Cal=0 SBA+ AGP+ GART64- 64bit- FW- Rate=x8
	Capabilities: [50] Power Management version 2
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 PME-Enable- DSel=0 DScale=0 PME-

01:00.1 Display controller: ATI Technologies Inc RV350 AS [Radeon 9550] (Secondary)
	Subsystem: C.P. Technology Co. Ltd Unknown device 2085
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64 (2000ns min), Cache Line Size: 64 bytes
	Region 0: Memory at d0000000 (32-bit, prefetchable) [size=256M]
	Region 1: Memory at ff8e0000 (32-bit, non-prefetchable) [size=64K]
	Capabilities: [50] Power Management version 2
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Comment 1 Chris Rankin 2008-12-23 16:50:58 UTC
Created attachment 21456 [details]
Xorg log file after the process has frozen.

This log file dates from a few weeks ago, almost certainly the first time that EXAVsync support hit the git repository. But I reproduced the problem with the RC1 driver on Monday 22nd December as well.
Comment 2 Michel Dänzer 2008-12-24 00:25:15 UTC
Driver issue.
Comment 3 Alex Deucher 2009-02-27 06:07:56 UTC
*** Bug 20351 has been marked as a duplicate of this bug. ***
Comment 4 Alex Deucher 2009-10-21 13:40:47 UTC
This should be fixed.  Please reopen if you are still having problems.