| Summary: | ati radeon r100 driver fails to load on freebsd 6.2 stable | | |
|---|---|---|---|
| Product: | xorg | Reporter: | L. S. Colby <ls.colby> |
| Component: | Driver/Radeon | Assignee: | xf86-video-ati maintainers <xorg-driver-ati> |
| Status: | RESOLVED FIXED | QA Contact: | Xorg Project Team <xorg-team> |
| Severity: | major | | |
| Priority: | high | CC: | cheryl, jfrieben, jisakiel |
| Version: | 7.3 (2007.09) | | |
| Hardware: | x86 (IA32) | | |
| OS: | FreeBSD | | |
| Whiteboard: | | | |
| i915 platform: | | i915 features: | |
| Attachments: | | | |
Description: L. S. Colby, 2007-09-19 11:46:05 UTC
I'm assuming this is a single-headed Radeon? It seems to be lacking a connector table and is getting the default connector setup. I probably need to add a special-case fallback for single-CRTC chips. Can you try again with ati git master or 6.7.193?

The same happens for the current Fedora development tree, see https://bugzilla.redhat.com/show_bug.cgi?id=261021 and references therein. So this bug is not restricted to FreeBSD but also afflicts GNU/Linux systems. The issue persists for all 6.7.x driver versions including 6.7.194, as well as the current git tree as of 2007-09-29.

The hardware setup consists of an ATI Radeon AIW (ATI Technologies Inc Radeon R100 QD [Radeon 7200]) to which an HP A4576A 21" CRT is connected via the analog VGA D-SUB15 connector. This configuration never caused problems up to driver version 6.6.3.

Created attachment 11819 [details]
Xorg.0.log for ATI driver version 6.7.194
Can you get a backtrace of the crash with gdb?

Created attachment 11838 [details]
GDB output for ATI driver version 6.7.194
*** Bug 12296 has been marked as a duplicate of this bug. ***

I should have a fix for this in the next few days (as soon as I get some time).

This should be fixed, but there may be other places where the code falls down on single-CRTC cards (I don't actually have one to test). If you find any other problems, let me know.

commit: 0ca184c3c35032df39ea7ce5d2d4aba1a97b6426

Works partially for me. After adding a git-enabled overlay (primozic's) and recompiling libdrm and xf86-video-ati, the X server starts, though seemingly with the DPI changed and only at 75 Hz (but that's a whole different can of worms, same as trying EXA ;)), and my xorg.conf is mostly automatic detection, as I reported in the duplicate https://bugs.freedesktop.org/show_bug.cgi?id=12296.

However: if I switch to a VT after starting X, the server hangs and leaves the screen in standby mode. That didn't happen before with 6.6.3. It doesn't hang my system fully, though: when I started X via startx I couldn't do anything but reboot via ctrl-alt-sysrq-b, as my screen was in standby and changing the VT didn't work. However, when started by gdm it rebooted itself after a while (still in standby until the server started).

As said in #12296, I'm using 2.6.22-kamikaze9 with radeonfb and xserver 1.4-r2 from portage; just libdrm and xf86-video-ati are from git (as specified in the dependencies of the ebuild; if that could be a source of problems I can pull everything from git instead with the 9999 ebuilds). I didn't rebuild anything but the ati driver after emerging git's libdrm, and I have debug deactivated, so this should not be considered a confirmed problem yet. I will recompile everything ASAP with debug enabled and without -fomit-frame-pointer (mesa, mesa-progs, x11-drm, xorg-server, libdrm).
By now the backtrace is meaningless:

```
(II) AIGLX: Suspending AIGLX clients for VT switch
disable montype: 1
(II) RADEON(0): RADEONRestoreMemMapRegisters() :
(II) RADEON(0):   MC_FB_LOCATION   : 0xffff0000
(II) RADEON(0):   MC_AGP_LOCATION  : 0x003fffc0
finished PLL1

Backtrace:
0: /usr/bin/X(xf86SigHandler+0x82) [0x80d3312]

Fatal server error:
Caught signal 11.  Server aborting

(II) Mouse1-usb-0000:00:02.0-3.2/input0: Off
(II) UnloadModule: "evdev"
(II) Keyboard1-usb-0000:00:02.0-3.4/input0: Off
(II) UnloadModule: "evdev"
(II) Keyboard1-usb-0000:00:02.0-3.4/input1: Off
(II) UnloadModule: "evdev"
(II) AIGLX: Suspending AIGLX clients for VT switch
disable montype: 1
(II) RADEON(0): RADEONRestoreMemMapRegisters() :
(II) RADEON(0):   MC_FB_LOCATION   : 0xffff0000
(II) RADEON(0):   MC_AGP_LOCATION  : 0x003fffc0
finished PLL1
```

This may just be a matter of rebuilding, so I won't panic yet :D, or perhaps another tiny change somewhere else, as said.

After rebuilding the xorg-x11-drv-ati-6.7.194-2.fc8 driver package on an otherwise up-to-date Fedora rawhide system, the X server starts up correctly, which means that the main issue has gone away. However, the X server now ignores the 1400x1050 mode entry in xorg.conf and defaults to 1280x1024 at 85 Hz, which it didn't do before. After removing the 1400x1050 mode entry from xorg.conf, the X server still defaults to 1280x1024 at 85 Hz, but at least now the GNOME display preferences allow the user to change the resolution to 1600x1200 at 75 Hz, whereas before only lower resolutions were offered. However, the 1400x1050 mode has disappeared completely. It has to be added, though, that the Fedora X server has been patched, so this observation might not apply to upstream Xorg. Note that the monitor is an HP A4576A 21" CRT for which 1400x1050 is the natural resolution both in terms of DPI and aspect ratio (4:3). When switching to a virtual console, the monitor powers off.
It is still possible to switch back to vt7, but then the X server simply restarts. It is also possible to reboot the system via ctrl-alt-del. It seems that switching to a virtual console makes the X server crash, as already reported in comment #11. The relevant entries in Xorg.0.log related to the crash are:

```
(II) AIGLX: Suspending AIGLX clients for VT switch
disable montype: 1
(II) RADEON(0): RADEONRestoreMemMapRegisters() :
(II) RADEON(0):   MC_FB_LOCATION   : 0xffff0000
(II) RADEON(0):   MC_AGP_LOCATION  : 0x003fffc0
finished PLL1

Backtrace:
0: /usr/bin/X(xf86SigHandler+0x81) [0x80cdf21]
1: [0x12d420]
2: /lib/libc.so.6 [0x3532a8]
3: /usr/lib/xorg/modules/drivers//radeon_drv.so(RADEONLeaveVT+0x65) [0x505315]
4: /usr/lib/xorg/modules//libxaa.so [0x5bacc7]
5: /usr/bin/X [0x80b34cd]
6: /usr/lib/xorg/modules/extensions//libglx.so [0x49df8f]
7: /usr/bin/X(xf86Wakeup+0x289) [0x80cf509]
8: /usr/bin/X(WakeupHandler+0x59) [0x808c859]
9: /usr/bin/X(WaitForSomething+0x1ae) [0x81b60ae]
10: /usr/bin/X(Dispatch+0x8d) [0x808864d]
11: /usr/bin/X(main+0x495) [0x80704a5]
12: /lib/libc.so.6(__libc_start_main+0xe0) [0x214320]
13: /usr/bin/X(FontFileCompleteXLFD+0x1f1) [0x806f791]

Fatal server error:
Caught signal 11.  Server aborting

(II) AIGLX: Suspending AIGLX clients for VT switch
disable montype: 1
(II) RADEON(0): RADEONRestoreMemMapRegisters() :
(II) RADEON(0):   MC_FB_LOCATION   : 0xffff0000
(II) RADEON(0):   MC_AGP_LOCATION  : 0x003fffc0
finished PLL1
```

(In reply to comment #12)
> After rebuilding the xorg-x11-drv-ati-6.7.194-2.fc8 driver package on an
> otherwise up-to-date Fedora rawhide system, the X server starts up correctly,
> which means that the main issue has gone away. However, the X server now
> ignores the 1400x1050 mode entry in xorg.conf and defaults to 1280x1024 at
> 85 Hz, which it didn't do before.

Can you attach your Xorg log? I suspect your monitor doesn't have a mode in the EDID for 1400x1050.
The EDID's preferred mode is probably 1280x1024@85; that's why it's getting set by default. You can manually add the 1400x1050 mode:

```
xrandr --newmode <1400x1050 modeline>
xrandr --addmode VGA-0 <1400x1050 mode name>
```

At the moment I only add the screen modes to the LVDS output (and even then I probably shouldn't). The problem is, with RandR, which output do you want the screen modes added to? You may not want them on all outputs. You should be able to add a monitor section for each output and add the modes you want there, but I'm not sure the server adds them properly (I need to double-check that). Finally, can you attach the backtrace from the new VT switch crash?

As I feared, rebuilding everything didn't help; I still get the crash when switching, as happens in comment #12. However, I can't get a proper backtrace; after rebuilding without -fomit-frame-pointer and with debug enabled, and crashing while switching, the Xorg log still only shows:

```
(II) AIGLX: Suspending AIGLX clients for VT switch
disable montype: 1
(II) RADEON(0): RADEONRestoreMemMapRegisters() :
(II) RADEON(0):   MC_FB_LOCATION   : 0xd3ffd000
(II) RADEON(0):   MC_AGP_LOCATION  : 0xd87fd800
finished PLL1

Backtrace:
0: /usr/bin/X(xf86SigHandler+0x7e) [0x80d179e]
1: [0xffffe420]

Fatal server error:
Caught signal 11.  Server aborting

(II) Mouse1-usb-0000:00:02.0-3.2/input0: Off
(II) UnloadModule: "evdev"
(II) Keyboard1-usb-0000:00:02.0-3.4/input0: Off
(II) UnloadModule: "evdev"
(II) Keyboard1-usb-0000:00:02.0-3.4/input1: Off
(II) UnloadModule: "evdev"
(II) AIGLX: Suspending AIGLX clients for VT switch
disable montype: 1
(II) RADEON(0): RADEONRestoreMemMapRegisters() :
(II) RADEON(0):   MC_FB_LOCATION   : 0xd3ffd000
(II) RADEON(0):   MC_AGP_LOCATION  : 0xd87fd800
finished PLL1
```

Trying to debug via gdb didn't work either; when attaching to the server it hangs it with error `0xffffe410 in __kernel_vsyscall ()` and backtrace

```
#0  0xffffe410 in __kernel_vsyscall ()
#1  0x48e7118d in ___newselect_nocancel () from /lib/libc.so.6
#2  0x081adac1 in WaitForSomething (pClientsReady=0xafcf5300) at WaitFor.c:235
#3  0x0808d3c2 in Dispatch () at dispatch.c:425
#4  0x08074c65 in main (argc=9, argv=0xafcf5834, envp=0x0) at main.c:452
```

I guess that is a problem of my own system, but I accept any suggestions to get a proper bt. I'll attach a full gdb log of this not-completely-related crash though, and wait for further news.

Created attachment 11877 [details]
GDB bt of crash when tried to attach
(In reply to comment #14)
> Trying to debug via gdb didn't work neither; when attaching to the server it
> hangs it with error 0xffffe410 in __kernel_vsyscall () and backtrace

Where does it say error? :) It's just the backtrace at the time gdb attaches to the process, which implicitly stops its execution. At that point, enter `handle SIGUSR1 nostop` to make gdb ignore the signals involved in VT switches, then `continue` to resume execution of the X server. Then get the backtrace once it hits the SIGSEGV.
> Where does it say error? :) It's just the backtrace at the time gdb attaches to
> the process, which implicitly stops its execution.
Ok, sorry about that :*), I'm not that used to gdb yet. After the `handle` and `continue`, when I change to a VT I get the following gdb log:
```
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1477245248 (LWP 13740)]
0x48eed2d0 in main_arena () from /lib/libc.so.6
(gdb) bt f
#0  0x48eed2d0 in main_arena () from /lib/libc.so.6
No symbol table info available.
#1  0x00000000 in ?? ()
No symbol table info available.
(gdb) continue
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x48eed2d0 in main_arena () from /lib/libc.so.6
(gdb) bt
#0  0x48eed2d0 in main_arena () from /lib/libc.so.6
#1  0xa7de8aff in RADEONRestore (pScrn=0x82186b8) at radeon_driver.c:5388
#2  0xa7de8ea5 in RADEONLeaveVT (scrnIndex=0, flags=0) at radeon_driver.c:5794
#3  0xa7c2fbe5 in XAALeaveVT (index=0, flags=0) at xaaInit.c:694
#4  0x080bb91a in xf86XVLeaveVT (index=0, flags=0) at xf86xv.c:1278
#5  0xa7eb510f in ?? () from /usr/lib/xorg/modules/extensions//libglx.so
#6  0x00000000 in ?? ()
```
The first SIGSEGV seems to be inside glibc, as I have to continue to get the Xorg one. Unfortunately I don't have glibc with USE=debug; if necessary I will recompile it, but I foresee major breakage if I do that, even without changing versions. Just in case, glibc is 2.6.1 with USE flags (glibc-omitfp nls -debug -glibc-compat20 -hardened -multilib -profile -selinux).
As I saw something about XAA I tried with EXA instead, but the backtrace is pretty much the same, just without the XAALeaveVT line:
```
(gdb) bt
#0  0x48eed2d0 in main_arena () from /lib/libc.so.6
#1  0xa7dadaff in RADEONRestore (pScrn=0x82186b8) at radeon_driver.c:5388
#2  0xa7dadea5 in RADEONLeaveVT (scrnIndex=0, flags=0) at radeon_driver.c:5794
#3  0x080bb91a in xf86XVLeaveVT (index=0, flags=0) at xf86xv.c:1278
#4  0xa7e7a10f in ?? () from /usr/lib/xorg/modules/extensions//libglx.so
#5  0x00000000 in ?? ()
```
Created attachment 11892 [details]
Xorg.0.log for ATI Radeon 7200 connected to HP A4576A 21" CRT and git radeon driver

(In reply to comment #13)
> Can you attach your xorg log? I suspect your monitor doesn't have a mode in
> the edid for 1400x1050. The edid's preferred mode is probably 1280x1024@85,
> that's why it's getting set to that by default. you can manually add the
> 1400x1050 mode:
> xrandr --newmode <1400x1050 modeline>
> xrandr --addmode VGA-0 <1400x1050 mode name>

Ok, I will try that. I'm simply surprised that a section like

```
Section "Screen"
    Identifier "Screen0"
    Device     "Videocard0"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth    24
        Modes    "1400x1050"
    EndSubSection
EndSection
```

doesn't work anymore. All this EDID/xrandr magic is certainly great, but when the user decides to overrule this autodetection stuff [following standard xorg.conf conventions], one would expect his choices to be honoured.

Concerning the crash backtrace when switching to a virtual console: I'm on a single-node network, and the Xdbg script trick devised on the FDO pages doesn't seem to work here, because the X server will not even start up, so it's impossible to switch to a virtual console in order to trigger the segmentation fault. The monitor simply powers off upon start of the X server. Any hint on how to proceed further? The Xdbg script is:

```
#!/bin/sh
#GDB
#XSERVER
ARGS=$*
PID=$$
test -z "$GDB" && GDB=gdb
test -z "$XSERVER" && XSERVER=/usr/bin/Xorg
cat > /tmp/.dbgfile.$PID << HERE
file $XSERVER
set args $ARGS
handle SIGUSR1 nostop
handle SIGUSR2 nostop
handle SIGPIPE nostop
run
module
bt
cont
quit
HERE
$GDB -silent < /tmp/.dbgfile.$PID &> /tmp/gdb_log.$PID
rm -f /tmp/.dbgfile.$PID
echo "Log written to: /tmp/gdb_log.$PID"
```

I fixed some more issues tonight. Let me know if the latest from ati git helps.

(In reply to comment #18)
> Ok, I will try that.
> I'm simply surprised that a section like
>
>     Section "Screen"
>         Identifier "Screen0"
>         Device     "Videocard0"
>         DefaultDepth 24
>         SubSection "Display"
>             Viewport 0 0
>             Depth    24
>             Modes    "1400x1050"
>         EndSubSection
>     EndSection
>
> doesn't work anymore. All this EDID/xrandr magic is certainly great, but when
> the user decides to overrule this autodetection stuff [following standard
> xorg.conf conventions], one would expect his choices to be honoured.

Such is the price of progress. In the RandR world, xorg.conf changes a bit, as the Screen section no longer maps to a single monitor. On a card like yours with a single CRTC and output it's easy to wonder why the mode listed there doesn't get added, but consider a card with a local LVDS panel, a DVI port, a VGA port, and a TV-out port. Which output should the mode be added to? All of them? What if your laptop panel only supports 1024x768? Also, when you say 1400x1050, which mode exactly do you mean: 1400x1050@60Hz? 1400x1050@85Hz? 72Hz? With RandR you can assign hardcoded/overridden monitor sections to each output, e.g.:

```
Section "Device"
    Identifier "My Radeon"
    Driver     "ati"
    Option     "Monitor-VGA-0" "My Monitor"
EndSection

Section "Monitor"
    Identifier "My Monitor"
    ...
EndSection
```

Within the Monitor section you can specify new modelines you want to use, as well as the orientation of the monitor in relation to the other monitors driven by the same card (in the case of dual-head cards). Unfortunately, I think there are still some issues to be worked out with this method in the xserver. See this page for more: http://www.intellinuxgraphics.org/dualhead.html

(In reply to comment #19)
> I fixed some more issues tonight. Let me know if the latest from ati git
> helps.

Most definitely: I have rebuilt the xorg-x11-drv-ati-6.7.194-2.fc8 driver package using a tarball produced from the current git tree, and now I'm actually able to switch between X and virtual consoles at will without a crash. Great work!
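For this particular setup, a per-output override along the lines suggested above might look like the following. This is a sketch, not from the bug: the ModeLine timing numbers are illustrative placeholders (generate real ones with `cvt 1400 1050 75` or `gtf 1400 1050 75` and paste them in), and the section/option names follow the RandR 1.2 xorg.conf conventions.

```
Section "Device"
    Identifier "My Radeon"
    Driver     "ati"
    Option     "Monitor-VGA-0" "My Monitor"
EndSection

Section "Monitor"
    Identifier "My Monitor"
    # Illustrative timings only -- replace with the output of
    # `cvt 1400 1050 75` for this monitor.
    ModeLine   "1400x1050" 155.85 1400 1496 1648 1896 1050 1053 1057 1096 -hsync +vsync
    Option     "PreferredMode" "1400x1050"
EndSection
```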
The issue thus seems to be solved; I now have to get familiar with the new Xrandr stuff, though.

(In reply to comment #21)
> Most definitely: I have rebuilt the xorg-x11-drv-ati-6.7.194-2.fc8 driver
> package using a tarball produced from the current git tree, and now I'm
> actually able to switch between X and virtual consoles at will without a
> crash. Great work!

Excellent! I'll go ahead and close this bug then.

> The issue thus seems to be solved; I now have to get familiar with the new
> Xrandr stuff, though.

There may be some issues there as well. Let me know if you run into anything; I need to review that stuff better myself. I think the xserver may need some fixes.