| Summary: | Segfault in Xorg after RADEON(0): Failed to make 28944x66x32bpp GBM bo | | |
|---|---|---|---|
| Product: | xorg | Reporter: | Magnus Holmgren <holmgren> |
| Component: | Server/Acceleration/glamor | Assignee: | Xorg Project Team <xorg-team> |
| Status: | RESOLVED MOVED | QA Contact: | Xorg Project Team <xorg-team> |
| Severity: | normal | Priority: | medium |
| Version: | 7.7 (2012.06) | CC: | alpha0x89, darkdefende, sa |
| Hardware: | x86-64 (AMD64) | OS: | Linux (All) |

Created attachment 124336 [details]
Xorg log

When starting Kerbal Space Program 1.1.2, which is built on Unity3D 5, on Debian testing, KSP/Unity quite often becomes confused and resizes the window to 28944x66 pixels or similar. If I don't kill it, X typically crashes. I don't know whose fault it is that Unity becomes confused, but I'd prefer that KSP crash instead of X. For anyone familiar with Unity3D and/or KSP: KSP is launched with -force-glcore. Without -force-glcore this problem doesn't occur, but many UI elements become invisible, making the game unplayable.
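
For context on the error in the summary: when glamor needs to export a pixmap (for DRI2/DRI3 buffer sharing), it allocates it as a GBM buffer object, and a 28944-pixel-wide allocation far exceeds the maximum texture size of typical Radeon hardware, so the allocation fails. Below is a minimal sketch of such a failing allocation with libgbm; the render-node path is illustrative, the dimensions are taken from this report, and this is not the server's actual code:

    /* Attempt an oversized GBM buffer-object allocation, mirroring the
     * dimensions from the "Failed to make 28944x66x32bpp GBM bo" error.
     * On most GPUs this exceeds the maximum texture size and returns NULL;
     * the bug reported here is that Xorg crashes instead of handling
     * that NULL gracefully. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <gbm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);  /* illustrative node */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct gbm_device *gbm = gbm_create_device(fd);
        if (!gbm) {
            fprintf(stderr, "gbm_create_device failed\n");
            return 1;
        }

        struct gbm_bo *bo = gbm_bo_create(gbm, 28944, 66,
                                          GBM_FORMAT_ARGB8888,
                                          GBM_BO_USE_RENDERING);
        if (!bo)
            fprintf(stderr, "Failed to make 28944x66 GBM bo\n");
        else
            gbm_bo_destroy(bo);

        gbm_device_destroy(gbm);
        return 0;
    }
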
Created attachment 124337 [details]
Log from Unity3D, in case it contains any clues.

Can you get a backtrace of the crash with gdb, with debugging symbols available for /usr/lib/xorg/Xorg and /usr/lib/xorg/modules/libglamoregl.so? See https://www.x.org/wiki/Development/Documentation/ServerDebugging/ for some background information.

Created attachment 124360 [details]
Full backtrace from other crash

Created attachment 124361 [details]
Corresponding Xorg log from other crash

Note that when X didn't crash, there were lots of "DRI2SwapBuffers: drawable has no back or front?" messages in the log.

Like so? I should have installed the core debugging symbols already, but apparently you don't get much info in the log anyway. Not sure if this is really Radeon-specific, but I thought I had crashes under similar circumstances before, where radeon_drv.so was mentioned in the backtrace.

Does it work better with DRI3 enabled, by any chance?
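
For reference, the radeon driver selects the DRI level via its "DRI" option in the Device section of xorg.conf, as in the sketch below (the Identifier string is illustrative, not taken from this report):

    Section "Device"
        Identifier "Radeon"          # illustrative name
        Driver     "radeon"
        Option     "DRI" "3"         # request DRI3; "2" selects DRI2
    EndSection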

(In reply to Michel Dänzer from comment #6)
> Does it work better with DRI3 enabled, by any chance?

Well... with Option "DRI" "3", KSP.x86_64 -force-glcore _almost_ always terminates right away with this in the log:

[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
KSP.x86_64: ../../src/xcb_io.c:274: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.

Thread 1 (Thread 0x7f9f76ec4740 (LWP 5471)):
#0  0x00007f9f768def8d in read () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f9f71c358bd in ?? () from /home/magnus/.local/share/Steam/SteamApps/common/Kerbal Space Program/KSP_Data/Mono/x86_64/libmono.so
#2  <signal handler called>
#3  0x00007f9f750f8458 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55
#4  0x00007f9f750f98da in __GI_abort () at abort.c:89
#5  0x00007f9f750f1387 in __assert_fail_base (fmt=<optimized out>, assertion=assertion@entry=0x7f9f761c8f20 "!xcb_xlib_threads_sequence_lost", file=file@entry=0x7f9f761c8d6b "../../src/xcb_io.c", line=line@entry=274, function=function@entry=0x7f9f761c9226 "poll_for_event") at assert.c:92
#6  0x00007f9f750f1432 in __GI___assert_fail (assertion=0x7f9f761c8f20 "!xcb_xlib_threads_sequence_lost", file=0x7f9f761c8d6b "../../src/xcb_io.c", line=274, function=0x7f9f761c9226 "poll_for_event") at assert.c:101
#7  0x00007f9f76155a59 in ?? () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#8  0x00007f9f76155b0b in ?? () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#9  0x00007f9f76155e1d in _XEventsQueued () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#10 0x00007f9f76158b95 in _XGetRequest () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#11 0x00007f9f76158caf in ?? () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#12 0x00007f9f76158481 in _XError () from /usr/lib/x86_64-linux-gnu/libX11.so.6
#13 0x00007f9f76472702 in ?? () from /usr/lib/x86_64-linux-gnu/libGL.so.1
#14 0x00007f9f7646e9f4 in ?? () from /usr/lib/x86_64-linux-gnu/libGL.so.1

I checked again that the above doesn't happen with Option "DRI" "2". It also does not happen with -force-glcore40 or -force-glcore41 instead of -force-glcore, with or without DRI3. And although the log says "Creating OpenGL 4.1 graphics device" in all cases, -force-glcore40 seems to work more often, maybe even most of the time.

(In reply to Magnus Holmgren from comment #7)
> [xcb] Unknown sequence number while processing queue
> [xcb] Most likely this is a multi-threaded client and XInitThreads has not
> been called

This looks like a KSP bug.
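
The xcb abort quoted above is a client-side issue: a multi-threaded Xlib client must call XInitThreads() before any other Xlib function, or the sequence-number bookkeeping that xcb_io.c asserts on can be corrupted. A minimal sketch of the required pattern follows (a hypothetical standalone client, not code from KSP or Unity):

    /* XInitThreads() must be the very first Xlib call in a program that
     * uses Xlib from more than one thread; the assertion in xcb_io.c
     * typically fires when this call is missing. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        if (!XInitThreads()) {
            fprintf(stderr, "Xlib: no thread support\n");
            return 1;
        }

        Display *dpy = XOpenDisplay(NULL);  /* now safe to use from any thread */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        /* ... spawn render threads, create GLX contexts, etc. ... */

        XCloseDisplay(dpy);
        return 0;
    }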

(In reply to Michel Dänzer from comment #8)
> (In reply to Magnus Holmgren from comment #7)
> > [xcb] Unknown sequence number while processing queue
> > [xcb] Most likely this is a multi-threaded client and XInitThreads has not
> > been called
>
> This looks like a KSP bug.

I think it's a bug in Unity (the game engine they are using). I have three other Unity games with the same problem (Darkwood, Satellite Reign, and Wasteland 2: Director's Cut). However, Satellite Reign has a Linux readme that states the following:

> try command line option -force-opengl to force compatibility with older
> drivers, or -force-glcore for latest.

With -force-opengl, Satellite Reign starts normally, but with -force-glcore it exits (crashes) with the xcb error. Satellite Reign also shows a popup window where you can choose between the new Unity3D OpenGL core renderer and the old one, which they describe as being for older GPUs and "safe". I've had this problem for quite a while now (over half a year with Darkwood, as it was the first one to upgrade to the new Unity3D version). Judging by the Satellite Reign readme, the new OpenGL core renderer probably works on Nvidia; I don't know if it works on AMDGPU-PRO. I'm using a 290X with the radeon driver from git, by the way, so I would expect the OpenGL core renderer to work there. Michel, maybe you can try to contact the Unity3D guys and help them fix this problem? I'm guessing that if an actual developer of the Linux drivers contacted them, it would be fixed sooner rather than later.

I forgot to mention that the -force-glcore41 workaround works for me too.

Well, attempting to make a 28944-pixel-wide pixmap is obviously a bug in the application or Unity3D, but I don't think it should be able to crash X anyway.

I get the same kind of error on a current openSUSE Tumbleweed:

(EE) RADEON(0): Failed to make 9893x308x32bpp GBM bo

with backtrace messages about libglamoregl.so as in the OP's X.Org log. It happens quite often, but I'm not sure which application causes the error.

Created attachment 139687 [details]
backtrace with coredumpctl gdb for "Failed to make ... GBM bo"

(In reply to Theo from comment #12)
> I get the same kind of error on a current openSUSE Tumbleweed
>
> (EE) RADEON(0): Failed to make 9893x308x32bpp GBM bo

It happened again. I attached a backtrace from 'coredumpctl gdb /usr/bin/Xorg' in case it is of any use. See also attachment 139545 [details] (Xorg log) for information about my system.

I don't suppose https://cgit.freedesktop.org/xorg/driver/xf86-video-ati/commit/?id=3dcfce8d0f495d09d7836caf98ef30d625b78a13 helps, by any chance? If not, can you attach gdb to the Xorg process, set a breakpoint in glamor_make_pixmap_exportable where it prints the "Failed to make" message, then, when the breakpoint triggers, run "bt full" and attach its output here?

Created attachment 139820 [details]
backtrace from within glamor_make_pixmap_exportable

(In reply to Michel Dänzer from comment #14)
> I don't suppose
> https://cgit.freedesktop.org/xorg/driver/xf86-video-ati/commit/?id=3dcfce8d0f495d09d7836caf98ef30d625b78a13
> helps by any chance?

It doesn't. "Failed to make ... GBM bo" crashes still happen with this patch.

> If not, can you attach gdb to the Xorg process, set a breakpoint in
> glamor_make_pixmap_exportable where it prints the "Failed to make" message,
> then when the breakpoint triggers, run "bt full" and attach its output here?

I followed [1] to set a breakpoint at [2] with these commands:

    set confirm off
    set breakpoint pending on
    file $XSERVER
    set args $ARGS
    # don't stop on signals the X server handles itself
    handle SIGUSR1 nostop
    handle SIGUSR2 nostop
    handle SIGPIPE nostop
    # glamor_egl.c:372 is where the "Failed to make ... GBM bo" message is printed
    break glamor_egl.c:372
    run
    bt full
    cont
    bt full
    cont
    detach
    quit

and then waited for the next crash. The resulting backtrace is attached.

[1] https://www.x.org/wiki/Development/Documentation/ServerDebugging/#index2h3
[2] https://cgit.freedesktop.org/xorg/xserver/tree/glamor/glamor_egl.c?h=server-1.19-branch#n372

Any chance you can try xserver 1.19? I've been unable to trigger a crash by artificially inducing the failure that produces the "Failed to make ... GBM bo" message. Anyway, this looks like a glamor issue so far.

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/xserver/issues/86.