Tried to start the supertuxkart 0.8.1 game with current 10.2-rc2, on radeonsi and also swrast... and the X server immediately segfaults :). The bug is present in 10.2-rc2 and current 10.3-devel; bisecting says the bug first appears with this commit: tgsi: add tgsi_exec support for new bit manipulation opcodes http://cgit.freedesktop.org/mesa/mes...eac93665bb859c
I'm guessing you meant http://cgit.freedesktop.org/mesa/mesa/commit/?id=1db993f2fe1c2b43a9658efba6eac93665bb859c

Can you double-check the results of your bisect? This commit adds support for some new opcodes that only get used when ARB_gpu_shader5 is enabled (which it isn't for any drivers right now). [And it works fine with piglit tests that exercise the behaviour.]
Created attachment 98926 [details]
log

Yes, I double-checked it: 1db993f2fe1c2b43a9658efba6eac93665bb859c introduces this bug, but ab4927f3e04918fd8a53c2d91be4dfc65fe9782d works :).

Other games work, but supertuxkart 0.8.1 crashes the X server with it... The EE lines from Xorg.0.log are attached.
(In reply to comment #2)
> Created attachment 98926 [details]
> log
>
> Yes, I double-checked it: 1db993f2fe1c2b43a9658efba6eac93665bb859c
> introduces this bug, but ab4927f3e04918fd8a53c2d91be4dfc65fe9782d works :).
>
> Other games work, but supertuxkart 0.8.1 crashes the X server with it... The
> EE lines from Xorg.0.log are attached.

That's... supremely odd. The crash is indeed somewhere in Mesa. Would you mind building it with --enable-debug and not stripping the resulting .so? That way we might be able to get more information about exactly where it's dying.

I don't see how this commit can possibly matter, so I'm actually suspecting that there's some unrelated memory corruption going on. But I'm more than happy to be proven wrong :)
Built it with --enable-debug, but I get the same message in Xorg.0.log. Sounds odd, but who knows; it's reproducible here on an Athlon 5350 running 32-bit Debian Sid. Maybe someone else can reproduce it too. This is the game: http://sourceforge.net/projects/supertuxkart/files/SuperTuxKart/0.8.1/supertuxkart-0.8.1-linux-glibc2.7-i386.tar.bz2/download
Just tried the 64-bit variant and that works, so I cannot reproduce it on 64-bit Debian using the 64-bit supertuxkart binary... seems like some kind of 32-bit-only issue :).
I can reproduce this issue on Evergreen. The information provided by "smoki" seems correct.

The 64-bit game provided by Fedora 19 works fine with current master. But the 32-bit binary provided by the sourceforge package works with git-ab4927f3e049 and no longer works with git-1db993f2fe. With the latter, my screen blinks with many white square dots, though I can still hear the game's sound.

Here is the log trace when I run the game with this last commit:

$ MESA_DEBUG=1 LIBGL_DEBUG=1 ./run_game.sh
bin/supertuxkart: /lib/liblber-2.4.so.2: no version information available (required by ./bin/libcurl-gnutls.so.4)
bin/supertuxkart: /lib/libldap_r-2.4.so.2: no version information available (required by ./bin/libcurl-gnutls.so.4)
Irrlicht Engine version 1.8.0
Linux 3.13.9-100.fc19.x86_64 #1 SMP Fri Apr 4 00:51:59 UTC 2014 x86_64
[info   ] FileManager: Data files will be fetched from: ''
[info   ] FileManager: User directory is '/home/benjamin/.config/supertuxkart/'.
[info   ] FileManager: Addons files will be stored in '/home/benjamin/.local/share/supertuxkart/addons/'.
[info   ] FileManager: Screenshots will be stored in '/home/benjamin/.cache/supertuxkart/screenshots/'.
[debug  ] translation: Env var LANGUAGE = 'fr_FR.utf8'.
[debug  ] translation: Language 'French (France)'.
Adding language fallback fr
libGL error: dlopen /usr/lib/dri/r600_dri.so failed (./bin/libgcc_s.so.1: version `GCC_4.7.0' not found (required by /usr/lib/dri/r600_dri.so))
libGL error: unable to load driver: r600_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: r600
libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch)
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
[warn   ] irr_driver: Too old GPU; using the fixed pipeline.
[...]
Hmm... in fact Steam games don't work anymore either, e.g. Left 4 Dead 2 or Team Fortress 2. Judging from the log, the problem is the same (unable to load the driver).
(In reply to comment #6)
> I can reproduce this issue on Evergreen.
> The information provided by "smoki" seems correct.
>
> The 64-bit game provided by Fedora 19 works fine with current master.
> But the 32-bit binary provided by the sourceforge package works with
> git-ab4927f3e049 but not anymore with git-1db993f2fe.
>
> libGL error: dlopen /usr/lib/dri/r600_dri.so failed (./bin/libgcc_s.so.1:
> version `GCC_4.7.0' not found (required by /usr/lib/dri/r600_dri.so))

This sounds like the "I'm building with a gcc that's not supported by the Steam runtime" issue, which is probably separate from the supertuxkart issue... although maybe not, if it also involves some sort of runtime thing. Try it with STEAM_RUNTIME=0, or move that libgcc_s.so.1 file out of the way (I hear that works as a workaround; I haven't actually run into these issues myself).

Although how any of this causes _X_ to crash rather than the client app... or how my seemingly innocuous commit causes any of this... is definitely beyond me =/
Yes, it's the same issue. Delete 'libgcc_s.so.1' from the bin folder in supertuxkart to make it use the system one. For me it also reliably crashes X if I use the 'libgcc_s.so.1' supertuxkart provides.

Btw, I use this script to start Steam so that it uses the system libs if they exist and falls back to the steam-runtime if they don't (you have to adjust the paths, but the general idea should be obvious). Note that LD_LIBRARY_PATH and PATH entries must be separated with ':', not ';':

#!/bin/sh
mySTEAM_ROOT="/multimedia/Games/Steam"
mySTEAM_PATH="$mySTEAM_ROOT/ubuntu12_32"
mySTEAM_RUNTIME="$mySTEAM_PATH/steam-runtime"

export STEAM_RUNTIME=0
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/lib:/lib64:$mySTEAM_PATH/override:$mySTEAM_PATH:$mySTEAM_RUNTIME/i386/lib/i386-linux-gnu:$mySTEAM_RUNTIME/i386/lib:$mySTEAM_RUNTIME/i386/usr/lib/i386-linux-gnu:$mySTEAM_RUNTIME/i386/usr/lib:$mySTEAM_RUNTIME/amd64/lib/x86_64-linux-gnu:$mySTEAM_RUNTIME/amd64/lib:$mySTEAM_RUNTIME/amd64/usr/lib/x86_64-linux-gnu:$mySTEAM_RUNTIME/amd64/usr/lib"
export PATH="$PATH:$mySTEAM_RUNTIME/amd64/usr/bin:$mySTEAM_RUNTIME/amd64/usr/sbin"

$mySTEAM_ROOT/steam.sh "$@"
Ilia, indeed it is odd, but it does cause the X server to crash :). I investigated this further, and both the 64-bit and 32-bit versions shipped in Debian do not have this bug ;). It happens with the 32-bit 0.8.1 game downloaded from sourceforge :D. That game package ships some libraries, and one of them, libgcc_s.so.1, together with this commit triggers the bug ;). How your change correlates with that shipped libgcc_s.so.1 is also beyond me :D.
(In reply to comment #6)
> libGL error: dlopen /usr/lib/dri/r600_dri.so failed (./bin/libgcc_s.so.1:
> version `GCC_4.7.0' not found (required by /usr/lib/dri/r600_dri.so))
> libGL error: unable to load driver: r600_dri.so

r600_dri.so fails to load because of the game's copy of libgcc_s.so.1. Remove that copy to avoid this.

> libGL error: dlopen /usr/lib/dri/swrast_dri.so failed
> (/usr/lib/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch)
> libGL error: unable to load driver: swrast_dri.so

swrast_dri.so fails to load as well, because apparently it was built without --enable-glx-tls. The result is probably GLX indirect rendering, and the crash is probably related to that. We'll need to see a backtrace from gdb, but note that GLX indirect rendering is not expected to be stable with glamor at this point unless glamor is from xserver Git. It may just be coincidence that it happens to work without Ilia's commit but crashes with it.
(In reply to comment #11)
> (In reply to comment #6)
> > libGL error: dlopen /usr/lib/dri/r600_dri.so failed (./bin/libgcc_s.so.1:
> > version `GCC_4.7.0' not found (required by /usr/lib/dri/r600_dri.so))
> > libGL error: unable to load driver: r600_dri.so
>
> r600_dri.so fails to load because of the game's copy of libgcc_s.so.1.
> Remove that to avoid this.

Indeed, that's a good workaround. But why did this commit expose the issue? I guess the Steam libgcc_s.so was already there before.
I tried to exclude some cases, and this clearly happens only when the micro_imsb case is there :). So I guess the math logic in util_last_bit_signed is wrong and causes it:

http://cgit.freedesktop.org/mesa/mesa/commit/?id=ab4927f3e04918fd8a53c2d91be4dfc65fe9782d
(In reply to comment #13)
> I tried to exclude some cases and this clearly happens only when micro_imsb
> case is there :). So i guess this math logic util_last_bit_signed is wrong
> and causes it:
>
> http://cgit.freedesktop.org/mesa/mesa/commit/
> ?id=ab4927f3e04918fd8a53c2d91be4dfc65fe9782d

Put a print in that code; it should never get called. Nothing can even generate the MSB TGSI opcode without forcing ARB_gpu_shader5 to be enabled. If it's getting called, that means there is memory corruption going on.

In case it's not clear, my commit is not at fault. It just moves some code around and makes the memory corruption caused by the mismatched libgcc's go from unnoticed to noticed.
I think I understand... OK then, if this cannot be fixed here, users must remove all those shipped gcc libs. I will mark this RESOLVED/NOTABUG.
Program received signal SIGSEGV, Segmentation fault.
_mesa_GenProgramPipelines (n=<optimized out>, pipelines=<optimized out>) at main/pipelineobj.c:531
531        pipelines[i] = first + i;
#0  _mesa_GenProgramPipelines (n=<optimized out>, pipelines=<optimized out>) at main/pipelineobj.c:531
        obj = 0x1cd3988
        name = 1
        ctx = 0x1ef2950
        first = <optimized out>
        i = <optimized out>
#1  0x00007f7382139b2e in ?? () from /usr/lib64/xorg/modules/extensions/libglx.so
No symbol table info available.
#2  0x00007f738213bf68 in ?? () from /usr/lib64/xorg/modules/extensions/libglx.so
No symbol table info available.
#3  0x000000000043cc3e in ?? ()
No symbol table info available.
#4  0x000000000042c1ba in ?? ()
No symbol table info available.
#5  0x00007f73859adbe5 in __libc_start_main () from /lib64/libc.so.6
No symbol table info available.
#6  0x000000000042c501 in _start ()
No symbol table info available.

This is what I get when X crashes.
A little more detail (btw, this is with r600g, no glamor involved). So while the original problem is that neither r600 nor swrast can be loaded, this still shouldn't crash X:

Program received signal SIGSEGV, Segmentation fault.
0x00007f2e4c6c87a5 in _mesa_GenProgramPipelines (n=3553, pipelines=0x7f2e4d8a2460 <Render_dispatch_tree>) at main/pipelineobj.c:531
531        pipelines[i] = first + i;
#0  0x00007f2e4c6c87a5 in _mesa_GenProgramPipelines (n=3553, pipelines=0x7f2e4d8a2460 <Render_dispatch_tree>) at main/pipelineobj.c:531
        obj = 0x20d9f88
        name = 1
        ctx = 0x21dc3f0
        first = 1
        i = 0
#1  0x00007f2e4d893b2e in __glXDisp_Render (cl=<optimized out>, pc=<optimized out>) at glxcmds.c:2034
        entry = {bytes = 8, varsize = 0x0}
        extra = <optimized out>
        proc = <optimized out>
        err = <optimized out>
        req = <optimized out>
        client = 0x1992070
        left = <optimized out>
        cmdlen = 8
        error = 0
        commandsDone = 1
        hdr = <optimized out>
        glxc = 0x2018d40
        sw = <optimized out>
#2  0x00007f2e4d895f68 in __glXDispatch (client=<optimized out>) at glxext.c:581
        rendering = 1 '\001'
        stuff = 0x223f370
        opcode = 1 '\001'
        proc = <optimized out>
        retval = <optimized out>
#3  0x000000000043cc3e in ?? ()
No symbol table info available.
#4  0x000000000042c1ba in ?? ()
No symbol table info available.
#5  0x00007f2e5111ebe5 in __libc_start_main () from /lib64/libc.so.6
No symbol table info available.
#6  0x000000000042c501 in _start ()
No symbol table info available.
(In reply to comment #17)
> #0  0x00007f2e4c6c87a5 in _mesa_GenProgramPipelines (n=3553,
>     pipelines=0x7f2e4d8a2460 <Render_dispatch_tree>) at main/pipelineobj.c:531
>         obj = 0x20d9f88
>         name = 1
>         ctx = 0x21dc3f0
>         first = 1
>         i = 0
> #1  0x00007f2e4d893b2e in __glXDisp_Render (cl=<optimized out>,

I doubt this really meant to call GenProgramPipelines, so it looks like a GLX indirect GL dispatch issue. Reassigning, but I wouldn't hold my breath for that sort of thing getting fixed. :)

FWIW, with current xserver Git you can pass -iglx to Xorg to prevent GLX indirect rendering altogether.
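To spell out that workaround: the -iglx command-line flag is the one quoted in the comment above; the xorg.conf spelling is my assumption based on later xserver releases, so check man Xserver / man xorg.conf for your version before relying on it:

```shell
# Command-line flag (xserver Git, per the comment above):
#   Xorg :0 -iglx
#
# Assumed xorg.conf equivalent on later servers:
#   Section "ServerFlags"
#       Option "IndirectGLX" "off"
#   EndSection
```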
*** Bug 79325 has been marked as a duplicate of this bug. ***
Erm, it's an easily reproducible crash bug, yet the status claims it's not a bug and it's not getting fixed? Please explain.
I forgot about this :). Whatever happens there, just to inform you... I can't reproduce this bug anymore using xserver 1.16-rc3 (with glamor from xserver, of course) :). I didn't remove any shipped libraries, and xserver 1.16 does not crash.
> So i did't remove any shipped libraries and xserver 1.16 does not crash.

But after starting supertuxkart and clicking something, I get:

X Error of failed request:  GLXBadRenderRequest
  Major opcode of failed request:  156 (GLX)
  Minor opcode of failed request:  1 (X_GLXRender)
  Serial number of failed request:  3276
  Current serial number in output stream:  3277

At least it does not crash the X server, which is better ;).