SuperTuxKart 0.7 (which is Irrlicht based) segfaults on initialization when using the Gallium libGL.so, but it works without any problems using the classic Mesa libGL. I guess it happens due to VBO bugs in r300g, which are not present in classic Mesa. I know it works with r600g based on reports from several people.

I am using a fresh airlied D-R-T (drm-radeon-testing) kernel with DDX/DRI/Mesa from master. I have a multilib system, but STK and 99% of the apps I use are 64-bit. The graphics card is a FireGL V5200/Radeon X1600 inside a T60p.

Logs etc. are here: http://carme.pld-linux.org/~evil/radeon/STK_r300g/
The original thread: http://phoronix.com/forums/showthread.php?33119-SuperTuxRacer-0.7-and-r300g-MemoryFault

And as Qaridarium stated, I also see a similar problem with warzone2100 (judging by the gdb backtrace related to VBO/vertices).

After some playing with environment variables I found that STK does not work with LIBGL_ALWAYS_SOFTWARE=1 (Gallium on softpipe), but works with LIBGL_ALWAYS_INDIRECT=1 (because of the fallback to classic Mesa). Not sure about warzone2100 with softpipe, because it segfaults after starting the campaign, and waiting for the menu on softpipe would take years.
Other games are hit by this bug too, for example warzone2100:

ass@ass:~$ warzone2100
r300: DRM version: 2.8.0, Name: ATI R420, ID: 0x4a49, GB: 3, Z: 1
r300: GART size: 253 MB, VRAM size: 256 MB
r300: AA compression: NO, Z compression: NO, HiZ: NO
41      ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory.
No function contains program counter for selected frame.
Saved dump file to '/tmp/warzone2100.gdmp-8y1Dac'
If you create a bugreport regarding this crash, please include this file.
Speicherzugriffsfehler (Speicherabzug geschrieben)   [German for "Segmentation fault (core dumped)"]

http://pastebin.com/k5xmQn0D
git bisect log

git bisect start
# good: [211725eccdda4a9b7959e1f458d253b32186ff6a] radeon: Remove setup of the old dri/ meta code, which is now unused.
git bisect good 211725eccdda4a9b7959e1f458d253b32186ff6a
# bad: [965ab5fed3c734e6205070e6cf40544a44b5dbf6] svga: Preserve src swizzles in submit_op2/3/4.
git bisect bad 965ab5fed3c734e6205070e6cf40544a44b5dbf6
# bad: [9e725b9123c41acf84410cb32d28f729b1e5c9e4] r300g: fix texture border color for float formats
git bisect bad 9e725b9123c41acf84410cb32d28f729b1e5c9e4
# bad: [a476ca1fd1b4e76e31c9babfd7fb2a54a09f21d3] st/mesa: Use blend equation and function of first render target for all render targets if ARB_draw_buffers_blend is not supported
git bisect bad a476ca1fd1b4e76e31c9babfd7fb2a54a09f21d3
# good: [3d5ac32f3bf95ceb9f3f03d6dedea5445ed35b18] r300g: consolidate emission of common draw regs
git bisect good 3d5ac32f3bf95ceb9f3f03d6dedea5445ed35b18
# bad: [d123959ff75b2a83e02f4594f3e072c31c7fd8d9] r300g: Remove redundant initialization.
git bisect bad d123959ff75b2a83e02f4594f3e072c31c7fd8d9
# bad: [a0c293ec117c8a6f471061076ba87e245759e0f6] r300g: put indices in CS if there's just a few of them and are in user memory
git bisect bad a0c293ec117c8a6f471061076ba87e245759e0f6
# good: [476cec37d615df7c7329ef74d4a7ea7200b2d8fb] r300g: do not create a user buffer struct for misaligned ushort indices fallback
git bisect good 476cec37d615df7c7329ef74d4a7ea7200b2d8fb
# bad: [437583ea637ab402a06ae6683af6df35d52512d4] r300g: cleanup the draw functions
git bisect bad 437583ea637ab402a06ae6683af6df35d52512d4
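For anyone unfamiliar with how a log like the one above is produced: it is the standard git bisect workflow. The following is a self-contained sketch on a throwaway repository (the repo, file name, and commit messages are made up for illustration; the real bisect was of course run on the Mesa tree with manual good/bad marking):

```shell
# Throwaway-repo sketch of the bisect workflow behind the log above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
# five commits; pretend the regression landed in commit "c4"
for i in 1 2 3 4 5; do
  echo "$i" > n
  git add n
  git commit -qm "c$i"
done
# test script: exit 0 (good) while the behaviour is still correct
cat > check.sh <<'EOF'
#!/bin/sh
test "$(cat n)" -lt 4
EOF
chmod +x check.sh
# mark HEAD bad and the root commit good, then let bisect drive check.sh
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
first_bad=$(git bisect run ./check.sh | grep "is the first bad commit")
echo "$first_bad"
```

`git bisect log`, as pasted above, then replays the sequence of good/bad decisions so others can reproduce the result.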
I can't reproduce this. Please try:

git clean -fdx

and rebuild Mesa. Alternatively, you may try to set either of these environment variables and see if it helps:

RADEON_THREAD=0
RADEON_DEBUG=noimmd
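For reference, these are set in the environment when launching the affected game; a minimal sketch (supertuxkart is just an example binary name here):

```shell
# Sketch: setting the suggested driver variables for a test run.
export RADEON_THREAD=0        # disable the driver's threaded path
export RADEON_DEBUG=noimmd    # disable immediate-mode vertex submission
echo "RADEON_THREAD=$RADEON_THREAD RADEON_DEBUG=$RADEON_DEBUG"
# supertuxkart                # then launch the game as usual
```

Setting them one at a time narrows down whether the threading or the immediate-mode path is involved.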
(In reply to comment #3)
> I can't reproduce this.

airlied also can't, on nearly identical hardware to mine :/

> Please try:
>
> git clean -fdx

Didn't help.

> and rebuild Mesa.

I know that ;)

> Alternatively, you may try and set either of these environment variables and
> see if it helps:
>
> RADEON_THREAD=0

This doesn't change anything for these two games.

> RADEON_DEBUG=noimmd

This fixes SuperTuxKart 0.7 and Warzone2100.

doom3/ut2k4 are still broken at the same commits, but each differently, so I will probably open new bug reports.

And what does noimmd do?
noimmd disables the "immediate mode" style of rendering, in which vertices are written directly into the command stream for the GPU instead of being stored in vertex buffers. There is something fishy here, because I can't reproduce it for whatever reason, so something is wrong either with my machine or yours. Bug 34336 is likely a duplicate of this one.
I wonder whether the reason is that my system is 32-bit and yours is 64-bit.
(In reply to comment #6)
> I wonder whether the reason is that my system is 32-bit and yours is 64-bit.

I have a multilib system, so it wasn't a problem to rebuild it on the builders and transfer the 32-bit binary... and... IT WORKS :)

To clarify, I have nearly identical environments (multilib) on the ThinkPad, and similar ones (chroots) on the PLD builders.

And now I am sure that the ut2k4/doom3 problems aren't related to this bug, because doom3 is 32-bit only, and both ut2k4 binaries hang in the menu or segfault on start (depending on the variables).
*** Bug 34501 has been marked as a duplicate of this bug. ***
*** Bug 34336 has been marked as a duplicate of this bug. ***
This is possibly a different bug (or also a gdb one), but when I run supertuxkart under gdb I get thousands of the following messages:

[New Thread 0xb6afdb70 (LWP 9974)]
[Thread 0xb6afdb70 (LWP 9974) exited]
[New Thread 0xb6afdb70 (LWP 9975)]
[Thread 0xb6afdb70 (LWP 9975) exited]

and performance is less than half (fps are shown when pressing F12). Running without gdb, or with RADEON_THREAD=0 under gdb, fixes that.
I did tests with the kwin crashes, and they start with 2a904fd ("st/mesa: set vertex arrays state only when necessary"). My system is 64-bit too, as indicated in the original bug report.
Hooray, I've nailed it!

Look at setup_interleaved_attribs in st_draw.c. There's that little snippet that computes the minimum of array[...]->Ptr, and... it's wrong! ->Ptr can very well be NULL, so when there are two arrays, one with offset 0 (and thus a NULL ->Ptr) and the other with a non-zero offset, the non-zero value is taken as the minimum, which leads to a negative velements[attr].src_offset being assigned later.

The trick is: that negative value is cast to unsigned, so it ends up being a very large number. Later (in r300g), the src_offset is added to some pointer. On 32-bit machines, the pointer overflows and the overall result is as if a subtraction had been performed, yielding the correct result. On 64-bit machines the pointer gets messed up instead, resulting in a segmentation fault.

Changing the minimum-computing code to

   /* Find the lowest address. */
   const GLubyte *low_addr = NULL;
   if (vpv->num_inputs) {
      low_addr = arrays[vp->index_to_input[0]]->Ptr;
      for (attr = 1; attr < vpv->num_inputs; attr++) {
         const GLubyte *start = arrays[vp->index_to_input[attr]]->Ptr;
         low_addr = MIN2(low_addr, start);
      }
   }

fixes the segfaults with blender. It is also beneficial to add

   assert(velements[attr].src_offset >= 0 && velements[attr].src_offset < 2000000000);

in st_draw.c:369 (that exposes the bug on 32-bit machines). The (trivial) test code will be attached.
Created attachment 43668 [details]
(trivial) test case
Created attachment 43675 [details] [review]
Fix proposed by Wiktor Janas

This patch finally fixes this bug :)
The patch landed in master. Thanks for figuring out this bug. Hell of a job.