| Summary: | [r600g] crash when using xeglthreads | | |
|---|---|---|---|
| Product: | Mesa | Reporter: | Kevin DeKorte <kdekorte> |
| Component: | Drivers/Gallium/r600 | Assignee: | Default DRI bug account <dri-devel> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | normal | | |
| Priority: | medium | | |
| Version: | git | | |
| Hardware: | Other | | |
| OS: | All | | |
**Description**

**Kevin DeKorte** 2010-11-04 09:37:34 UTC
Seems to also happen with this command: `./glthreads -n 3 -t`

```
glthreads: No explict locking.
glthreads: Single display connection.
XInitThreads() returned 1 (success)
glthreads: creating windows
glthreads: creating threads
glthreads: Created thread 0x7f83360d3700
glthreads: Created thread 0x7f83358d2700
glthreads: Created thread 0x7f8327fff700
glthreads: 1: GL_RENDERER = Gallium 0.4 on RV635
glthreads: 0: GL_RENDERER = Gallium 0.4 on RV635
glthreads: 2: GL_RENDERER = Gallium 0.4 on RV635
glthreads: r600_priv.h:184: radeon_bo_unmap: Assertion `bo->map_count >= 0' failed.
```

With glthreads you need to press the arrow keys to make it happen.

Backtrace as requested by Tilman; run with `-n 6 -t -l`:

```
Starting program: /home/kdekorte/git/mesa/progs/egl/xeglthreads -n 6 -t -l
[Thread debugging using libthread_db enabled]
xeglthreads: Using explicit locks around Xlib calls.
xeglthreads: Single display connection.
xeglthreads: creating windows
xeglthreads: creating threads
[New Thread 0x7ffff5ae0700 (LWP 29394)]
xeglthreads: Created thread 0x7ffff5ae0700
[New Thread 0x7ffff52df700 (LWP 29395)]
xeglthreads: Created thread 0x7ffff52df700
[New Thread 0x7ffff4ade700 (LWP 29396)]
xeglthreads: Created thread 0x7ffff4ade700
[New Thread 0x7fffeffff700 (LWP 29397)]
xeglthreads: Created thread 0x7fffeffff700
[New Thread 0x7fffef7fe700 (LWP 29398)]
xeglthreads: Created thread 0x7fffef7fe700
[New Thread 0x7fffeeffd700 (LWP 29399)]
xeglthreads: Created thread 0x7fffeeffd700
xeglthreads: 0: GL_RENDERER = Gallium 0.4 on RV635
xeglthreads: 4: GL_RENDERER = Gallium 0.4 on RV635
xeglthreads: 3: GL_RENDERER = Gallium 0.4 on RV635
xeglthreads: 2: GL_RENDERER = Gallium 0.4 on RV635
xeglthreads: 1: GL_RENDERER = Gallium 0.4 on RV635
xeglthreads: 5: GL_RENDERER = Gallium 0.4 on RV635
xeglthreads: r600_priv.h:184: radeon_bo_unmap: Assertion `bo->map_count >= 0' failed.

Program received signal SIGABRT, Aborted.
```
```
[Switching to Thread 0x7fffef7fe700 (LWP 29398)]
0x0000003f0e634065 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install expat-2.0.1-10.fc13.x86_64 glibc-2.12.90-18.x86_64 libX11-1.3.4-3.fc14.x86_64 libXau-1.0.6-1.fc14.x86_64 libXdamage-1.1.3-1.fc14.x86_64 libXext-1.1.2-2.fc14.x86_64 libXfixes-4.0.5-1.fc14.x86_64 libXxf86vm-1.1.0-1.fc13.x86_64 libdrm-2.4.23-0.1.20101019.fc14.x86_64 libgcc-4.5.1-4.fc14.x86_64 libstdc++-4.5.1-4.fc14.x86_64 libtalloc-2.0.1-1.fc13.x86_64 libv4l-0.8.1-1.fc14.x86_64 libxcb-1.7-1.fc14.x86_64
(gdb) bt
#0  0x0000003f0e634065 in raise () from /lib64/libc.so.6
#1  0x0000003f0e635a16 in abort () from /lib64/libc.so.6
#2  0x0000003f0e62c8a5 in __assert_fail () from /lib64/libc.so.6
#3  0x00007ffff76e4653 in radeon_bo_unmap (_mgr=<value optimized out>) at r600_priv.h:184
#4  radeon_bo_pbmgr_flush_maps (_mgr=<value optimized out>) at radeon_bo_pb.c:264
#5  0x00007ffff7730b91 in r600_context_flush (ctx=0x1bc6f30) at r600_hw_context.c:1085
#6  0x00007ffff6f96468 in st_context_flush (stctxi=<value optimized out>, flags=9, fence=<value optimized out>) at state_tracker/st_manager.c:508
#7  0x00007ffff7968971 in egl_g3d_swap_buffers (drv=<value optimized out>, dpy=<value optimized out>, surf=0x2156820) at common/egl_g3d_api.c:602
#8  0x00007ffff7bb789c in eglSwapBuffers (dpy=0x612ba0, surface=0x2156820) at eglapi.c:681
#9  0x000000000040310f in draw_loop (p=0x604280) at xeglthreads.c:301
#10 thread_function (p=0x604280) at xeglthreads.c:567
#11 0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#12 0x0000003f0e6e427d in clone () from /lib64/libc.so.6

(gdb) thread apply all bt

Thread 7 (Thread 0x7fffeeffd700 (LWP 29399)):
#0  0x0000003f0f20e1ac in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003f0f2094a4 in _L_lock_997 () from /lib64/libpthread.so.0
#2  0x0000003f0f2092ba in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00000000004032cc in draw_loop (p=0x6042c8) at xeglthreads.c:249
#4  thread_function (p=0x6042c8) at xeglthreads.c:567
#5  0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#6  0x0000003f0e6e427d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fffef7fe700 (LWP 29398)):
#0  0x0000003f0e634065 in raise () from /lib64/libc.so.6
#1  0x0000003f0e635a16 in abort () from /lib64/libc.so.6
#2  0x0000003f0e62c8a5 in __assert_fail () from /lib64/libc.so.6
#3  0x00007ffff76e4653 in radeon_bo_unmap (_mgr=<value optimized out>) at r600_priv.h:184
#4  radeon_bo_pbmgr_flush_maps (_mgr=<value optimized out>) at radeon_bo_pb.c:264
#5  0x00007ffff7730b91 in r600_context_flush (ctx=0x1bc6f30) at r600_hw_context.c:1085
#6  0x00007ffff6f96468 in st_context_flush (stctxi=<value optimized out>, flags=9, fence=<value optimized out>) at state_tracker/st_manager.c:508
#7  0x00007ffff7968971 in egl_g3d_swap_buffers (drv=<value optimized out>, dpy=<value optimized out>, surf=0x2156820) at common/egl_g3d_api.c:602
#8  0x00007ffff7bb789c in eglSwapBuffers (dpy=0x612ba0, surface=0x2156820) at eglapi.c:681
#9  0x000000000040310f in draw_loop (p=0x604280) at xeglthreads.c:301
#10 thread_function (p=0x604280) at xeglthreads.c:567
#11 0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#12 0x0000003f0e6e427d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7fffeffff700 (LWP 29397)):
#0  0x0000003f0e6ad5ed in nanosleep () from /lib64/libc.so.6
#1  0x0000003f0e6dce14 in usleep () from /lib64/libc.so.6
#2  0x0000000000402bba in draw_loop (p=0x604238) at xeglthreads.c:307
#3  thread_function (p=0x604238) at xeglthreads.c:567
#4  0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#5  0x0000003f0e6e427d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7ffff4ade700 (LWP 29396)):
#0  0x0000003f0f20e1ac in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003f0f2094a4 in _L_lock_997 () from /lib64/libpthread.so.0
#2  0x0000003f0f2092ba in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x000000000040319c in draw_loop (p=0x6041f0) at xeglthreads.c:299
#4  thread_function (p=0x6041f0) at xeglthreads.c:567
#5  0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#6  0x0000003f0e6e427d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7ffff52df700 (LWP 29395)):
#0  0x0000003f0f20e1ac in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003f0f2094a4 in _L_lock_997 () from /lib64/libpthread.so.0
#2  0x0000003f0f2092ba in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x000000000040319c in draw_loop (p=0x6041a8) at xeglthreads.c:299
#4  thread_function (p=0x6041a8) at xeglthreads.c:567
#5  0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#6  0x0000003f0e6e427d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7ffff5ae0700 (LWP 29394)):
#0  0x0000003f0f20e1ac in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003f0f2094a4 in _L_lock_997 () from /lib64/libpthread.so.0
#2  0x0000003f0f2092ba in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00000000004032cc in draw_loop (p=0x604160) at xeglthreads.c:249
#4  thread_function (p=0x604160) at xeglthreads.c:567
#5  0x0000003f0f206d5b in start_thread () from /lib64/libpthread.so.0
#6  0x0000003f0e6e427d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ffff7ba7740 (LWP 29391)):
#0  0x0000003f0e6ad5ed in nanosleep () from /lib64/libc.so.6
#1  0x0000003f0e6dce14 in usleep () from /lib64/libc.so.6
#2  0x0000000000402596 in event_loop (argc=<value optimized out>, argv=<value optimized out>) at xeglthreads.c:389
#3  main (argc=<value optimized out>, argv=<value optimized out>) at xeglthreads.c:743
```

Appears to be fixed with Mesa git as of 2010-12-21 and kernel 2.6.37-rc6.