Bug 103653 - Unreal segfault since gallium/u_threaded: avoid syncs for get_query_result
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/Gallium/radeonsi
Version: git
Hardware: x86-64 (AMD64) Linux (All)
Importance: medium normal
Assignee: Default DRI bug account
QA Contact: Default DRI bug account
Depends on:
Reported: 2017-11-09 18:28 UTC by Andy Furniss
Modified: 2017-11-10 21:38 UTC
CC List: 1 user

See Also:


Description Andy Furniss 2017-11-09 18:28:32 UTC
On an R9 285 (Tonga), since

commit 244536d3d6b40c1763d1e2b3e7676665afa69101
Author: Nicolai Hähnle <nicolai.haehnle@amd.com>
Date:   Sun Oct 22 17:38:51 2017 +0200

    gallium/u_threaded: avoid syncs for get_query_result
    Queries should still get marked as flushed when flushes are executed
    asynchronously in the driver thread.
    To this end, the management of the unflushed_queries list is moved into
    the driver thread.

I get a segfault when starting the Unreal Elemental demo or Unreal Tournament.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe34f5700 (LWP 7403)]
tc_call_end_query (pipe=0x5301430, payload=0x54c2a48) at util/u_threaded_context.c:374
374        if (!tq->head_unflushed.next)
(gdb) bt
#0  tc_call_end_query (pipe=0x5301430, payload=0x54c2a48) at util/u_threaded_context.c:374
#1  0x00007ffff11bfdaf in tc_batch_execute (job=job@entry=0x54c27c0, thread_index=thread_index@entry=0) at util/u_threaded_context.c:96
#2  0x00007ffff1083830 in util_queue_thread_func (input=input@entry=0x4c37fe0) at u_queue.c:271
#3  0x00007ffff10834d7 in impl_thrd_routine (p=<optimized out>) at ../../include/c11/threads_posix.h:87
#4  0x00007ffff7bc5434 in start_thread () from /lib/libpthread.so.0
#5  0x00007ffff6a1206d in clone () from /lib/libc.so.6
