drm_prime walks a *list* of dmabufs in drm_gem_prime_handle_to_fd(), drm_gem_prime_fd_to_handle() and on object close, for a kind-of pointless uABI, but meh.

/* initial implementation using a linked list - todo hashtab */
struct drm_prime_file_private {
	struct list_head head;
	struct mutex lock;
};

Running 4KiB-thrash-dmabuf-* is enough to trigger lockups:

[94466.160428] INFO: rcu_sched self-detected stall on CPU
[94466.160460]  1-...: (5250 ticks this GP) idle=ed9/140000000000001/0 softirq=17706827/17706827 fqs=5222
[94466.160473]  (t=5250 jiffies g=2984079 c=2984078 q=2663)
[94466.160485] Task dump for CPU 1:
[94466.160494] gem_concurrent_ R  running task        0 22751  22747 0x0000000a
[94466.160508]  ffffffff81a2e4c0 ffff88027fd03e70 ffffffff81097ac9 0000000000000001
[94466.160526]  ffffffff81a2e4c0 ffff88027fd03e88 ffffffff81099c37 0000000000000002
[94466.160546]  ffff88027fd03eb8 ffffffff810b9cee ffff88027fd157c0 ffffffff81a2e4c0
[94466.160563] Call Trace:
[94466.160571]  <IRQ>  [<ffffffff81097ac9>] sched_show_task+0xa9/0x110
[94466.160592]  [<ffffffff81099c37>] dump_cpu_task+0x37/0x40
[94466.160601]  [<ffffffff810b9cee>] rcu_dump_cpu_stacks+0x8e/0xe0
[94466.160611]  [<ffffffff810bd9b8>] rcu_check_callbacks+0x4c8/0x770
[94466.160621]  [<ffffffff810c1239>] update_process_times+0x39/0x60
[94466.160632]  [<ffffffff810cdc0b>] tick_periodic+0x2b/0x70
[94466.160641]  [<ffffffff810cdc75>] tick_handle_periodic+0x25/0x70
[94466.160652]  [<ffffffff81034ab8>] local_apic_timer_interrupt+0x38/0x60
[94466.160664]  [<ffffffff8147e15d>] smp_apic_timer_interrupt+0x3d/0x50
[94466.160675]  [<ffffffff8147c94c>] apic_timer_interrupt+0x7c/0x90
[94466.160682]  <EOI>  [<ffffffff8134f643>] ? drm_prime_remove_buf_handle_locked+0x33/0x80
[94466.160703]  [<ffffffff8136f00c>] ? dma_buf_put+0x1c/0x40
[94466.160713]  [<ffffffff813366f8>] drm_gem_object_release_handle+0x88/0xa0
[94466.160724]  [<ffffffff8127ec7f>] idr_for_each+0x9f/0xe0
[94466.160733]  [<ffffffff81336670>] ? drm_gem_object_handle_unreference_unlocked+0x110/0x110
[94466.160746]  [<ffffffff81336d60>] drm_gem_release+0x20/0x30
[94466.160755]  [<ffffffff81335bd3>] drm_release+0x3e3/0x4d0
[94466.160766]  [<ffffffff8115dfce>] __fput+0xce/0x1c0
[94466.160775]  [<ffffffff8115e0fe>] ____fput+0xe/0x10
[94466.160784]  [<ffffffff8108b203>] task_work_run+0x73/0x90
[94466.160794]  [<ffffffff81073217>] do_exit+0x367/0xa80
[94466.160885]  [<ffffffff8117a864>] ? mntput+0x24/0x40
[94466.160895]  [<ffffffff810745d3>] do_group_exit+0x43/0xb0
[94466.160904]  [<ffffffff81074654>] SyS_exit_group+0x14/0x20
[94466.160913]  [<ffffffff8147bc97>] entry_SYSCALL_64_fastpath+0x12/0x66

and it makes every dmabuf test orders of magnitude slower than it should be.
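For context, here is a rough sketch (not the exact kernel code; the drm_prime_member layout and helper name are approximations) of the linear walk the list-based drm_prime_file_private implies. It runs on every handle_to_fd/fd_to_handle lookup and once per object on close, under prime_fpriv->lock:

struct drm_prime_member {
	struct list_head entry;
	struct dma_buf *dma_buf;
	uint32_t handle;
};

/* O(n) in the number of imported/exported buffers tracked per file;
 * the close path repeats this walk for every handle being released. */
static struct drm_prime_member *
prime_lookup_buf(struct drm_prime_file_private *prime_fpriv,
		 struct dma_buf *dma_buf)
{
	struct drm_prime_member *member;

	list_for_each_entry(member, &prime_fpriv->head, entry)
		if (member->dma_buf == dma_buf)
			return member;

	return NULL;
}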
And drm_intel_bo_gem_create_from_prime doesn't help.
Chris, can you confirm which platform(s) are impacted and whether this is resolved (if so, can you point out the sha1)?
Everything is affected. And users of libdrm_intel doubly so.
As reference: Chris' patch available here: https://patchwork.freedesktop.org/series/12787/
(In reply to yann from comment #4)
> As reference: Chris' patch available here:
> https://patchwork.freedesktop.org/series/12787/

and also: https://patchwork.freedesktop.org/series/12782/ (for libdrm)
Step 1: kernel

commit da2bf7e805921494df5ebe18e99c790b1fbb450c
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Mon Sep 26 21:44:14 2016 +0100

    drm: Convert prime dma-buf <-> handle to rbtree
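Roughly, the rbtree conversion keeps the per-file PRIME bookkeeping in two rb_roots (one keyed by dma_buf, one by handle), so each lookup becomes O(log n) instead of O(n). A sketch along those lines (field and helper names are approximate, not a verbatim copy of the commit):

struct drm_prime_member {
	struct dma_buf *dma_buf;
	uint32_t handle;
	struct rb_node dmabuf_rb;	/* node in prime_fpriv->dmabufs */
	struct rb_node handle_rb;	/* node in prime_fpriv->handles */
};

struct drm_prime_file_private {
	struct mutex lock;
	struct rb_root dmabufs;		/* keyed by dma_buf pointer */
	struct rb_root handles;		/* keyed by GEM handle */
};

static struct dma_buf *
prime_lookup_buf_by_handle(struct drm_prime_file_private *prime_fpriv,
			   uint32_t handle)
{
	struct rb_node *rb = prime_fpriv->handles.rb_node;

	while (rb) {
		struct drm_prime_member *member =
			rb_entry(rb, struct drm_prime_member, handle_rb);

		if (member->handle == handle)
			return member->dma_buf;
		else if (member->handle < handle)
			rb = rb->rb_right;	/* assumes the same ordering on insert */
		else
			rb = rb->rb_left;
	}

	return NULL;
}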
libdrm

commit 9e24d0c54b162b443e3e144740deb0e1d5f8760b
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Thu Sep 22 14:44:50 2016 +0100

    intel: Migrate handle/name lookups from linear lists to hashtables

    Walking a linear list to find a matching PRIME handle or flinked name
    does not scale and becomes a major burden with just a few objects. That
    said, the fixed-size hash is not much better; it just buckets the lookup
    into a few separate chains rather than one long one.

    References: https://bugs.freedesktop.org/show_bug.cgi?id=94631
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
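To illustrate the "fixed-size hash" trade-off the commit message describes: handles get bucketed so a lookup walks one short collision chain instead of the whole list. A purely hypothetical sketch (names and bucket count are made up, not the actual libdrm code):

#define BO_HASH_SIZE 64			/* assumed power-of-two bucket count */

struct bo_hash_node {
	uint32_t key;			/* GEM handle or flink name */
	void *bo;			/* the buffer object this key maps to */
	struct bo_hash_node *next;	/* collision chain within one bucket */
};

static struct bo_hash_node *bo_hash[BO_HASH_SIZE];

static unsigned int bo_hash_bucket(uint32_t key)
{
	return key & (BO_HASH_SIZE - 1);
}

/* Still O(chain length), but the chain is roughly n/BO_HASH_SIZE entries
 * instead of the full n-entry linear list. */
static struct bo_hash_node *bo_hash_lookup(uint32_t key)
{
	struct bo_hash_node *node;

	for (node = bo_hash[bo_hash_bucket(key)]; node; node = node->next)
		if (node->key == key)
			return node;

	return NULL;
}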