Bug 60200 - radeon_bo with virtual address referencing mismatch
Summary: radeon_bo with virtual address referencing mismatch
Status: RESOLVED FIXED
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/Gallium/r600
Version: git
Hardware: x86-64 (AMD64) Linux (All)
Importance: medium normal
Assignee: Default DRI bug account
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-02-02 18:56 UTC by Martin Andersson
Modified: 2013-02-12 01:06 UTC
CC List: 0 users

See Also:
i915 platform:
i915 features:


Attachments
patch (961 bytes, text/plain)
2013-02-02 18:56 UTC, Martin Andersson
dmesg (60.97 KB, text/plain)
2013-02-02 18:57 UTC, Martin Andersson
weston (5.68 KB, text/plain)
2013-02-02 18:57 UTC, Martin Andersson

Description Martin Andersson 2013-02-02 18:56:55 UTC
Created attachment 74101
patch

First, a disclaimer: this code is unfamiliar to me, so there is probably stuff I have misunderstood.

When I try to start weston, it immediately crashes with the following error:
radeon: Failed to allocate a buffer:
radeon:    size      : 1 bytes
radeon:    alignment : 4096 bytes
radeon:    domains   : 2

and dmesg says:
[   75.092178] radeon 0000:01:00.0: bo ffff8802233f8800 va 0x00000000 conflict with (bo ffff8802219de400 0x0082C000 0x010F6000)
[   75.092195] radeon 0000:01:00.0: bo ffff8802233f8800 don't has a mapping in vm ffff88022275ac00

Everything works fine if I disable the virtual address code, so the issue is in there somewhere.

I tried to analyze it, and this is what I came up with:

Consider the following scenario:

radeon_bomgr_create_bo: create gem buffer
kernel: create buffer with handle 1, set ref count to 1
radeon_bomgr_create_bo: create radeon_bo with handle 1, set ref count to 1
radeon_bomgr_create_bo: find virtual address for handle 1
radeon_bomgr_find_va: free list empty, use offset 2000
radeon_bomgr_create_bo: map virtual address for handle 1 at offset 2000
kernel: map virtual address for handle 1 at offset 2000

somewhere in userspace: create flink for handle 1
kernel: create flink name 1 for handle 1

radeon_winsys_bo_from_handle: open gem buffer with name 1
kernel: create handle 2, increase ref count to 2
radeon_winsys_bo_from_handle: create radeon_bo with handle 2, set ref count to 1
radeon_winsys_bo_from_handle: find virtual address for handle 2
radeon_bomgr_find_va: free list empty, use offset 6000
radeon_winsys_bo_from_handle: map virtual address for handle 2 at offset 6000
kernel: virtual address already mapped for handle 2, use offset 2000

somewhere in userspace: destroy handle 2
somewhere in userspace: decrease handle 2 ref count to 0
radeon_bo_destroy: close gem buffer with handle 2
kernel: decrease ref count to 1
radeon_bo_destroy: free virtual address with offset 2000
radeon_bomgr_free_va: add virtual address with offset 2000 to free list

radeon_bomgr_create_bo: create gem buffer
kernel: create buffer with handle 3, set ref count to 1
radeon_bomgr_create_bo: create radeon_bo with handle 3, set ref count to 1
radeon_bomgr_create_bo: find virtual address for handle 3
radeon_bomgr_find_va: from free list, use offset 2000
radeon_bomgr_create_bo: map virtual address for handle 3 at offset 2000
kernel: virtual address conflict, offset 2000 already mapped

This is a simplified version of what happens in my case. The issue, at least as I see it, is that the virtual address is freed (added to the free list) even though it is still mapped in the kernel. This happens because the userspace reference counting does not track the radeon_bo object created in radeon_bomgr_create_bo.
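
To make the failure mode concrete, here is a minimal standalone sketch of the free-list behavior, with invented names (this is not the actual winsys code); it replays the trace above and hands out the same offset twice:

/* Minimal standalone sketch of the VA free-list behavior described
 * above. All names are invented; this is not the Mesa winsys code. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct va_hole {
    uint64_t offset;
    struct va_hole *next;
};

static struct va_hole *free_list;        /* freed offsets, reused first */
static uint64_t next_offset = 0x2000;    /* bump allocator otherwise */

static uint64_t find_va(void)
{
    uint64_t offset;

    if (free_list) {
        struct va_hole *hole = free_list;
        offset = hole->offset;
        free_list = hole->next;
        free(hole);                      /* reuse a previously freed offset */
        return offset;
    }
    offset = next_offset;                /* otherwise grow the address range */
    next_offset += 0x4000;
    return offset;
}

static void free_va(uint64_t offset)
{
    struct va_hole *hole = malloc(sizeof(*hole));
    hole->offset = offset;
    hole->next = free_list;
    free_list = hole;
}

int main(void)
{
    uint64_t va1, va2, va3;

    va1 = find_va();    /* handle 1: gets 0x2000, kernel maps it */
    va2 = find_va();    /* handle 2 (same bo): tentatively gets 0x6000 */

    /* kernel: "already mapped, use 0x2000" -> hand 0x6000 back and
     * adopt handle 1's offset */
    free_va(va2);
    va2 = va1;

    /* handle 2 is destroyed: 0x2000 is put on the free list although
     * the kernel mapping made through handle 1 is still live */
    free_va(va2);

    va3 = find_va();    /* handle 3: gets 0x2000 again -> kernel conflict */
    printf("va1=0x%" PRIx64 " va3=0x%" PRIx64 "\n", va1, va3);
    return 0;
}

Compiled as C99, this prints the same offset for va1 and va3, which is exactly the conflict the kernel reports in dmesg.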

The attached patch fixes the issue for me, but is probably not the correct fix.
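
For illustration only (invented names; this is not the attached patch), one direction consistent with the above would be to record whether a radeon_bo actually owns its VA mapping, so a bo that merely adopted an existing mapping never returns the offset to the free list:

/* Sketch only; all names invented. free_va() is the allocator free
 * from the sketch above. */
#include <stdbool.h>
#include <stdint.h>

extern void free_va(uint64_t offset);

/* hypothetical wrapper around the VA-map ioctl: returns true and fills
 * *existing when the kernel reports the bo is already mapped elsewhere */
extern bool kernel_va_map(uint32_t handle, uint64_t offset, uint64_t *existing);

struct radeon_bo {
    uint32_t handle;
    uint64_t va;
    bool     owns_va;   /* true only for the bo that created the mapping */
};

void bo_map_va(struct radeon_bo *bo, uint64_t wanted)
{
    uint64_t existing;

    bo->va = wanted;
    bo->owns_va = true;
    if (kernel_va_map(bo->handle, wanted, &existing)) {
        free_va(wanted);       /* hand the tentative offset straight back */
        bo->va = existing;     /* adopt the existing mapping... */
        bo->owns_va = false;   /* ...without owning it */
    }
}

void bo_destroy(struct radeon_bo *bo)
{
    if (bo->owns_va)
        free_va(bo->va);       /* only the owner returns the VA */
}

This still breaks if the owning bo is destroyed while an adopting bo is alive, so the real fix probably has to share the tracking between all radeon_bo objects that wrap the same kernel buffer, which is essentially the referencing mismatch described above.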

The issue was found and tested on wayland/weston master, Mesa master, and kernel 3.8.0-rc6.
Comment 1 Martin Andersson 2013-02-02 18:57:21 UTC
Created attachment 74102
dmesg
Comment 2 Martin Andersson 2013-02-02 18:57:37 UTC
Created attachment 74103
weston

