Bug 6209

Summary: [mach64] AGP DMA buffers not mapped correctly
Product: DRI
Component: DRM/other
Version: DRI git
Hardware: x86 (IA32)
OS: Linux (All)
Status: RESOLVED FIXED
Severity: enhancement
Priority: high
Reporter: George - <fufutos610>
Assignee: Default DRI bug account <dri-devel>
CC: lrbalt
Bug Blocks: 6242    
Attachments:
  xorg.conf (attachment 4883)
  xorg.log (attachment 4884)
  lspci (attachment 4885)

Description George - 2006-03-10 11:31:07 UTC
The problem can be triggered as follows:

add the following option in the Device section of xorg.conf:
Option "local_textures" "true"
and run texobj from Mesa/progs/demos
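
For reference, a minimal Device section with the option in place (the
Identifier value here is illustrative, not taken from the attached xorg.conf):

Section "Device"
    Identifier "Mach64"                  # illustrative name
    Driver     "ati"
    Option     "local_textures" "true"
EndSection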

In this case DRI allocates a texture heap in the framebuffer (instead of AGP,
which is the default) and tries to upload the textures to the framebuffer
using DMA: it allocates a DMA buffer, but when it tries to copy the texture
into the DMA buffer, the process gets killed.

If I comment out the memcpy() in mach64UploadLocalSubImage() (line 588 of
Mesa/src/mesa/drivers/dri/mach64/mach64_texmem.c), texobj runs OK, with
random textures of course.
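
To make the failure mode concrete, here is a rough sketch (hypothetical and
simplified; the real code lives in the mach64 DRI driver) of how a client
touches a DMA buffer through the stock libdrm API. The buffer addresses come
from the mapping the kernel sets up, so when the kernel maps the wrong region
the memcpy() faults:

/* Hypothetical sketch, not the actual driver code; upload_via_dma,
 * idx, texels and size are illustrative names. */
#include <string.h>
#include <xf86drm.h>

static void upload_via_dma(int fd, int idx, const void *texels, size_t size)
{
        drmBufMapPtr bufs = drmMapBufs(fd);  /* mmap all DMA buffers */
        drmBufPtr buf = &bufs->list[idx];    /* the buffer we were granted */

        /* buf->address is the user-space address the kernel mapped;
         * with this bug it points into the wrong region, so this
         * memcpy() gets the process killed with SIGSEGV. */
        memcpy(buf->address, texels, size);

        /* ... dispatch the DMA and drmUnmapBufs(bufs) in the real flow ... */
}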
Comment 1 George - 2006-03-10 11:32:30 UTC
Created attachment 4883 [details]
xorg.conf
Comment 2 George - 2006-03-10 11:34:03 UTC
Created attachment 4884 [details]
xorg.log
Comment 3 George - 2006-03-10 11:36:08 UTC
Created attachment 4885 [details]
lspci

Also, I forgot to mention that all the software (drm, Mesa, Xorg, ati) is from CVS.
Comment 4 George - 2006-03-11 12:20:10 UTC
Changing severity to enhancement, since the mach64 drm is not included in the kernel.

I started looking at the mach64 drm and plan to try fixing it.
Comment 5 George - 2006-03-11 23:30:22 UTC
The problem was that dev->agp_buffer_token was not set, so the mach64 drm
would map the DMA buffers from linear address 0x0. With the one-liner below,
it correctly maps the DMA buffers from the same linear address as the vertex
buffers.

I now plan to make mach64_dma_vertex use DMA buffers.

--- mach64_dma.orig     2006-03-11 14:24:41.000000000 +0200
+++ mach64_dma.c        2006-03-11 14:25:05.000000000 +0200
@@ -834,6 +834,7 @@
                        mach64_do_cleanup_dma(dev);
                        return DRM_ERR(ENOMEM);
                }
+               dev->agp_buffer_token = init->buffers_offset;
                dev->agp_buffer_map =
                    drm_core_findmap(dev, init->buffers_offset);
                if (!dev->agp_buffer_map) {
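
For context: the DRM core's drm_mapbufs() uses that token as the fake mmap
offset which selects the map to hand to the client. Paraphrased (not
verbatim) from the drm_bufs.c of that era:

/* Paraphrased sketch of the relevant part of drm_mapbufs(). */
drm_map_t *map = dev->agp_buffer_map;
unsigned long token = dev->agp_buffer_token;   /* stays 0 if the
                                                * driver never sets it */

/* token picks which map gets mmap'ed; with token == 0 the client is
 * handed whatever map happens to live at offset 0 instead of the DMA
 * buffer region, hence the bogus addresses. */
virtual = do_mmap(filp, 0, map->size,
                  PROT_READ | PROT_WRITE, MAP_SHARED, token);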
Comment 6 George - 2006-04-04 10:47:54 UTC
What about the following approach:

Do not map the DMA buffers to user-space.

For dma_vertex, nothing changes.

For dma_blit, the client submits a pointer to user-space memory.
In the AGP case nothing changes, since the default method is AGP texturing (in
fact "local_textures" does not currently work with AGP, bug #6209).
In the PCI case, the simple approach is a copy_from_user into a private DMA
buffer, as sketched below. If the performance regression is unacceptable, we
can change the blit ioctl to submit w/h/pitch parameters and turn the memcpy
currently done in user-space into a copy_from_user. I presume that it's easy
to determine that the pointer actually points to memory owned by the process.
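
A minimal sketch of that PCI-case idea, assuming a blit ioctl that carries a
user pointer; the struct, field and function names here are hypothetical, not
the actual mach64 interface:

/* Hypothetical sketch; names are illustrative, not the real ioctl. */
typedef struct {
        void __user *buf;       /* client pointer to the texture data */
        int size;               /* number of bytes to blit */
} blit_sketch_t;

static int mach64_blit_sketch(drm_device_t *dev, drm_buf_t *dmabuf,
                              blit_sketch_t *blit)
{
        if (blit->size > dmabuf->total)
                return DRM_ERR(EINVAL);

        /* DRM_COPY_FROM_USER() fails unless the whole source range is
         * readable memory in the calling process, which answers the
         * ownership question above. */
        if (DRM_COPY_FROM_USER(dmabuf->address, blit->buf, blit->size))
                return DRM_ERR(EFAULT);

        /* ... then emit the blit from the private DMA buffer ... */
        return 0;
}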

Hopefully, we can reuse the current buffer management routines. If it is
possible to do drmAddBufs without drmMapBufs, then very few changes are
required (I saw a comment in drm_mapbufs that PCI buffers are actually mapped
in drm_addbufs ...).

Sorry if I am wasting your time with uninformed assumptions,
george.
Comment 7 George - 2006-04-04 10:51:17 UTC
Ignore the previous comment; it was probably posted to the wrong bug because of Bugzilla cookies ...
Comment 8 George - 2006-04-12 10:50:17 UTC
Changed the title to reflect the bug more precisely.
Comment 9 George - 2006-09-16 22:57:51 UTC
Can I commit this?
Comment 10 George - 2006-09-25 13:05:11 UTC
(In reply to comment #9)
> Can I commit this?

It seems that I don't have commit access to mesa/drm. Can I get commit access
to mesa/drm? I have commit access to xorg (account name: gsap7).
Comment 11 George - 2006-10-02 13:00:39 UTC
Committed: eea150e776657faca7d5b76aca75a33dc74fbc9d
