Here is my first attempt to use DMA buffers for vertex submission. It depends on https://bugs.freedesktop.org/show_bug.cgi?id=6209

As this is my first attempt at hacking on the drm, any comments would be greatly appreciated.
Created attachment 4911
mach64: DMA buffers for vertices - patch for drm
Created attachment 4912
mach64-dri: DMA buffers for vertices - patch for dri
Created attachment 4913
mach64-ddx: DMA buffers for vertices - patch for ddx
Created attachment 4925
mach64-drm: DMA buffers for vertices - try 1

Set the discard flag of the DMA buffer correctly.
Created attachment 5123
mach64-drm: DMA buffers for vertices - try 2

Enable testing of register values in the vertex buffer.
(In reply to comment #5)
> Created an attachment (id=5123)
> mach64-drm: DMA buffers for vertices - try 2
>
> Enable testing of register values in the vertex buffer.

It's good to see this driver getting some attention again; I'm sorry I don't have time to help out with the code right now.

I wanted to point out that if you go this route (using client DMA buffers for vertices), you'll need to unmap the buffers from user space before verifying and submitting them to make this secure. If you look at the CVS history, you'll see that we actually used to use DMA buffers, but changed to the current method when beginning the work to make the driver secure. We decided that copying the data from client memory into a pool of unmapped DMA buffers in the kernel would be faster than unmapping and re-mapping the buffers for each submit. Copying the user-space data into DMA buffers allocated from the mapped pool was a stopgap until we had a way to allocate unmapped buffers in the kernel for this purpose.

It's been a while since I looked at the code in detail, but the problem at the time was a limitation of the DRM buffer management infrastructure. As I recall, there wasn't an easy way to create two sets of DMA buffers, one mapped to user space and one which wasn't mapped. You may want to look into the mailing list archives for more discussion on this. I think that this was the last remaining issue in securing the driver.

-Leif
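To make the point above concrete, here is a minimal sketch of the copy-then-verify pattern, written in the style of the DRM code of that era (DRM_COPY_FROM_USER, DRM_ERR). Every sketch_* name is a hypothetical placeholder, not the actual mach64 code:

/* A minimal sketch (not the actual mach64 code) of the "copy, then
 * verify" pattern: the client's data is copied into a kernel-private
 * DMA buffer first, so the client cannot modify it between
 * verification and submission. Error paths should also return the
 * buffer to the freelist; omitted for brevity. */
static int sketch_dispatch_vertex(drm_device_t *dev,
                                  void __user *prim, unsigned long used)
{
    drm_buf_t *copy_buf;
    u32 *data;
    unsigned long i = 0;

    copy_buf = sketch_get_private_buffer(dev); /* never mapped to user space */
    if (copy_buf == NULL)
        return DRM_ERR(EAGAIN);

    /* Snapshot the client data; from here on the client can no longer
     * change what gets verified and DMA'd. */
    if (DRM_COPY_FROM_USER(copy_buf->address, prim, used))
        return DRM_ERR(EFAULT);

    /* Verify the register commands in the private copy. */
    data = (u32 *)copy_buf->address;
    while (i < used / sizeof(u32)) {
        u32 header = data[i];
        if (!sketch_register_allowed(header))
            return DRM_ERR(EACCES);
        i += 1 + sketch_dword_count(header); /* header + payload */
    }

    return sketch_queue_dma(dev, copy_buf, used);
}

The crucial property is that verification runs on the kernel's private copy, so a client racing against the ioctl cannot swap in different register commands after the check.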
(In reply to comment #6)
> It's been a while since I looked at the code in detail, but the problem at the
> time was a limitation of the DRM buffer management infrastructure. As I recall,
> there wasn't an easy way to create two sets of DMA buffers, one mapped to
> userspace, and one which wasn't mapped.

I did something that sounds like what you're trying to do in the Savage driver, for command DMA. Instead of using the DMA buffer infrastructure, I simply created a map of a chunk of AGP memory that can't be mapped by unprivileged clients. The DRM uses that map as a command buffer with its own aging and buffer management implementation.

I was going to try something like that in the Mach64 driver. Not sure how feasible this is or if there are any hardware issues that may prevent this.
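For reference, a rough sketch of what such a Savage-style setup could look like, assuming the in-kernel drm_addmap() interface of that era; the flag policy here is an illustrative assumption, not the actual Savage code:

/* Rough sketch of a Savage-style private command buffer: reserve a
 * chunk of AGP memory as a DRM map that unprivileged clients get no
 * write access to, and let the DRM manage it with its own aging.
 * _DRM_READ_ONLY prevents clients from writing through a mapping;
 * the kernel fills in commands through its own mapping. */
static int sketch_init_cmd_dma(drm_device_t *dev, unsigned int agp_offset,
                               unsigned int size, drm_local_map_t **cmd_dma)
{
    return drm_addmap(dev, agp_offset, size, _DRM_AGP,
                      _DRM_READ_ONLY | _DRM_KERNEL, cmd_dma);
}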
(In reply to comment #6)
> (In reply to comment #5)
> > Created an attachment (id=5123)
> > mach64-drm: DMA buffers for vertices - try 2
> >
> > Enable testing of register values in the vertex buffer.
>
> It's good to see this driver getting some attention again; I'm sorry I don't
> have time to help out with the code right now.
>
> I wanted to point out that if you go this route (using client DMA buffers for
> vertices), you'll need to unmap the buffers from userspace before verifying and
> submitting them to make this secure. If you look at the CVS history, you'll see
> that we actually used to use DMA buffers, but changed to the current method when
> beginning the work to make the driver secure. We decided that copying the data
> from client memory into a pool of unmapped DMA buffers in the kernel would be
> faster than unmapping and re-mapping the buffers for each submit. Copying the
> userspace data into DMA buffers allocated from the mapped pool was a stopgap
> until we had a way to allocate unmapped buffers in the kernel for this purpose.

Grrh, I knew I was missing something basic when I tried to touch this stuff ... also a bit stupid of me to lurk and search mesa-dev for mach64 DMA instead of dri-devel. I'll probably do my "homework" over the weekend.

Just a question to make sure I am not completely mixed up: as it stands now, is drm_blit secure? I mean, a malicious client can still change the contents of the DMA buffer after submission to the drm in the case of a blit also, esp. the first MACH64_HOSTDATA_BLIT_OFFSET bytes.

> It's been a while since I looked at the code in detail, but the problem at the
> time was a limitation of the DRM buffer management infrastructure. As I recall,
> there wasn't an easy way to create two sets of DMA buffers, one mapped to
> userspace, and one which wasn't mapped. You may want to look into the mailing
> list archives for more discussion on this. I think that this was the last
> remaining issue in securing the driver.
>
> -Leif

Thanks for your comments and your work on open source drivers,
george.
What about the following approach: do not map the DMA buffers to user space.

For dma_vertex, nothing changes. For dma_blit, the client submits a pointer to user-space memory. In the AGP case nothing changes, since the default method is AGP texturing (in fact, "local_textures" do not currently work with AGP, bug #6209). In the PCI case, the simple thing is a copy_from_user to a private DMA buffer. If the performance regression is unacceptable, we can change the blit ioctl to submit w/h/pitch parameters and turn the memcpy currently done in user space into a copy_from_user. I presume that it's easy to determine that the pointer actually points to memory owned by the process.

Hopefully, we can reuse the current buffer management routines. If it is possible to do drmAddBufs without drmMapBufs, then very few changes are required (I saw a comment in drm_mapbufs that PCI buffers are actually mapped in drm_addbufs ...).

Sorry if I am wasting your time with uninformed assumptions,
george.
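A sketch of what the reworked blit ioctl argument could look like under this approach; the field names are illustrative guesses, not a final drm_mach64_blit_t:

/* Hypothetical blit ioctl argument: the client passes a pointer to its
 * own memory plus the layout, and the kernel does the copy into a
 * private DMA buffer with copy_from_user(). Note that copy_from_user()
 * itself rejects pointers that do not refer to readable memory of the
 * calling process, so no separate ownership check is required. */
typedef struct sketch_mach64_blit {
    void *buf;              /* user-space pointer to the pixel data */
    int pitch;              /* pixels per line in the source image */
    int offset;             /* destination offset in card memory */
    int format;             /* texel format */
    unsigned short x, y;    /* destination rectangle */
    unsigned short width, height;
} sketch_mach64_blit_t;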
(In reply to comment #9)
> What about the following approach:
>
> Do not map the DMA buffers to user-space.
>
> [...]
>
> Hopefully, we can reuse the current buffer management routines. If it is
> possible to do drmAddBufs without drmMapBufs, then very little changes are
> required (I saw a comment in drm_mapbufs that PCI buffers are actually mapped
> in drm_addbufs ...).

I'm trying to tackle this by mapping the DMA buffers READ_ONLY.

My current understanding is that PCI DMA buffers do actually get mapped in drm_mapbufs:

drm_mapbufs -> do_mmap(..., 0) -> drm_mmap -> drm_mmap_dma

If this is correct, can someone piggyback the following change onto the description of drm_mapbufs?

 * Maps the AGP, SG or PCI buffer region with do_mmap(), and copies information
 * about each buffer into user space. For PCI buffers, it calls do_mmap() with
 * offset equal to 0, which drm_mmap interprets as PCI buffers and calls
 * drm_mmap_dma().
Start over. The following patches implement the suggestion in comment 9. For AGP, they reserve the AGP memory to be used for DMA buffers as READ_ONLY. For PCI, they make drm_mmap_dma return as soon as it is called to prevent mapping of the DMA buffers; this will have to be handled with a flag in drm_buf_desc_t. mach64_dma_blit was also converted to submit a pointer to user space, and the memcpy currently done in user space was substituted with a copy_from_user (I presume this is secure or can easily be made so ...). Wrt performance for PCI, I see a ~12% speedup for some Mesa demos I tested.

Also, the patches contain some other, mainly cosmetic, changes:

o factor out from mach64_dma_dispatch_vertex the code to reclaim a used buffer; it is now used by mach64_dma_dispatch_blit also
o factor out from mach64_freelist_get the code to reclaim a completed buffer; this was to improve readability for me
o move the memory reservation for the DMA ring in the PCI case to the DDX; this unifies a couple of PCI/AGP code paths for ring memory in the drm

I hope I haven't managed to end up on ignore lists with this bug report; any comments are greatly appreciated,
george.
Created attachment 5269
private DMA buffers / mach64-ddx
Created attachment 5270
private DMA buffers / mach64-dri
Created attachment 5271
private DMA buffers / mach64-drm
Created attachment 5283
private DMA buffers / mach64-ddx - try 1

Add DRM_PCI_READ_ONLY flag in drmBufDescFlags. This flag also has to be added in:

o libdrm/xf86drm.h
o xorg/hw/xfree86/os-support/xf86drm.h
Created attachment 5284
private DMA buffers / mach64-drm - try 1

Add DRM_PCI_READ_ONLY flag in drmBufDescFlags. When drm_mmap_dma sees this flag (set from drmAddBufs), it maps the PCI DMA buffers as read-only (I just copied the code from drm_mmap). An additional flag is needed, since PCI DMA buffers do not have an associated map.
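For the curious, a minimal sketch of what the read-only mapping amounts to inside drm_mmap_dma(), modeled on the _DRM_READ_ONLY handling that drm_mmap() already does for regular maps; the helper name is made up and the flag test around it is omitted:

/* Write-protect a DMA buffer mapping. Requires <linux/mm.h> and
 * <asm/pgtable.h> for VM_* and _PAGE_RW. */
static void sketch_wrprotect_dma_mapping(struct vm_area_struct *vma)
{
    /* Remove the write permission from the VMA so user space faults
     * on any store to the DMA buffers ... */
    vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
#if defined(__i386__) || defined(__x86_64__)
    /* ... and clear the RW bit in the page protection on x86. */
    pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
#endif
}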
Anybody willing to review this? Needless to say, I am willing to make any changes or submit in pieces if this is in the right direction and I am not still confused.
I'd like to review your patches. Someone needs to kick my b*tt so I get a card and try this stuff. I also got a private email from someone interested in mach64. I suggested to try your patches. Give me a few weekends and don't forget to remind me every once in a while. ;-)
> Give me a few weekends and don't forget
> to remind me every once in a while. ;-)

What are you doing this weekend? Do you like reviewing drm patches? ;-)
(In reply to comment #19)
> > Give me a few weekends and don't forget
> > to remind me every once in a while. ;-)
>
> what are you doing this weekend? do you like reviewing drm patches ;-)

Yep. I haven't forgotten about it yet. I spent the last few evenings upgrading my home box to Dapper, so I now have modular Xorg by default. I also installed a PCI Mach64 with 4 MB RAM. I'll get and build the relevant sources from GIT/CVS soon.

I can't promise anything for this weekend though. I have to watch the Soccer World Cup on Saturday (Germany against Sweden) ;-) and Sunday is booked for a company family event.

Assigning to myself and adding dri-devel to the CC list to maintain visibility.
Created attachment 6234
private DMA buffers / mach64-drm - try 2

This is the same as the previous version with the mach64 major version bumped. It is a patchset with separate commits for the preparation code, the changes to the DRM core and, lastly, the backwards-incompatible changes to the mach64 DRM.
Created attachment 6235
private DMA buffers / mach64-ddx - try 2

Require a new mach64 DRM; also free the PCI DMA ring memory.
Created attachment 6236
private DMA buffers / mach64-dri - try 1

Require a new mach64 DRM.
Created attachment 6241
private DMA buffers / mach64-ddx - try 3

Stupid: check that the PCI DMA ring was actually allocated before freeing it.
Still untested, but this looks pretty good. You're not adding any performance penalty, because someone has to copy data into the DMA buffers for blits anyway. You just moved the copy from user space to kernel space and eliminated an ioctl, which even improved performance. Neat! It's also nice that you bundled all changes that affect the DRM interfaces in one diff.

I have just a few little comments for improvements:

- You said it yourself: mach64_freelist_put should be moved to mach64_dma.c
- Please also bump the driver date in mach64_drv.h
- mach64FireBlitLocked should be able to handle EAGAIN or pass it back to the caller. This is a new error condition for the blit ioctl that should be handled more gracefully. This sort of error was handled in mach64GetBufferLocked previously.

Another thing to keep in mind is that you don't have a solution for BSD, because you only implement the DRM_PCI_BUFFER_RO flag for Linux. I'm not sure how many people care about mach64 on BSD. I guess on BSD the flag would simply be ignored, so PCI DMA buffers would still be mapped for writing.
Created attachment 6270
private DMA buffers / mach64-drm - try 3

- mach64_freelist_put moved to mach64_dma.c
- bump the driver date in mach64_drv.h
Created attachment 6271
private DMA buffers / mach64-dri - try 2

Handle EAGAIN in mach64FireBlitLocked: call drmCommandWrite up to MACH64_TIMEOUT times when EAGAIN is returned. Also handle EAGAIN in mach64FlushVerticesLocked.
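As an illustration of that change, a sketch of the retry loop assuming the libdrm drmCommandWrite() interface; besides DRM_MACH64_BLIT, drm_mach64_blit_t and MACH64_TIMEOUT, the names are illustrative rather than the exact Mesa code:

/* Sketch of the EAGAIN handling in mach64FireBlitLocked: the blit ioctl
 * now returns EAGAIN when no private DMA buffer is free, so retry a
 * bounded number of times instead of failing outright. Requires
 * <errno.h>, "xf86drm.h" and "mach64_drm.h". */
static int sketch_fire_blit(int fd, drm_mach64_blit_t *blit)
{
    int i = 0, ret;

    do {
        ret = drmCommandWrite(fd, DRM_MACH64_BLIT, blit, sizeof(*blit));
    } while (ret == -EAGAIN && ++i < MACH64_TIMEOUT);

    return ret;
}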
Any chance that this gets reviewed for X.org 7.2?

thanks,
george.
(In reply to comment #28)
> Any chance that this gets reviewed for X.org 7.2?

I just made it a blocker for 7.2.
(In reply to comment #29)
> I just made it a blocker for 7.2

Sorry. I have no good excuse for neglecting this for so long. My bad conscience is bugging me, and in fact I just started looking at this again last night. Alex, I suspect you are more fluent in git and more up-to-date on current development than I am, so feel free to beat me to it.

There are some changes that affect the shared DRM code (not mach64-specific). I guess Dave Airlie should give his ok before this gets committed. Otherwise, last time I reviewed this it looked pretty good, and George addressed the issues I pointed out very quickly. So I am confident that this will be OK for 7.2. What's the planned release date?
(In reply to comment #30)
> So I am confident that this will be OK for 7.2. What's
> the planned release date?

http://wiki.x.org/wiki/ReleaseSchedule
I've gone over your latest patches. They look good to me. I applied them to my local git trees and my Mesa working copy and tested them briefly on my amd64 with a PCI mach64. They seem to work nicely.

I ran into some unrelated problems with the latest DRM that I want to get resolved first before pushing your patches upstream. The latest Mesa seems to have some problems too, regarding texture management. However, I saw those before applying your Mesa patch, too.

I'll probably push your patches beginning of the week. Good work! And thanks for your patience.
Created attachment 7232
private DMA buffers / mach64-drm - try 4

This is the last one, please push this.

- drop a one-liner for a warning that was fixed otherwise (the one-liner would have reintroduced the warning)
- add a one-liner for a positive return value (ENOMEM -> DRM_ERR(ENOMEM))
- move the drm_vm.c patch a few lines up so that drm_mmap_dma() does things in the same order as drm_mmap()
(In reply to comment #32)
> I've gone over your latest patches. They look good to me. I applied them to my
> local git trees and my Mesa working copy and tested them briefly on my amd64
> with a PCI mach64. They seem to work nicely. I ran into some unrelated problems
> with the latest DRM that I want to get resolved first before pushing your
> patches upstream.

Wrt mach64 IRQ, my card has an on-board jumper for enabling the IRQ. Could this be the case with yours?

> The latest Mesa seems to have some problems too, regarding
> texture management. However, I saw those before applying your Mesa patch, too.

I ported the DRI mach64 driver over to the common infrastructure for texture memory management in bug #7260. Does this help with the problems you saw?

> I'll probably push your patches beginning of the week.

Great! Thanks for your time and feedback.
I saw that you have GIT commit access by now, at least to Xorg. Do you have write access to DRM and Mesa as well? In that case, do you want to push your changes yourself?
(In reply to comment #35)
> I saw that you have GIT commit access by now, at least to Xorg. Do you have
> write access to DRM and Mesa as well? In that case, do you want to push your
> changes yourself?

He does now.
(In reply to comment #36)
> (In reply to comment #35)
> > I saw that you have GIT commit access by now, at least to Xorg. Do you have
> > write access to DRM and Mesa as well? In that case, do you want to push your
> > changes yourself?
>
> He does now.

I can definitely handle the xorg and drm GIT parts. Is it OK for you to commit to Mesa CVS? However strange, I am much more comfortable with git than with cvs.
(In reply to comment #37)
> I can definitely handle the xorg and drm GIT parts. Is it OK for you to commit
> to Mesa CVS? However strange, I am much more comfortable with git than with cvs.

Ah, the new generation. ;-) Sure. I'll watch the commit mailing lists and commit once you're done.
I saw you committed your DRM and DDX changes. I just committed your Mesa changes too. Moving to Fixed.
With these changes can we finally turn on mach64 DRI support by default in the ati DDX?
(In reply to comment #40)
> With these changes can we finally turn on mach64 DRI support by default in the
> ati DDX?

I thought it was. I didn't need to do anything special to build it with DRI support. I guess it was just never enabled on standard distros because the default kernels didn't have a DRM driver.
(In reply to comment #41)
> I thought it was. I didn't need to do anything special to build it with DRI
> support. I guess it was just never enabled on standard distros because the
> default kernels didn't have a DRM driver.

I guess the question is whether the mach64 DRM is secure now. Personally, I have not done any security auditing; this bug report is about fixing the last known issue in securing the driver. So I guess the status is "pending a last serious security audit".