The radeon driver's RENDER acceleration code allocates offscreen memory using xf86AllocateOffscreenLinear(). However, the "size" argument to that call is expected to be in units of screen pixels (and therefore depends on the current screen's depth), and the driver gets this wrong. Currently the code looks like this:

    tex_bytepp = PICT_FORMAT_BPP(format) >> 3;
    [...]
    dst_pitch = (width * tex_bytepp + 31) & ~31;
    size = dst_pitch * height;
    AllocateLinear(size);

This is wrong because the result is the size of the texture in bytes, not in screen pixels. The size has to be calculated like this (where bpp = pScrn->bitsPerPixel >> 3):

    size = ((dst_pitch + bpp - 1) / bpp) * height;
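To make the byte-vs-pixel distinction concrete, here is a minimal sketch (not driver code) of the two calculations side by side. The names width, height, format and pScrn mirror the report above; everything else is illustrative only:

    /* bytes per texture pixel vs. bytes per screen pixel */
    int tex_bytepp = PICT_FORMAT_BPP(format) >> 3;
    int bpp        = pScrn->bitsPerPixel >> 3;

    /* destination pitch, in bytes, rounded up to a 32-byte boundary */
    int dst_pitch  = (width * tex_bytepp + 31) & ~31;

    /* WRONG: a byte count passed where a pixel count is expected */
    int size_bytes  = dst_pitch * height;

    /* RIGHT: round the pitch up to whole screen pixels, then multiply */
    int size_pixels = ((dst_pitch + bpp - 1) / bpp) * height;

At depth 24/32 (bpp = 4) the byte count is four times too large; at depth 8 it would happen to be correct, which is presumably why the bug can go unnoticed in some configurations.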
*** Bug 1196 has been marked as a duplicate of this bug. ***
Has this been fixed in CVS? The code now looks like this:

    tex_bytepp = PICT_FORMAT_BPP(format) >> 3;
    dst_pitch = (width * tex_bytepp + 63) & ~63;
    size = dst_pitch * height;
    AllocateLinear(pScrn, size)
        --(sizeNeeded=size)--> xf86AllocateOffscreenLinear(pScrn->pScreen, sizeNeeded, 32, NULL, RemoveLinear, info);
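For reference, one plausible shape of a fix is sketched below; this is not the content of attachment 3737, just an illustration under the assumption that the byte count is converted to screen pixels inside the allocation helper. RemoveLinear and info are taken from the call shown above; the helper name AllocateLinearPixels is hypothetical.

    #include "xf86.h"
    #include "xf86fbman.h"

    /* Hypothetical helper: take a size in bytes, convert it to the screen-pixel
     * units that xf86AllocateOffscreenLinear() expects, then allocate. */
    static FBLinearPtr
    AllocateLinearPixels(ScrnInfoPtr pScrn, int size_bytes,
                         RemoveLinearCallbackProcPtr RemoveLinear, pointer info)
    {
        int bpp = pScrn->bitsPerPixel >> 3;              /* bytes per screen pixel   */
        int size_pixels = (size_bytes + bpp - 1) / bpp;  /* round up to whole pixels */

        return xf86AllocateOffscreenLinear(pScrn->pScreen, size_pixels, 32,
                                           NULL, RemoveLinear, info);
    }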
Created attachment 3737: Proposed fix. Please review and/or test with XAA. Look for RENDER regressions (or possibly even improvements :).
Comment on attachment 3737 (Proposed fix): Looks obviously correct to me.
Comment on attachment 3737 (Proposed fix): Approved, I'll check this in.
fixed, thanks