Summary: | Memory corruption when uploading DXT5 cubemap faces | |
---|---|---|---
Product: | Mesa | Reporter: | Daniel Evans <devans>
Component: | Drivers/DRI/i965 | Assignee: | Intel 3D Bugs Mailing List <intel-3d-bugs>
Status: | RESOLVED FIXED | QA Contact: | Intel 3D Bugs Mailing List <intel-3d-bugs>
Severity: | normal | |
Priority: | medium | CC: | jason
Version: | git | |
Hardware: | x86-64 (AMD64) | |
OS: | All | |
Whiteboard: | | |
i915 platform: | | i915 features: |
Attachments: | trace that shows the issue | |
I looked into this a bit earlier today. The texture in question is 4096x4096x6, Y-tiled, and DXT5. With a 4x4 block size and 16 bytes per block, that gives a pitch of 16384, and the size of a single layer is 67108864. That is larger than brw->max_gtt_map_object_size, so we think we can't map it in the GTT (using a fence for detiling). This makes intel_map_texture_image select the BLT-based path, even though we normally wouldn't (for several reasons). However, our blitter code can't handle compressed data at all: it doesn't realize there are blocks involved, assumes it's dealing with a 4096x4096 image with a pitch of 16384, concludes the buffer must be woefully undersized, and asserts. (Without asserts it would probably read/write wildly out-of-bounds data; see the arithmetic sketch below.)

There are several solutions to this:

1. Fall back to untiled for compressed textures that are too big to blit. ("Better to fall off a performance cliff than hit a spike trap.") Easy to implement and backport to stable releases, but awful. We'd need to get the math right to avoid sending more things than necessary off the performance cliff.
2. Take Chris Wilson's "unlimited GTT map!" patch, which apparently exploits a kernel 4.9 feature to eliminate the need for the check. I'm not sure how this works yet, and it would be nice to solve the problem without requiring kernel 4.9.
3. Teach the BLT code about compressed textures. Doable, and probably backportable. (See the sketch below.)
4. Use BLORP for the map-blit path. Jason has patches to make it able to do dumb copies by lying about formats; we could use this for MapTextureImage.

I'm thinking we want #3 (fix the BLT path, and backport it), #2 (make CPU mapping always work), and, longer term, #4 (stop using the BLT).

Latest master doesn't crash anymore, but I think it still fails to do the right thing.

I've sent two patches to the list:
https://lists.freedesktop.org/archives/mesa-dev/2016-December/137523.html
https://lists.freedesktop.org/archives/mesa-dev/2016-December/137524.html

Would you mind trying latest master plus those two patches and letting me know if that solves your problem?

I believe this is fixed on master now. Re-open if it's not.
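For reference, a minimal arithmetic sketch (an editor's illustration in plain C, not Mesa code) of the block math above and of why a block-unaware blit concludes the buffer is undersized:

```c
/* Editor's sketch in plain C (not Mesa code): the block arithmetic above,
 * and the bound a block-unaware blit ends up checking. */
#include <stdio.h>

int main(void)
{
   const unsigned long long w = 4096, h = 4096; /* face size in texels    */
   const unsigned long long bw = 4, bh = 4;     /* DXT5 blocks are 4x4... */
   const unsigned long long block_bytes = 16;   /* ...and 16 bytes each   */

   /* 4096 / 4 blocks per row * 16 bytes = 16384: the pitch quoted above. */
   unsigned long long pitch = (w / bw) * block_bytes;

   /* Tightly packed, one face of base-level DXT5 data is only
    * 1024 block rows * 16384 bytes = 16 MiB. */
   unsigned long long packed = (h / bh) * pitch;

   /* A blit that treats the face as 4096x4096 "pixels" of cpp = 16 bytes
    * each at that pitch expects data out to
    *   (h - 1) * pitch + w * cpp
    * which is roughly 67 MB -- about four times the compressed data that
    * is actually there -- so the intelEmitCopyBlit bounds assertion fires
    * (or, without asserts, the copy runs wildly out of bounds). */
   unsigned long long cpp = block_bytes;
   unsigned long long naive_end = (h - 1) * pitch + w * cpp;

   printf("pitch = %llu, packed face = %llu, naive blit bound = %llu\n",
          pitch, packed, naive_end);
   return 0;
}
```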
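And a rough sketch of what option 3 amounts to, using a hypothetical helper rather than the actual patches on the list: before emitting the copy, rescale the region from texels to compressed blocks so the blitter sees the layout that really exists.

```c
/* Hypothetical helper (illustrative only, not Mesa's actual fix): convert a
 * texel-space blit region into block space for a compressed format. */
#include <stdint.h>

struct blit_region {
   uint32_t x, y;   /* origin in texels */
   uint32_t w, h;   /* size in texels   */
};

static void
blit_region_to_blocks(struct blit_region *r, uint32_t bw, uint32_t bh)
{
   /* For DXT5 (bw = bh = 4, 16 bytes per block), a 4096x4096 texel region
    * becomes 1024x1024 blocks, so with cpp = 16 one row is
    * 1024 * 16 = 16384 bytes -- matching the surface pitch instead of
    * overrunning it. */
   r->x /= bw;
   r->y /= bh;
   r->w = (r->w + bw - 1) / bw;   /* round partial blocks up */
   r->h = (r->h + bh - 1) / bh;
}
```

Whatever form the real fix takes, the key point is that width, height, offsets, and pitch must all be expressed in blocks rather than texels for compressed formats.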
Created attachment 126014 [details]: trace that shows the issue

The attached trace shows corruption on Iris Pro 5200 in the cube face mipmaps uploaded with glCompressedTexImage2D, but not in the cube face mipmaps uploaded using glTexImage2D. Parts of the system UI are also corrupted randomly when replaying. The corruption isn't seen on an R9 Nano.

Both textures should be almost identical; the draw at call 1389 takes their difference. The expected output is a grey screen. That is what the R9 Nano produces, but on the Iris Pro 5200 there is a clear difference between the two textures.

Fails on 11.2 and on git 4f3f8bb59dd98e39c363fe47a55a7f97e7df9f4b.

Via imirkin_ in #dri-devel: triggers an assertion in intel_blit.c:589: intelEmitCopyBlit: Assertion `src_offset + (src_y + h - 1) * abs(src_pitch) + (w * cpp) <= src_buffer->size' failed.
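For readers without the trace handy, an illustrative upload pattern along these lines (this is not the attached trace; the data pointers, sizes, and the second texture's internal format are assumptions):

```c
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

/* Illustrative only: one cube map uploaded pre-compressed with
 * glCompressedTexImage2D, a second uploaded as raw RGBA with glTexImage2D.
 * Data pointers and sizes are placeholders. */
static void
upload_cubemaps(GLuint tex_compressed, GLuint tex_plain,
                GLsizei dim, GLint levels,
                const void *const *dxt5_data,      /* [face * levels + level] */
                const GLsizei *dxt5_sizes,
                const void *const *rgba_data)
{
   for (int face = 0; face < 6; face++) {
      GLenum target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + face;
      GLsizei size = dim;

      for (GLint level = 0; level < levels; level++) {
         int i = face * levels + level;

         /* Path that showed corruption on the Iris Pro 5200. */
         glBindTexture(GL_TEXTURE_CUBE_MAP, tex_compressed);
         glCompressedTexImage2D(target, level,
                                GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                                size, size, 0,
                                dxt5_sizes[i], dxt5_data[i]);

         /* Path that produced the expected result. */
         glBindTexture(GL_TEXTURE_CUBE_MAP, tex_plain);
         glTexImage2D(target, level, GL_RGBA8, size, size, 0,
                      GL_RGBA, GL_UNSIGNED_BYTE, rgba_data[i]);

         size = size > 1 ? size / 2 : 1;
      }
   }
}
```

The report's point is that the two textures' contents should be nearly identical, so a draw that takes their difference (call 1389 in the trace) should come out grey.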