glDrawArrays can be used to render primitives stored inside a VBO. It takes three parameters: "mode", "first" and "count". It worked correctly with the free ATI Radeon driver until a recent switch to Mesa 7.9. Now, when rendering a large 3D model (count > 31658) with an offset (first > 0) from a VBO, the program is terminated with the following output on stderr:
"drmRadeonCmdBuffer: -22. Kernel failed to parse or rejected command stream. See dmesg for more info."
[drm:radeon_cs_parser_init] *ERROR* cs IB too big: 16386
[drm:r600_cs_legacy] *ERROR* Failed to initialize parser !
This is followed by several lines reading "name:???? freeing invalid memtype ??????-????????" for the application ("freeing invalid memtype" is printed by earlier versions too, so it is probably unrelated).
My graphics card is an HD 4670 AGP and the CPU is i686. So far I have seen the problem on both Arch Linux (using the official repos) and Debian GNU/Linux (after installing libgl1-mesa-dri from experimental), and I can reproduce the bug reliably by running any application that uses a VBO and glDrawArrays to render a large enough model (with an offset).
However, it does not occur when glDrawArrays is used to render array data sent directly from the client (from RAM instead of being uploaded into a VBO in video memory). It also does not occur with the Intel drivers, indicating that the bug is probably in the Radeon drivers (though it caused no problems before the switch to Mesa 7.8).
It does occur with the Mesa and Radeon drivers from Git (as of 2010/12/25), and it does not matter whether KMS (DRI2) is enabled or not. Other ways of rendering primitives from a VBO with an offset may trigger this problem too, although I have not experimented with them.
Note that the problem does not occur when the "first" parameter of glDrawArrays is set to 0, or when "count" is 31658 or lower.
The easiest way of reproducing the bug is to take any simple VBO example/demo (one that renders enough primitives) and modify it so that it skips the first few primitives (such as triangles). For example:
The problem appears if line 269 is changed to:
"glDrawArrays( GL_TRIANGLES, 3, g_pMesh->m_nVertexCount-3);"
(which will render all triangles except for the first one)
Again, note that the problem does not occur when "first" is left at 0 (as in the original example code) or when "count" is reduced to 31658 or lower. Also, uncommenting line 36 ("#define NO_VBOS") makes the program use glDrawArrays to send the data directly from client memory (instead of using a VBO), which also works as it should.
Created attachment 41554 [details]
NeHe lesson 45 source (SDL port), modified to demonstrate bug
Created attachment 42437 [details] [review]
try to use base_vtx as index offset for auto_index
Can you check whether the attached patch works?
(In reply to comment #2)
> Created an attachment (id=42437) [details]
> try to use base_vtx as index offset for auto_index
> Can you try if the attached patch works
Perfect, this solves the bug!
Since the patch keeps the instruction buffer at the same reliable size, it may fix similar bugs as well. (It also fixes some rendering artefacts I had noticed but not mentioned.)
The patch that fixes the problem has not yet been applied to the Mesa tree.
Created attachment 48270 [details]
(no) changes between patched and unpatched mesa reported by piglit
No changes/regressions were detected by piglit when running "r600.tests" and comparing the results between patched and unpatched Mesa (from Git).
Both Mesa builds were compiled to use "mesa classic" instead of Gallium (the patch only affects mesa classic, not Gallium).
If anyone could merge the patch I would appreciate it.
The patch was merged by Andre Maasikas.