When using Mesa 10.3.0 and 10.3.1 I get an assertion failure at 'src/gallium/drivers/llvmpipe/lp_setup_tri.c:495'. I am rendering a sphere in software mode (great for debugging, with MESA_DEBUG=1) using Gallium 0.4 on llvmpipe (LLVM 3.4) and a 3.3 Core Profile context. The following is the code I am using to generate and render my sphere:

// m_numLon = 36;
// m_numLat = 18;
// m_radius = 0.5;

const unsigned int RESTART_INDEX = 0xFFFF;
glPrimitiveRestartIndex( RESTART_INDEX );

float deltaLon = TWOPI / m_numLon;
float deltaLat = PI / m_numLat;
float lonTexCoord, latTexCoord;
float r, x, y, z;
GlVec3 vertex, normal;

// The North Pole
vertex = GlVec3( 0.0, 0.0, m_radius );
normal = vertex.normalized();
lonTexCoord = 0.0;
latTexCoord = 0.0;
m_vertices << vertex << normal << GlVec2( lonTexCoord, latTexCoord );
numVerts++;

// The sphere is composed of numLon triangle strips
for ( unsigned int lonNum = 0; lonNum < m_numLon; lonNum++ )
{
    for ( unsigned int latNum = 1; latNum < m_numLat; latNum++ )
    {
        lonTexCoord = ((float) lonNum) / ((float) m_numLon);
        latTexCoord = ((float) latNum) / ((float) m_numLat);
        r = m_radius * ::sin( latNum * deltaLat );
        z = m_radius * ::cos( latNum * deltaLat );
        y = r * ::sin( lonNum * deltaLon );
        x = r * ::cos( lonNum * deltaLon );
        vertex = GlVec3( x, y, z );
        normal = vertex.normalized();
        m_vertices << vertex << normal << GlVec2( lonTexCoord, latTexCoord );
        numVerts++;
    }
}

// The South Pole
vertex = GlVec3( 0.0, 0.0, -m_radius );
normal = vertex.normalized();
lonTexCoord = 1.0;
latTexCoord = 1.0;
m_vertices << vertex << normal << GlVec2( lonTexCoord, latTexCoord );
numVerts++;

// generate the indices
for ( unsigned int lonNum = 0; lonNum < m_numLon; lonNum++ )
{
    // index to the north pole
    m_indices << 0;
    for ( unsigned int latNum = 1; latNum < m_numLat; latNum++ )
    {
        m_indices << latNum + lonNum*(m_numLat-1);
        m_indices << latNum + ((lonNum+1) % m_numLon)*(m_numLat-1);
    }
    // index to the south pole
    m_indices << numVerts - 1;
    // signal the renderer that we are done with this primitive and it
    // needs to start a new triangle strip.
    m_indices << RESTART_INDEX;
}

// load vertex data
vbo.bind();
glBufferData( GL_ARRAY_BUFFER, m_vertices.count()*sizeof(float), m_vertices.data(), GL_STATIC_DRAW );

// load index data
ibo.bind();
glBufferData( GL_ELEMENT_ARRAY_BUFFER, m_indices.count()*sizeof(unsigned int), m_indices.data(), GL_STATIC_DRAW );

// render the data ...
vao.bind();
vbo.bind();
int offset = 0;
glVertexAttribPointer( vLoc, 3, GL_FLOAT, GL_TRUE, 8*sizeof(float), reinterpret_cast<const void*>(offset) );
glEnableVertexAttribArray( vLoc );
glDrawElements( GL_TRIANGLE_STRIP, m_indices.count(), GL_UNSIGNED_INT, 0 );

Using the same vertices and a different index ordering I can draw with GL_LINE_STRIP and see that the generated vertices are indeed correct. Printing out the indices shows that they are also correct. If I revert to an older Mesa (such as 10.2.8) then I no longer get an assertion failure; instead, every other triangle strip is culled as though it were facing the opposite direction (if I turn off culling, the entire sphere is rendered).
Do you have a full test program, or can you supply an apitrace capture? I suspect there's something wrong with restart index handling, though it generally seems to work otherwise. (As a side note, I'd advise against using 0xFFFF as a restart index with uint indices, since there's hw out there which cannot deal with anything other than ~0, but it shouldn't matter for llvmpipe.)
Thanks for the tip about '~0', I was unaware of that. I only used '0xffff' because that was what was used in the Red Book examples. I changed my code to use '~0' and now the latest version of the Mesa drivers (10.3.1) matches the previous versions of Mesa. I am still seeing alternate triangle strips with their front-facing definitions reversed. I will work on getting a free-standing compilable code example as soon as I can. In the meantime I am attaching a screenshot and an apitrace.
Created attachment 108520 [details] apitrace of rendered sphere
Created attachment 108521 [details] Screenshot of rendered sphere
(In reply to James Evans from comment #2)
> Thanks for the tip about '~0', I was unaware of that. I only used '0xffff'
> because that was what was used in the Red Book examples.

I suspect they were using ushort indices (or they just didn't care, since it should still work correctly; the driver might just have to bend over backwards if the hw doesn't support an arbitrary restart index).

> I changed my code to use '~0' and now the latest version of the Mesa drivers
> (10.3.1) matches the previous versions of Mesa.
>
> I am still seeing alternate triangle strips with their front-facing
> definitions reversed.

Hmm, ok, that would actually point to two separate bugs then...

> I will work on getting a free-standing compilable code example as soon as I
> can. In the meantime I am attaching a screenshot and an apitrace.

I'll give the apitrace a look.
The trace crashes the NVIDIA GL driver. This is because you never call glEnable(GL_PRIMITIVE_RESTART), so the driver interprets the index 0xffffffff literally, causing a buffer overflow. I also had to manually edit the trace and remove a trailing \\ from one of the shaders, as the NVIDIA GLSL compiler fails otherwise. AFAICS, you're getting undefined behavior because the GL state is not being set up properly.
I am so sorry for bugging you with this. You are absolutely correct; I completely missed the glEnable call. Adding it fixed everything. Thank you.