In framebuffer.c, line 66, the _DepthMaxF member of the frame buffer is computed for a 32-bit depth buffer by converting the integer 0xffffffff to float. The correct value is 4294967295.0, but with MSVC (version 6) float storage it is rounded up to 4294967296.0 (displayed as 4.29497e9), since 0xffffffff is not exactly representable in a single-precision float.

Subsequently, when the depth buffer is cleared in s_depth.c, line 1282, _DepthMaxF is read, multiplied by ctx->Depth.Clear (which is 1.0 in my case) and converted back to an unsigned integer. Because the rounded float value lies just above the largest representable GLuint, the conversion overflows and the result is 0x00000000. The buffer is therefore cleared to zero, all drawing involving depth testing fails the depth test, and no rendering takes place. The root cause is the rounding that occurred in the original setup of the _DepthMaxF member.

I verified this by expanding line 1282 of s_depth.c as follows:

    GLuint tmpClearValue1 = ((GLuint) ctx->Depth.Clear) * ctx->DrawBuffer->_DepthMax;
    GLuint tmpClearValue2 = (GLuint) (ctx->Depth.Clear * ctx->DrawBuffer->_DepthMaxF);
    const GLuint clearValue = tmpClearValue2 == 0 && tmpClearValue1 > 0
                              ? tmpClearValue1 : tmpClearValue2;

This detects the overflow situation and supplies the correct clear value of 0xffffffff in that case. Rendering then takes place correctly.
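For reference, here is a small standalone C program (not Mesa code) that reproduces the float round-trip described above. The exact result of the out-of-range conversion back to unsigned int is compiler-dependent; 0x00000000 is what was observed on MSVC 6.

    #include <stdio.h>

    int main(void)
    {
        /* 0xffffffff has no exact float representation; the nearest
         * float is 4294967296.0, i.e. it rounds up past UINT_MAX. */
        unsigned int depthMax  = 0xffffffffu;
        float        depthMaxF = (float) depthMax;
        float        clear     = 1.0f;   /* stands in for ctx->Depth.Clear */

        printf("depthMaxF = %f\n", depthMaxF);

        /* Converting a value > UINT_MAX back to unsigned int is not
         * defined by the C standard; MSVC 6 produced 0 here, which is
         * the broken clear value reported above. */
        printf("(unsigned)(clear * depthMaxF) = 0x%08x\n",
               (unsigned int) (clear * depthMaxF));
        return 0;
    }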
Since ctx->Depth.Clear is almost always one, it's better to check for that value and use the integer depthMax value as-is. Use the float value otherwise. I've checked in this fix.
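A minimal sketch of that approach, using stand-in variables rather than the real ctx/DrawBuffer structures (the committed Mesa patch may differ in detail):

    #include <stdio.h>

    typedef unsigned int GLuint;
    typedef float        GLfloat;

    static GLuint compute_clear_value(GLfloat depthClear,
                                      GLuint depthMax, GLfloat depthMaxF)
    {
        /* Depth.Clear is almost always exactly 1.0; in that common case
         * use the integer _DepthMax directly and skip the float
         * round-trip that caused the overflow. */
        if (depthClear == 1.0f)
            return depthMax;
        return (GLuint) (depthClear * depthMaxF);
    }

    int main(void)
    {
        GLuint  depthMax  = 0xffffffffu;
        GLfloat depthMaxF = (GLfloat) depthMax;

        /* Prints 0xffffffff, the correct 32-bit clear value. */
        printf("clear value = 0x%08x\n",
               compute_clear_value(1.0f, depthMax, depthMaxF));
        return 0;
    }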
Mass version move, cvs -> git