Summary: | GLSL sine and cosine of argument larger than 4096*pi give wrong results | |
---|---|---|---
Product: | Mesa | Reporter: | Ruslan Kabatsayev <b7.10110111>
Component: | Drivers/DRI/i965 | Assignee: | Ian Romanick <idr>
Status: | RESOLVED DUPLICATE | QA Contact: | Intel 3D Bugs Mailing List <intel-3d-bugs>
Severity: | normal | |
Priority: | medium | |
Version: | 10.5 | |
Hardware: | Other | |
OS: | All | |
Whiteboard: | | |
i915 platform: | | i915 features: |
Attachments: | Plot of per-channel values in a scanline of the resulting image | |
Comment #1 (Ruslan Kabatsayev):

Tested on Windows on a similar HD Graphics 4600 device, and there the result is correct, as it is with the software renderer on Linux. So it seems it's not the chip that behaves incorrectly, but the driver/Mesa.

Comment #2 (Matt Turner):

(In reply to Ruslan Kabatsayev from comment #1)
> Tested on Windows on a similar HD Graphics 4600 device, and there the result
> is correct, as it is with the software renderer on Linux. So it seems it's not
> the chip that behaves incorrectly, but the driver/Mesa.

We just use the sin/cos instructions directly. I believe the instructions lose accuracy at large values, and that the Windows driver emits some instructions to do range reduction itself.

Is this just a corner case you noticed, or something that you really want to work?

Comment #3 (Ruslan Kabatsayev):

(In reply to Matt Turner from comment #2)
> We just use the sin/cos instructions directly.

Hm, indeed: I just tested with an equivalent ARBfp shader, with the same results.

> Is this just a corner case you noticed, or something that you really want to work?

I was using GLSL to compute the scattering of a 2D wave and render its density plot. While downscaling the image I noticed that the results start looking strange beyond some distance from the scatterer, and this only reproduced on Intel. Only then did I track it down to the sin/cos implementation. So yes, I'd like this to work; it was not just some test where I happened to notice the problem.

On the other hand, the GLSL spec (I looked at 1.20) doesn't say anything (AFAIK) about accuracy requirements for the built-in functions, nor does it mention the range over which the results must be usable, so I'm not sure what's best for Mesa. If you feel a fix would make the GLSL implementation noticeably slower in general, maybe a GLSL pragma like "mesa_make_functions_correct" to enable a Windows-like fix would be a good solution. Or it might be some special setting in ~/.drirc.

Comment #4:

This is a duplicate of bug #89634. I'm going to post some suggestions there.

*** This bug has been marked as a duplicate of bug 89634 ***
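For illustration only: the range reduction Matt mentions in comment #2 can be expressed at the GLSL source level roughly as below. This is just a sketch of the idea, not what Mesa or the Windows driver actually emits (the function name sinReduced is made up), and it is written against GLSL 1.20, which has no round(), hence the floor(... + 0.5) form.

```glsl
// Sketch only: bring the argument into roughly [-pi, pi] before the
// hardware SIN sees it, instead of passing the raw large value through.
float sinReduced(float x)
{
    const float TWO_PI = 6.283185307;
    float k = floor(x / TWO_PI + 0.5); // nearest whole number of periods
    return sin(x - k * TWO_PI);        // reduced argument stays near [-pi, pi]
}
```

A driver-side fix would presumably do something equivalent on the compiled instructions (an extra multiply, round and subtract before each SIN/COS), which is the speed trade-off discussed in comment #3.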
Description (Ruslan Kabatsayev):

Created attachment 116915 [details]
Plot of per-channel values in a scanline of the resulting image

The following fragment shader gives me wrong results for the red and blue channels, while the green channel can be taken as a reference:

```glsl
#version 120

float SinGood(float x)
{
    if (x > 12866) x = mod(x, 6.283185307);
    return sin(x);
}

void main()
{
    const float pi = 3.14159265;
    gl_FragColor = vec4(0.5 + 0.5 * sin(3.14159265/256*gl_FragCoord.x + 4096*pi),
                        0.5 + 0.5 * SinGood(3.14159265/256*gl_FragCoord.x + 4096*pi),
                        0.5 + 0.5 * sin(3.14159265/256*gl_FragCoord.x + 2*4096*pi),
                        1);
}
```

I've tested on an Intel Corporation Xeon E3-1200 v3 Processor Integrated Graphics Controller (rev 06), as well as on HD Graphics 4600, on Kubuntu 14.04 and LFS. If I set LIBGL_ALWAYS_SOFTWARE=1, I get identical results in all three channels (visually in the resulting image and on the channel plot), i.e. the problem only appears when HW acceleration is enabled.
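A side note on the SinGood() workaround above, in case someone copies it for other uses: a single mod(x, 6.283185307) already gives up a few low bits for large x, because the 2*pi constant is rounded to float and that rounding error is multiplied by the number of removed periods (about 2048 here). A slightly more careful variant, in the spirit of the classic Cody-Waite reduction, splits 2*pi into an exactly representable coarse part plus a small correction. Untested sketch, names made up:

```glsl
// Untested sketch of a two-constant (Cody-Waite style) reduction.
// TWO_PI_HI needs only a few mantissa bits, so k*TWO_PI_HI is exact for the
// k values involved here (~2048); TWO_PI_LO carries the rest of 2*pi.
float sinReduced2(float x)
{
    const float TWO_PI_HI = 6.28125;       // 201/32, exactly representable
    const float TWO_PI_LO = 1.9353072e-3;  // approx. 2*pi - TWO_PI_HI
    float k = floor(x / 6.283185307);      // whole periods to remove
    float r = (x - k * TWO_PI_HI) - k * TWO_PI_LO;
    return sin(r);
}
```

For the arguments used in this report (around 4096*pi) the plain mod() version already appears accurate enough to serve as the reference, so this only starts to matter if the constant offsets grow much larger.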