Created attachment 116915 [details]
Plot of per-channel values in a scanline of the resulting image
The following fragment shader gives me wrong results for the red and blue channels, while the green channel can be taken as a reference:
const float pi = 3.14159265;
// Reference version: reduce the argument into [0, 2*pi)
// before calling sin(), so large inputs don't lose accuracy.
float SinGood(float x)
{
    return sin(mod(x, 2.0*pi));
}
I've tested on Intel Corporation Xeon E3-1200 v3 Processor Integrated Graphics Controller (rev 06), as well as on HD Graphics 4600 on Kubuntu 14.04 and LFS.
If I set LIBGL_ALWAYS_SOFTWARE=1, then I get all three channels with identical results (visually in the resulting image and on the channels plot), i.e. the problem only appears when HW acceleration is enabled.
Tested in Windows on a similar HD Graphics 4600 device, and there the result is correct, matching the software renderer on Linux. So it seems it's not the chip that works incorrectly, but the driver/Mesa.
(In reply to Ruslan Kabatsayev from comment #1)
> Tested in Windows on a similar HD Graphics 4600 device, and there the result
> is correct, as with software renderer on Linux. So seems it's not the chip
> which doesn't work correctly, but the driver/mesa.
We just use the sin/cos instructions directly. I believe the instructions lose accuracy at large values, and that the Windows driver emits some instructions to do range reduction itself.
Is this just a corner case you noticed, or something that you really want to work?
(In reply to Matt Turner from comment #2)
> We just use the sin/cos instructions directly.
Hm, indeed, just tested with an equivalent ARBfp shader, with the same results.
> Is this just a corner case you noticed, or something that you really want to
> work?
I was using GLSL to compute scattering of a 2D wave and render its density plot. While downscaling the image I noticed that the results start looking strange beyond some distance from the scatterer, and this only reproduced on Intel. Only then did I track it down to the sin/cos implementation.
So I'd like this to work indeed, it was not just some test where I noticed the problem. But OTOH, the GLSL spec (I looked at 1.20) doesn't say (AFAIK) anything about accuracy requirements for built-in functions, nor does it mention range for which the results must be usable. So I'm not sure what's best for Mesa. If you feel this would make the GLSL implementation noticeably slower in general, maybe a GLSL pragma like "mesa_make_functions_correct" to enable fix like in Windows would be a good solution. Or it might be some special setting in ~/.drirc.
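For reference, a per-user override via ~/.drirc might look something like this (the option name `precise_trig` is purely hypothetical here; only the general driconf structure is taken as given):

```
<driconf>
    <device driver="i965">
        <application name="Default">
            <option name="precise_trig" value="true" />
        </application>
    </device>
</driconf>
```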
This is a duplicate of bug #89634. I'm going to post some suggestions there.
*** This bug has been marked as a duplicate of bug 89634 ***