src/gallium/drivers/llvmpipe/lp_test_main.c:random_elem() seems to have the logic reversed in its handling of unsigned values:
if(rand() & 1)
value = -value;
This code only generates negative numbers for *unsigned* values.
write_elem() then has code to fix this up:
if(!type.sign && value < 0.0)
value = 0.0;
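For reference, a minimal sketch of what the intended logic presumably looks like (the `sign` flag mirrors the type descriptor in lp_test_main.c, but the surrounding harness here is simplified and hypothetical): the sign should only be flipped for signed types, i.e. the condition should test `type.sign`, not `!type.sign`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for the real type descriptor: only the sign
 * flag matters for this bug. */
struct elem_type {
   bool sign;
};

/* Flip the sign of roughly half the generated values, but only for
 * signed types.  The buggy code tested the inverse condition, which
 * negated values for *unsigned* types instead. */
static double
maybe_flip_sign(struct elem_type type, double value)
{
   if (type.sign && (rand() & 1))
      value = -value;
   return value;
}
```

With this, write_elem()'s clamp to 0.0 for unsigned types becomes a no-op instead of zeroing half the test data.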
In practice, the unit tests only cover half the possible range of unsigned values, and half of the elements they generate come out negative and get clamped to 0.
I think for a "normal" unsigned type it should still cover the whole range, because of the "value += (double)(mask & rand());" line above; so the numbers covered the whole uint range before.
That sign-flip line is indeed weird, though, and you're quite right that it makes all the numbers negative just to get them clamped to zero later.
And float range is only from 0 to 2.0f?
Honestly I don't quite understand what the code is trying to do.
CC Jose as he wrote it.
It's been a long time, but I think Tom's right. This was a thinko, and my intent was the opposite.
> And float range is only from 0 to 2.0f?
That's correct. That's sufficient to exercise the clamping.
Remember that if we generated a range between 0 and a VERY_LARGE_NUMBER, then only 1/VERY_LARGE_NUMBER of the cases would actually be in the most interesting range of 0..1
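A tiny illustration of that point (a sketch, not the actual test harness): with inputs drawn from [0, 2], roughly half of the samples exceed 1.0 and exercise the clamp, while the other half land in the interesting 0..1 range.

```c
#include <assert.h>

/* Clamp a float to the normalized [0, 1] range, as happens when the
 * value is converted to a unorm destination type. */
static float
clamp01(float v)
{
   if (v < 0.0f)
      return 0.0f;
   if (v > 1.0f)
      return 1.0f;
   return v;
}
```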
(In reply to Jose Fonseca from comment #2)
> It's been a long time, but I think Tom's right. This was a thinko, and my
> intent was the opposite.
Ok, that makes sense. Looking at this again, though: even with the code only reversing the sign for signed types, we'd still not test the full unsigned range for uint32 types (smaller types are fine).
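The reason is that on typical platforms RAND_MAX is 2^31 - 1, so `mask & rand()` alone can never set the top bit of a uint32 value. One common workaround (a sketch of a possible fix, not the actual patch) splices bits from two rand() calls:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Build a 32-bit value from the low 16 bits of two random draws,
 * so that all 32 bits can be exercised even when RAND_MAX is only
 * 2^31 - 1. */
static uint32_t
splice_u32(unsigned r0, unsigned r1)
{
   return ((uint32_t)(r1 & 0xffff) << 16) | (uint32_t)(r0 & 0xffff);
}

static uint32_t
random_u32(void)
{
   return splice_u32((unsigned)rand(), (unsigned)rand());
}
```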
> > And float range is only from 0 to 2.0f?
> That's correct. That's sufficient to exercise the clamping.
> Remember that if we generated a range between 0 and a VERY_LARGE_NUMBER,
> then only 1/VERY_LARGE_NUMBER of the cases would actually be in the most
> interesting range of 0..1
Ok, I think the problem is that we don't know what we're going to convert to. If we actually want to test float->int then we'd definitely want very large numbers (in fact we'd want numbers which overflow the target type too); values between 0..2 aren't particularly interesting in that case. But for converting to normalized types this is right.
This code of course also covers cases which are not really interesting and probably not hit in practice, like unorm->unsigned (everything is 0 or 1 after conversion).
Created attachment 139140 [details] [review]
patch to fix bogus random numbers
Ok this patch should fix it.
Non-normalized signed ints had completely bogus ref values, which I fixed as well. (As a side note, for things like si32x4 -> si16x8 or the unsigned equivalent, the probability that we hit something which _isn't_ clamped is very low - for si32x4 -> si8x16 it's not even in part-per-million range.)
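To sanity-check that parts-per-million claim (a back-of-the-envelope calculation, not code from the patch): a uniformly random int32 survives conversion to int8 without clamping only if it already lies in [-128, 127].

```c
#include <assert.h>

/* Fraction of uniformly random int32 values that fit in the int8
 * range [-128, 127] and hence are not clamped on conversion. */
static double
unclamped_fraction(void)
{
   const double in_range = 256.0;         /* values in [-128, 127]      */
   const double total    = 4294967296.0;  /* 2^32 possible int32 values */
   return in_range / total;               /* about 5.96e-8              */
}
```

Since 5.96e-8 is well below 1e-6, essentially every generated element hits the clamp path.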
But we can't apply it until we fix the remaining failures. They don't actually look too bad: all of them involve signed normalized numbers and look like rounding issues, where we expect a max error of 1 but can get an error of 2. It might be due to the snorm conversion formulas we (have to) use; indeed it looks like we only get these errors for negative numbers below -0.5 or so, but I'm not entirely sure what happens.
While it's not perfect, fixed by cf3fb42fb5eb6130693a4be0a7b5ea06b184ce2d.