Summary: | r300g: GL_COMPRESSED_RED_RGTC1 / ATI1N support broken | |
---|---|---|---
Product: | Mesa | Reporter: | Stefan Dösinger <stefandoesinger>
Component: | Drivers/Gallium/r300 | Assignee: | Default DRI bug account <dri-devel>
Status: | RESOLVED FIXED | QA Contact: | Default DRI bug account <dri-devel>
Severity: | normal | |
Priority: | medium | |
Version: | git | |
Hardware: | Other | |
OS: | All | |
Attachments: | Screenshot, Shader used to read the texture, Precision fix | |
Description (Stefan Dösinger, 2015-02-15 16:28:56 UTC)
Created attachment 113786 [details]
Screenshot

ATI1N seems to return random garbage here. Attached is a screenshot from our 3DC tests. The right half is ATI2N, which works OK. The left half is ATI1N, and it is filled with random garbage. The expected result is a solid color on the left half, with R = 0x7f (plus or minus 1). We're not too particular about the result of G and B on Windows; on Wine we set G = R and B = R.
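For reference, a minimal sketch (not the actual Wine/3DC test code; the function name and setup are illustrative) of how a single 4x4 ATI1N/RGTC1 block like the one quoted later in this report can be uploaded through GL, assuming a context and headers that expose ARB_texture_compression_rgtc:

#include <GL/gl.h>
#include <GL/glext.h>  /* GL_COMPRESSED_RED_RGTC1; assumes prototypes are available */

/* One 4x4 RGTC1/ATI1N block: red0 = red1 = 0x7f, all sixteen 3-bit codes 0,
 * so every texel is expected to sample as R = 0x7f. */
static const unsigned char ati1n_block[8] = {
    0x7f, 0x7f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};

static GLuint upload_ati1n_4x4(void)
{
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* A single 4x4 RGTC1 block is exactly 8 bytes. */
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RED_RGTC1,
                           4, 4, 0, sizeof(ati1n_block), ati1n_block);
    return tex;
}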
The texture filtering seems to operate at lower precision. Is that what you're seeing?

I've just tested R580 and RGTC1 works very well according to piglit. What happens if you use the default swizzle?

Created attachment 113841 [details]
Shader used to read the texture

Indeed, changing the swizzle fixes the random output. I have attached the shader we use to sample the texture. We don't have a swizzle on the texture2D statement, but we do swizzle the output variable, and apparently the optimizer merges that.
If I use the RGBA values returned by the texture sampling directly I get a solid color as expected. However, the value is off quite a bit: Instead of 0x7f I get 0x6c.
The texture data we use is this:
static const char ati1n_data[] =
{
    /* A 4x4 texture with the color component at 50%. */
    0x7f, 0x7f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
};
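For reference, a sketch of unsigned RGTC1/BC4 decoding following EXT_texture_compression_rgtc (an integer approximation, not driver code): for the block above, red0 = red1 = 0x7f and all codes are 0, so every texel decodes to red0 = 0x7f.

#include <stdint.h>
#include <stdio.h>

/* Build the 8-entry red palette for one unsigned RGTC1/BC4 block. */
static void rgtc1_palette(uint8_t red0, uint8_t red1, uint8_t pal[8])
{
    int i;

    pal[0] = red0;
    pal[1] = red1;
    if (red0 > red1) {
        for (i = 2; i < 8; i++)          /* six interpolated values */
            pal[i] = ((8 - i) * red0 + (i - 1) * red1) / 7;
    } else {
        for (i = 2; i < 6; i++)          /* four interpolated values */
            pal[i] = ((6 - i) * red0 + (i - 1) * red1) / 5;
        pal[6] = 0x00;                   /* MINRED */
        pal[7] = 0xff;                   /* MAXRED */
    }
}

/* Decode the 16 texels of an 8-byte block; codes are packed LSB-first
 * in bytes 2..7, 3 bits per texel. */
static void rgtc1_decode_block(const uint8_t block[8], uint8_t out[16])
{
    uint8_t pal[8];
    uint64_t codes = 0;
    int i;

    rgtc1_palette(block[0], block[1], pal);
    for (i = 0; i < 6; i++)
        codes |= (uint64_t)block[2 + i] << (8 * i);
    for (i = 0; i < 16; i++)
        out[i] = pal[(codes >> (3 * i)) & 0x7];
}

int main(void)
{
    /* The block from this report: red0 = red1 = 0x7f, all codes 0. */
    static const uint8_t ati1n_data[8] = { 0x7f, 0x7f, 0, 0, 0, 0, 0, 0 };
    uint8_t out[16];

    rgtc1_decode_block(ati1n_data, out);
    printf("texel 0 decodes to 0x%02x\n", out[0]);   /* expected: 0x7f */
    return 0;
}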
(In reply to Stefan Dösinger from comment #4)
> Created attachment 113841 [details]
> Shader used to read the texture
>
> Indeed changing the swizzle fixes the random output. I have attached the
> shader we use to sample the texture. We don't have a swizzle on the
> texture2D statement, but we do swizzle the output variable, and apparently
> the optimizer merges that.

So it's a compiler bug.

> If I use the RGBA values returned by the texture sampling directly I get a
> solid color as expected. However, the value is off quite a bit: Instead of
> 0x7f I get 0x6c.
>
> The texture data we use is this:
>
> static const char ati1n_data[] =
> {
>     /* A 4x4 texture with the color component at 50%. */
>     0x7f, 0x7f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> };

That's expected. ATI1N operates at lower precision. I'm afraid I can't change that.

(In reply to Marek Olšák from comment #5)
> So it's a compiler bug.

In which sense? Is there something in the spec that tells me I should expect garbage when I use texture2D().xxxx? Or is this something the driver tells the compiler, and the compiler is supposed to generate a swizzle-free texture2D statement?

> > static const char ati1n_data[] =
> > {
> >     /* A 4x4 texture with the color component at 50%. */
> >     0x7f, 0x7f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> > };
>
> That's expected. ATI1N operates at lower precision. I'm afraid I can't
> change that.

Windows gives me a proper value (0x7f, which is pretty close to 0x80). Do you have any idea how it does that? I can try to find a scheme behind the imprecision if it helps you. There may be something like an off-by-one error that the driver can account for.

A compiler bug is a compiler bug. Sorry, I don't know how to make that statement clearer.

If you run the rgtc-teximage-01 piglit test, you'll see that it looks like the red channel only has 4 bits.

I ran some more tests; it seems that the format is operating at 3 bits of precision. I can produce 8 different output colors. Otherwise it seems to follow the spec, so I don't think we're accidentally feeding the data into an R3G3B2 texture. On Windows the format operates at the expected precision: I can get any output value from 0x00 to 0xff.

I skimmed the GPU docs for clues about what may cause this behavior, but could not find anything. The things I checked were enabling/disabling filtering, making sure texture address handling follows the conditional NP2 texture rules, and disabling alpha blending. For the sake of testing I also tried disabling FBOs and all our sRGB code.

I'm also quite sure that all 8 bits of the red0 and red1 inputs arrive on the GPU. I tested that by setting the code of each texel to 7 and then testing red0=1, red1=0 and red0=0, red1=1. In the former case this gives the result 0 (interpolation between red0 and red1); in the latter case it gives 0xfc (MAXRED). The same works for the input values 0x80 and 0x7f.

I also tested interpolation codes (e.g. red0=0x2, red1=0xa2, code 2 for each texel, then try to reduce red0 or red1 by 1), and it seems that the input into the interpolation is OK, but either the interpolation happens at a lower precision or the output is clamped afterwards.

The reason why I am suspicious about the 3-bit precision and ATI1N is that according to the GPU register docs, TX_FMT_ATI1N is separated from TX_FMT_3_3_2 only by TX_FORMAT2.TXFORMAT_MSB. Is it possible that some code doesn't look at TXFORMAT_MSB, thinks it sees TX_FMT_3_3_2, and sets up some other part of the GPU to expect 3 bits of red data? I'm skimming the code for something like this; so far I haven't found anything.
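For reference, the code-7 experiment described above can be replayed against the decoder sketched earlier (rgtc1_decode_block is the illustrative helper from that sketch, not driver code): red0=1/red1=0 selects the interpolated endpoint and gives 0x00, while red0=0/red1=1 selects MAXRED, which a correct implementation returns as 0xff; the report sees 0xfc on r300g before the fix.

/* Sketch only; relies on rgtc1_decode_block() from the earlier sketch. */
static void code7_experiment(void)
{
    /* 16 texels, each with 3-bit code 7: the six code bytes are 0xff. */
    uint8_t block[8] = { 0x01, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
    uint8_t out[16];

    rgtc1_decode_block(block, out);      /* red0 > red1: code 7 -> ~red1 -> 0x00 */
    printf("red0=1, red1=0, code 7: 0x%02x\n", out[0]);

    block[0] = 0x00;                     /* red0 <= red1: code 7 -> MAXRED (0xff) */
    block[1] = 0x01;
    rgtc1_decode_block(block, out);
    printf("red0=0, red1=1, code 7: 0x%02x\n", out[0]);
}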
Created attachment 113921 [details] [review]
Precision fix

The attached patch seems to fix the precision problem for me. It seems to make sense given the surrounding code, but I have no real clue what's special about swizzles for these formats.

Fixed by f710b99071fe4e3c2ee88cdcb6bb5c10298e014. Closing.