Created attachment 143274 [details]
gdb session + generated shaders

This dEQP test triggers an assertion, at least on nouveau, but I don't see what would make it driver-specific:

dEQP-GLES31.functional.ssbo.layout.random.all_per_block_buffers.18

This appears to happen because the interface array index isn't handled properly? This message is printed before the abort:

ir_variable has maximum access out of bounds (1 vs 0)

For some reason it thinks that the array length is 1 when it should be 2. I recall interface blocks being treated rather differently than most things, so this is perhaps related.
I believe we split interface block arrays (and maybe eliminate unused blocks?). It looks like the shader only accesses blockB[1] and not blockB[0], so I assume this has something to do with it. We're probably not updating something correctly when doing so.
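For reference, a minimal compute shader of this shape reproduces the pattern (a sketch based on the reduced case that shows up later in this bug; the version line and the stored value are assumptions, not copied from the generated shader):

#version 310 es
layout(local_size_x = 1) in;

/* Only blockB[1] is referenced; blockB[0] is never used. */
layout(packed, binding = 0) buffer BlockB
{
    float b[1];
} blockB[2];

void main (void)
{
    /* max_array_access for blockB is 1, but the packed-layout handling
     * shrinks the block array to its single active element, so a debug
     * build later aborts with "maximum access out of bounds (1 vs 0)". */
    blockB[1].b[0] = 1.0;
}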
Looks like this is happening in link_uniform_blocks. The type gets updated:

   if (b->array != NULL &&
       (b->type->without_array()->interface_packing ==
        GLSL_INTERFACE_PACKING_PACKED)) {
      b->type = resize_block_array(b->type, b->array);
      b->var->type = b->type;
   }

But there is no corresponding change to max_data_access. However, doing the naive thing here didn't help -- the shader fails. I didn't investigate why. I think create_buffer_blocks also needs to be involved somehow.
This test was broken in the dEQP suite recently for m32 i965 platforms by:

237916d03377a469e30dd36087738b069f83a19a
Author:     xinglong.zhu <xinglong.zhu@spreadtrum.com>
Commit:     Patrick Shi <patrick.shi@unisoc.com>
CommitDate: Sat Jan 5 12:46:03 2019 +0000

    gles31:es31fSSBOLayoutTests limit m_maxBlocks and m_maxBlockMembers for low ram device

    Memory requirement for cts CtsDeqpTestCases
    dEQP-GLES31.functional.ssbo.layout.random.all_shared_buffer#5
    reduced from 230MB to 36MB

    Issue: 121364689
    Test: run cts -m CtsDeqpTestCases -t dEQP-GLES31.functional.ssbo.layout.random.all_shared_buffer#5
    Change-Id: I2a66263f041265e8697e4b270132b9889486ce4f

I contacted the author and committer, but they had no way to verify on mesa. I'll forward to deqp-external-requests@google.com
(In reply to Mark Janes from comment #3) > This test was broken in the dEQP suite recently for m32 i965 platforms by: FWIW I don't think this is a test bug. The shaders used appear perfectly reasonable. It seems like the commit in question would have changed the generated shader a bit, which could be triggering the different behavior (seems likely, in fact). But ultimately this is a mesa issue, not a dEQP issue.
Hi,

I managed to reproduce this issue using the provided shader on KBL with the latest mesa from git. It happens with the 'BlockB' field: ir->data.max_array_access is 1, and ir->type->length is 1 too.

I am going to continue my investigation.

Thanks,
Andrii.
Created attachment 143288 [details]
this simple program helps me to reproduce this issue.

Just sharing my simple reproducer) Run it this way:

simple_reproducer shader.comp
(In reply to Ilia Mirkin from comment #2)
> Looks like this is happening in link_uniform_blocks. The type gets updated:
>
>    if (b->array != NULL &&
>        (b->type->without_array()->interface_packing ==
>         GLSL_INTERFACE_PACKING_PACKED)) {
>       b->type = resize_block_array(b->type, b->array);
>       b->var->type = b->type;
>    }
>
> But no change to the corresponding max_data_access. However doing the naive
> thing here didn't help -- the shader fails. I didn't investigate why. I
> think create_buffer_blocks also needs to be involved somehow.

Thanks a lot, your message was very helpful for finding a starting point for the investigation.

It looks like the following change helps:

---
 src/compiler/glsl/link_uniform_blocks.cpp | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/src/compiler/glsl/link_uniform_blocks.cpp b/src/compiler/glsl/link_uniform_blocks.cpp
index 0b890586298..5197870a6d9 100644
--- a/src/compiler/glsl/link_uniform_blocks.cpp
+++ b/src/compiler/glsl/link_uniform_blocks.cpp
@@ -438,8 +438,15 @@ link_uniform_blocks(void *mem_ctx,
       if (b->array != NULL &&
           (b->type->without_array()->interface_packing ==
            GLSL_INTERFACE_PACKING_PACKED)) {
+         const int new_max_array_access = (int)(b->array->num_array_elements - 1);
          b->type = resize_block_array(b->type, b->array);
          b->var->type = b->type;
+
+         assert(((int)b->array->array_elements[new_max_array_access] ==
+                 b->var->data.max_array_access ||
+                 b->var->data.max_array_access == -1) && "Is last index is proper");
+
+         b->var->data.max_array_access = new_max_array_access;
       }

       block_size.num_active_uniforms = 0;
--
2.17.1

Is this the same thing you meant by the "naive thing", or does this change actually help?
(In reply to asimiklit from comment #7) > (In reply to Ilia Mirkin from comment #2) > > Looks like this is happening in link_uniform_blocks. The type gets updated: > > > > if (b->array != NULL && > > (b->type->without_array()->interface_packing == > > GLSL_INTERFACE_PACKING_PACKED)) { > > b->type = resize_block_array(b->type, b->array); > > b->var->type = b->type; > > } > > > > But no change to the corresponding max_data_access. However doing the naive > > thing here didn't help -- the shader fails. I didn't investigate why. I > > think create_buffer_blocks also needs to be involved somehow. > > Thanks a lot your message was very helpful to find the start point of > investigation. > > Looks like this thing is helped: > > --- > src/compiler/glsl/link_uniform_blocks.cpp | 7 +++++++ > 1 file changed, 7 insertions(+) > > diff --git a/src/compiler/glsl/link_uniform_blocks.cpp > b/src/compiler/glsl/link_uniform_blocks.cpp > index 0b890586298..5197870a6d9 100644 > --- a/src/compiler/glsl/link_uniform_blocks.cpp > +++ b/src/compiler/glsl/link_uniform_blocks.cpp > @@ -438,8 +438,15 @@ link_uniform_blocks(void *mem_ctx, > if (b->array != NULL && > (b->type->without_array()->interface_packing == > GLSL_INTERFACE_PACKING_PACKED)) { > + const int new_max_array_access = > (int)(b->array->num_array_elements - 1); > b->type = resize_block_array(b->type, b->array); > b->var->type = b->type; > + > + assert(((int)b->array->array_elements[new_max_array_access] == > + b->var->data.max_array_access || > + b->var->data.max_array_access == -1) && "Is last index is > proper"); > + > + b->var->data.max_array_access = new_max_array_access; > } > > block_size.num_active_uniforms = 0; > -- > 2.17.1 > > Did you mean the same thing saying "naive thing" or this change helps? Well, as I recall the thing I tried was just saying max_array_access = 0 or -1 unconditionally. It could also have been due to some secondary issue with nouveau that this failed -- like I said, I didn't investigate. I'll have a closer look tonight. You're basically setting b->var->data.max_array_access = b->type->length - 1 right?
(In reply to Ilia Mirkin from comment #8)
> Well, as I recall the thing I tried was just saying max_array_access = 0 or
> -1 unconditionally. It could also have been due to some secondary issue with
> nouveau that this failed -- like I said, I didn't investigate. I'll have a
> closer look tonight.

Got it.

> You're basically setting
>
>    b->var->data.max_array_access = b->type->length - 1
>
> right?

Yes. I guess that the optimized array size - 1 is the new max_array_access. But in case the optimization algorithm has some unexpected behavior and leaves some unused elements in the array for some reason, I guess a search would be required:

   for (int i = 0; i < b->array->num_array_elements; i++) {
      if (b->var->data.max_array_access == b->array->array_elements[i]) {
         b->var->data.max_array_access = i;
         break;
      }
   }
Does your patch make the test pass on i965? If so, I'll have to figure out where in st/mesa or nouveau it dies.
The i965 tests are in progress, and the results will appear here (in an hour, I think):
https://mesa-ci.01.org/global_logic/builds/

It will be "Build # 70".

The commit under test:
https://gitlab.freedesktop.org/GL/mesa/commit/526a9fee0e40a450bc71092b27a3b89cbbf93fd7
(In reply to asimiklit from comment #11) > https://mesa-ci.01.org/global_logic/builds/ > > It will be "Build # 70" > With this build, i965 reports test failure instead of assertion (the same status as for m32). Before the dEQP change, this test passed. Since the status has been different for 32-bit binaries, I'm rebuilding with that configuration (see build # 71)
(In reply to Mark Janes from comment #12)
> (In reply to asimiklit from comment #11)
> > https://mesa-ci.01.org/global_logic/builds/
> >
> > It will be "Build # 70"
>
> With this build, i965 reports test failure instead of assertion (the same
> status as for m32). Before the dEQP change, this test passed.
>
> Since the status has been different for 32-bit binaries, I'm rebuilding with
> that configuration (see build # 71)

Thank you) But unfortunately the tests failed, so I guess I didn't consider something. I am going to continue my investigation)
(In reply to andrii simiklit from comment #13) > Thank you) But unfortunately the tests failed so guess I didn't consider > something. I am going to continue my investigation) Did the test actually pass for you? Or just not crash? When I made my "naive" change, it fixed the crash, but the test continued to fail.
(In reply to Ilia Mirkin from comment #14) > Did the test actually pass for you? Or just not crash? When I made my > "naive" change, it fixed the crash, but the test continued to fail. On i965, tests did not crash, but they continued to fail.
(In reply to Ilia Mirkin from comment #14)
> (In reply to andrii simiklit from comment #13)
> > Thank you) But unfortunately the tests failed so guess I didn't consider
> > something. I am going to continue my investigation)
>
> Did the test actually pass for you? Or just not crash? When I made my
> "naive" change, it fixed the crash, but the test continued to fail.

Sorry for the misunderstanding; I thought you were asking just about the crash. Sorry for the confusion. So I have the same results. I will try to understand why the test fails today.
(In reply to asimiklit from comment #6) > Created attachment 143288 [details] > this simple program helps me to reproduce this issue. > > just share my simple reproducer) > > Run it in this way: > > simple_reproducer shader.comp It seems like this could be made into a piglit test that could be compiled with glslparsertest. If you haven't already submitted such a test, please do. :)
(In reply to asimiklit from comment #7) > (In reply to Ilia Mirkin from comment #2) > > Looks like this is happening in link_uniform_blocks. The type gets updated: > > > > if (b->array != NULL && > > (b->type->without_array()->interface_packing == > > GLSL_INTERFACE_PACKING_PACKED)) { > > b->type = resize_block_array(b->type, b->array); > > b->var->type = b->type; > > } > > > > But no change to the corresponding max_data_access. However doing the naive > > thing here didn't help -- the shader fails. I didn't investigate why. I > > think create_buffer_blocks also needs to be involved somehow. > > Thanks a lot your message was very helpful to find the start point of > investigation. > > Looks like this thing is helped: > > --- > src/compiler/glsl/link_uniform_blocks.cpp | 7 +++++++ > 1 file changed, 7 insertions(+) > > diff --git a/src/compiler/glsl/link_uniform_blocks.cpp > b/src/compiler/glsl/link_uniform_blocks.cpp > index 0b890586298..5197870a6d9 100644 > --- a/src/compiler/glsl/link_uniform_blocks.cpp > +++ b/src/compiler/glsl/link_uniform_blocks.cpp > @@ -438,8 +438,15 @@ link_uniform_blocks(void *mem_ctx, > if (b->array != NULL && > (b->type->without_array()->interface_packing == > GLSL_INTERFACE_PACKING_PACKED)) { > + const int new_max_array_access = > (int)(b->array->num_array_elements - 1); > b->type = resize_block_array(b->type, b->array); > b->var->type = b->type; > + > + assert(((int)b->array->array_elements[new_max_array_access] == > + b->var->data.max_array_access || > + b->var->data.max_array_access == -1) && "Is last index is > proper"); > + > + b->var->data.max_array_access = new_max_array_access; > } > > block_size.num_active_uniforms = 0; I don't think this has any chance of being correct. max_array_access is set based on actual, known accesses to the array. Changing that arbitrarily is just lying to the rest of the compiler. Nothing good can come from that. Since the type of the array is now wrong, my guess is that the test fails (instead of crashes) because the part of the array that has the data the shader wants is optimized away. How does the mismatch between max_array_access and the type occur in the first place? My guess is that this is the source of the bug. They *must* agree at some point, or an earlier part of the compile would have generated an error (as required by the spec).
(In reply to Ian Romanick from comment #18)
> I don't think this has any chance of being correct. max_array_access is set
> based on actual, known accesses to the array. Changing that arbitrarily is
> just lying to the rest of the compiler. Nothing good can come from that.
> Since the type of the array is now wrong, my guess is that the test fails
> (instead of crashes) because the part of the array that has the data the
> shader wants is optimized away.

It is not changed arbitrarily. All elements of the new, resized array type are 100% used, so the index of the last element of that array is the max_array_access, no?

> How does the mismatch between max_array_access and the type occur in the
> first place? My guess is that this is the source of the bug. They *must*
> agree at some point, or an earlier part of the compile would have generated
> an error (as required by the spec).

Do you know where the code that should bring them into agreement is located? Unfortunately I couldn't find it. Shouldn't it be correct to bring them into agreement here?
https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/compiler/glsl/link_uniform_blocks.cpp#L441
It seems like the new, resized array type should contain only used/active instances, so it should be correct to assume that the last element of this array is 100% used and that max_array_access can point to this element. Am I missing something here?
About the deqp test: it fails in cases like this:

layout(local_size_x = 1) in;

layout(packed, binding = 0) buffer BlockB
{
    float b[1];
} blockB[2];

void main (void)
{
    blockB[1].b[0] = 77777.77777;
}

and it looks like the problem is here:
https://android.googlesource.com/platform/external/deqp/+/master/modules/gles31/functional/es31fSSBOLayoutCase.cpp#2249

The deqp test uses binding point 1 for BlockB[1], but when we use the 'packed' layout the BlockB[0] instance is removed because it is unused, and there is just one array instance, BlockB[1], with binding point 0 assigned instead of 1.

It looks like it should be something like this:

    if (layoutNdx >= 0)
    {
        const BlockLocation& blockLoc = blockLocations[layoutNdx];

        if (blockLoc.size > 0)
            gl.bindBufferRange(GL_SHADER_STORAGE_BUFFER, bindingPoint, buffers[blockLoc.index].buffer, blockLoc.offset, blockLoc.size);

        bindingPoint += 1;
    }
    else if ((block.getFlags() & LAYOUT_PACKED) == 0)
    {
        bindingPoint += 1;
    }

I read the spec and it looks like it is acceptable to remove an instance of the array if it isn't active. I will provide the patch (to test it on Intel CI if possible) and more details on Monday (because I left everything on my work machine).

Thanks,
Andrii.
Created attachment 143359 [details]
one more issue.

ERROR: ac_numPassed = 0, expected 1

void main (void)
{
    bool allOk = true;
    allOk = allOk && compare_mat4(blockB[1].b[0][0], mat4(-8.0, -2.0, -1.0, 3.0, 2.0, 4.0, 1.0, -7.0, 8.0, -7.0, -1.0, 4.0, -2.0, -2.0, -6.0, 1.0));
    allOk = allOk && compare_mat4(blockB[1].b[0][1], mat4(-6.0, -2.0, 2.0, 7.0, -7.0, 8.0, -2.0, -7.0, 2.0, 0.0, 3.0, 9.0, 8.0, 2.0, -6.0, -2.0));
    allOk = allOk && compare_int(f, 9);
    allOk = allOk && compare_ivec4(blockD.g[0], ivec4(8, 4, -3, 0));
    allOk = allOk && compare_ivec4(blockD.g[1], ivec4(-9, 7, -9, 8));
    if (allOk)
        atomicCounterIncrement(ac_numPassed);
    blockB[1].b[0][0] = mat4(-6.0, -7.0, -9.0, 8.0, 3.0, -8.0, 9.0, -1.0, 5.0, 3.0, -9.0, -4.0, -8.0, -6.0, 9.0, 0.0);
    blockB[1].b[0][1] = mat4(-4.0, 9.0, -2.0, 0.0, 2.0, -3.0, 5.0, 0.0, 1.0, -2.0, -3.0, 2.0, 2.0, -6.0, 2.0, -8.0);
    c = mat4x3(-5.0, -3.0, -3.0, 0.0, 8.0, -9.0, 2.0, -4.0, 1.0, 8.0, 7.0, -4.0);
    blockD.g[0] = ivec4(-4, -3, -8, 8);
    blockD.g[1] = ivec4(-8, 3, -1, 4);
}

Hmm, unfortunately there is some additional issue in mesa or deqp. Some variables were copied incorrectly by the test or by mesa, or we lost something... So I will investigate that too. (The shader code is in the attachment.)
So the previously mentioned issue was due to an issue in my fix. I fixed it, and locally I have the following results:

./deqp-gles31 -n dEQP-GLES31.functional.ssbo.*

  Test run totals:
    Passed:        2061/2061 (100.0%)
    Failed:        0/2061 (0.0%)
    Not supported: 0/2061 (0.0%)
    Warnings:      0/2061 (0.0%)

That is with the mesa assertion fix and with the deqp fix. Without the mesa fix it still crashes even if the deqp fix is applied:

Test case 'dEQP-GLES31.functional.ssbo.layout.random.all_per_block_buffers.18'..
ir_variable has maximum access out of bounds (1 vs 0)
Aborted (core dumped)

deqp: https://github.com/asimiklit/deqp/commit/91cff8150944213f6da533e281ee76d95ca00f21
mesa: https://gitlab.freedesktop.org/GL/mesa/commit/526a9fee0e40a450bc71092b27a3b89cbbf93fd7

So I am actually not 100% sure which binding point should be assigned to the BlockB[1] instance in the following case:

layout(local_size_x = 1) in;

layout(packed, binding = 0) buffer BlockB
{
    float b[1];
} blockB[2];

void main (void)
{
    blockB[1].b[0] = 77777.77777;
}

But according to the following comment it should be 0, because BlockB[0] was optimized away and only BlockB[1] is left:

   /* The ARB_shading_language_420pack spec says:
    *
    *     If the binding identifier is used with a uniform block instanced as
    *     an array then the first element of the array takes the specified
    *     block binding and each subsequent element takes the next consecutive
    *     uniform block binding point.
    */

https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/compiler/glsl/link_uniform_blocks.cpp#L279-310

Mark, is it possible to check both patches under Intel CI?

mesa:
  fork: https://gitlab.freedesktop.org/asimiklit/mesa/
  branch: max_array_access
deqp:
  fork: https://github.com/asimiklit/deqp/
  branch: array_instance_binding_fix
The combination of deqp/mesa patches passed CI. The only failure was piglit.spec.!opengl 1_1.tex-upside-down-miptree This test is new and was fixed on mesa master. The fix is probably not in the tested developer branch. You should contact deqp-external-requests@google.com to make your case that the deqp test is buggy.
(In reply to Mark Janes from comment #22) > The combination of deqp/mesa patches passed CI. The only failure was > > piglit.spec.!opengl 1_1.tex-upside-down-miptree > > This test is new and was fixed on mesa master. The fix is probably not in > the tested developer branch. > > You should contact deqp-external-requests@google.com to make your case that > the deqp test is buggy. Cool, thanks a lot. I will contact them tomorrow)
""" But according to next comment it should be 0 because of the BlockB[0] was optimized and there is the BlockB[1] only: /* The ARB_shading_language_420pack spec says: * * If the binding identifier is used with a uniform block instanced as * an array then the first element of the array takes the specified * block binding and each subsequent element takes the next consecutive * uniform block binding point. */ """ I don't think that's enough justification for the "mesa" way of doing it. Whether a block is eliminated or not is not specified by GLSL spec. Would be good to get some more experienced opinions on this. [Not mine...]
(In reply to asimiklit from comment #23)
> (In reply to Mark Janes from comment #22)
> > The combination of deqp/mesa patches passed CI. The only failure was
> >
> > piglit.spec.!opengl 1_1.tex-upside-down-miptree
> >
> > This test is new and was fixed on mesa master. The fix is probably not in
> > the tested developer branch.
> >
> > You should contact deqp-external-requests@google.com to make your case that
> > the deqp test is buggy.
>
> Cool, thanks a lot.
> I will contact them tomorrow)

I don't know whether it makes sense to contact deqp-external-requests@google.com with questions about the specs they used to write this test, given Ilia's last comment. I think I'd rather not contact them until we have a 100% explanation of how this should work.

I have been trying to find some explanation of how we should assign binding points for ssbo array instances in the packed-layout case, but unfortunately I haven't found any concrete answer yet. It is possible that the deqp fix will migrate to mesa, depending on the answer we find.
(In reply to Ian Romanick from comment #17) > (In reply to asimiklit from comment #6) > > Created attachment 143288 [details] > > this simple program helps me to reproduce this issue. > > > > just share my simple reproducer) > > > > Run it in this way: > > > > simple_reproducer shader.comp > > It seems like this could be made into a piglit test that could be compiled > with glslparsertest. If you haven't already submitted such a test, please > do. :) I work on it)
(In reply to Ilia Mirkin from comment #24) > """ > But according to next comment it should be 0 because of the BlockB[0] was > optimized > and there is the BlockB[1] only: > > /* The ARB_shading_language_420pack spec says: > * > * If the binding identifier is used with a uniform block instanced as > * an array then the first element of the array takes the specified > * block binding and each subsequent element takes the next consecutive > * uniform block binding point. > */ > """ > > I don't think that's enough justification for the "mesa" way of doing it. > Whether a block is eliminated or not is not specified by GLSL spec. Would be > good to get some more experienced opinions on this. [Not mine...] I have asked about it here: https://github.com/KhronosGroup/OpenGL-API/issues/46 Hope that somebody will clarify it)
Chris Forbes at google found a patch that explains at least some of the reason for this test regression:

https://android-review.googlesource.com/c/platform/external/deqp/+/901894

I just realized that this bug is a dupe of the one I opened when the test regressed. Since the discussion is here, I'll close that one.
*** Bug 109260 has been marked as a duplicate of this bug. ***
(In reply to Mark Janes from comment #28) > https://android-review.googlesource.com/c/platform/external/deqp/+/901894 Mesa still asserts with this fix. I also tested Andrii's mesa patch with the dEQP fix and the test fails. Since non-mesa drivers have found issues with the original dEQP change, I suspect there are still deeper problems with the test.
(In reply to Mark Janes from comment #30)
> (In reply to Mark Janes from comment #28)
> > https://android-review.googlesource.com/c/platform/external/deqp/+/901894
>
> Mesa still asserts with this fix. I also tested Andrii's mesa patch with
> the dEQP fix and the test fails.

Do you mean Chris's dEQP fix here, yes? It looks like that fix takes some GL limits into account and doesn't affect the expected binding points.

Also, the assertion is a separate issue; I created a piglit test for that:
https://patchwork.freedesktop.org/patch/286287/
But yes, without the assertion fix we can't even get to the test failure, because of the crash :-)

> Since non-mesa drivers have found issues with the original dEQP change, I
> suspect there are still deeper problems with the test.

Possibly they have the same binding-point mismatch after the glsl compiler's optimizations. They could try the deqp fix/hack that already helped us:
https://github.com/asimiklit/deqp/commit/91cff8150944213f6da533e281ee76d95ca00f21
If it helps them, we will know that it is a common issue, and it could expedite this:
https://github.com/KhronosGroup/OpenGL-API/issues/46
(In reply to andrii simiklit from comment #31)
> (In reply to Mark Janes from comment #30)
> > (In reply to Mark Janes from comment #28)
> > > https://android-review.googlesource.com/c/platform/external/deqp/+/901894
> >
> > Mesa still asserts with this fix. I also tested Andrii's mesa patch with
> > the dEQP fix and the test fails.
> Do you mean the Chris's dEQP fix here, yes?
> But looks like the mentioned Chris's dEQP fix considers some GL limitations
> and doesn't affect the expectations of binding points.
>
> Also the assertion is a separate issue, I created the piglit test for that:
> https://patchwork.freedesktop.org/patch/286287/
> But yes, we unable to fix the test fail without assertion because of crash
> :-)
>
> > Since non-mesa drivers have found issues with the original dEQP change, I
> > suspect there are still deeper problems with the test.
> Possible they have the same issue with binding points mismatch after
> optimizations by glsl compiler.
> They could try this fix/hack for deqp which is already helped us:
> https://github.com/asimiklit/deqp/commit/91cff8150944213f6da533e281ee76d95ca00f21
> If it helps them we will know that it is a common issue and it could
> expedite this:
> https://github.com/KhronosGroup/OpenGL-API/issues/46

So we have an answer from Piers Daniell:

"I believe all buffer binding points should be consumed, regardless whether
the array elements are used or not. This would be the behavior of least
surprise to the developer. I didn't see any language that would indicate
that unused elements should not be counted when assigning the element to
the buffer binding point."

According to that, it looks like the current mesa implementation is mistaken and the dEQP test is right. So my dEQP "fix" should migrate to mesa somehow.

Do Ilia and Ian agree with that? Is anyone against me investigating and trying to fix it?

It seems like the binding point increment is done somewhere here:
https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/compiler/glsl/link_uniform_blocks.cpp#L279-310
(In reply to andrii simiklit from comment #32) > (In reply to andrii simiklit from comment #31) > > (In reply to Mark Janes from comment #30) > > > (In reply to Mark Janes from comment #28) > > > > https://android-review.googlesource.com/c/platform/external/deqp/+/901894 > > > > > > Mesa still asserts with this fix. I also tested Andrii's mesa patch with > > > the dEQP fix and the test fails. > > Do you mean the Chris's dEQP fix here, yes? > > But looks like the mentioned Chris's dEQP fix considers some GL limitations > > and doesn't affect the expectations of binding points. > > > > Also the assertion is a separate issue, I created the piglit test for that: > > https://patchwork.freedesktop.org/patch/286287/ > > But yes, we unable to fix the test fail without assertion because of crash > > :-) > > > > > > > > Since non-mesa drivers have found issues with the original dEQP change, I > > > suspect there are still deeper problems with the test. > > Possible they have the same issue with binding points mismatch after > > optimizations by glsl compiler. > > They could try this fix/hack for deqp which is already helped us: > > https://github.com/asimiklit/deqp/commit/ > > 91cff8150944213f6da533e281ee76d95ca00f21 > > If it helps them we will know that it is a common issue and it could > > expedite this: > > https://github.com/KhronosGroup/OpenGL-API/issues/46 > > So we have an answer from Piers Daniell: > "I believe all buffer binding points should be consumed, regardless > whether > the array elements are used or not. This would be the behavior of least > surprise to the developer. I didn't see any language that would indicate > that unused elements should not be counted when assigning the element to > the buffer binding point." I think this basically agrees with my earlier sentiment that we shouldn't trim elements from the beginning of the array. It's generally ok (and in some cases expected) to trim elements from the end.
(In reply to asimiklit from comment #26) > (In reply to Ian Romanick from comment #17) > > (In reply to asimiklit from comment #6) > > > Created attachment 143288 [details] > > > this simple program helps me to reproduce this issue. > > > > > > just share my simple reproducer) > > > > > > Run it in this way: > > > > > > simple_reproducer shader.comp > > > > It seems like this could be made into a piglit test that could be compiled > > with glslparsertest. If you haven't already submitted such a test, please > > do. :) > > I work on it) The two piglit tests were suggested: https://patchwork.freedesktop.org/patch/286287/
Created attachment 143430 [details] [review] patch to disallow an elimination of the first unused ub/ssbo array elements (In reply to Ian Romanick from comment #33) > (In reply to andrii simiklit from comment #32) > > (In reply to andrii simiklit from comment #31) > > > (In reply to Mark Janes from comment #30) > > > > (In reply to Mark Janes from comment #28) > > > > > https://android-review.googlesource.com/c/platform/external/deqp/+/901894 > > > > > > > > Mesa still asserts with this fix. I also tested Andrii's mesa patch with > > > > the dEQP fix and the test fails. > > > Do you mean the Chris's dEQP fix here, yes? > > > But looks like the mentioned Chris's dEQP fix considers some GL limitations > > > and doesn't affect the expectations of binding points. > > > > > > Also the assertion is a separate issue, I created the piglit test for that: > > > https://patchwork.freedesktop.org/patch/286287/ > > > But yes, we unable to fix the test fail without assertion because of crash > > > :-) > > > > > > > > > > > Since non-mesa drivers have found issues with the original dEQP change, I > > > > suspect there are still deeper problems with the test. > > > Possible they have the same issue with binding points mismatch after > > > optimizations by glsl compiler. > > > They could try this fix/hack for deqp which is already helped us: > > > https://github.com/asimiklit/deqp/commit/ > > > 91cff8150944213f6da533e281ee76d95ca00f21 > > > If it helps them we will know that it is a common issue and it could > > > expedite this: > > > https://github.com/KhronosGroup/OpenGL-API/issues/46 > > > > So we have an answer from Piers Daniell: > > "I believe all buffer binding points should be consumed, regardless > > whether > > the array elements are used or not. This would be the behavior of least > > surprise to the developer. I didn't see any language that would indicate > > that unused elements should not be counted when assigning the element to > > the buffer binding point." > > I think this basically agrees with my earlier sentiment that we shouldn't > trim elements from the beginning of the array. It's generally ok (and in > some cases expected) to trim elements from the end. Are you talking about something like attached variant?
(In reply to Ian Romanick from comment #33)
> I think this basically agrees with my earlier sentiment that we shouldn't
> trim elements from the beginning of the array. It's generally ok (and in
> some cases expected) to trim elements from the end.

Ilia, could you please share your thoughts on this solution? Do you agree with Ian's suggestion to disallow elimination from the beginning of the array? I am asking in order to understand whether we have consensus on the direction of the fix.
fwiw, after rebasing deqp, I started hitting the same thing (freedreno/a6xx) with debug mesa builds vs dEQP-GLES31.functional.ssbo.layout.random.all_per_block_buffers.18 So I guess the same issue.. I'll read more thru the rest of this bz tomorrow since already getting late, but just thought I'd mention it (and add myself to cc on this bz)
(In reply to asimiklit from comment #36)
> Ilia could you please share your thoughts about this solution?
> Do you agree with Ian suggestion as to disallow the elimination from the
> beginning of the array?
> I am asking about it to understand have we consensus as to a direction of
> the fix or not.

Sounds good to me... I think you can treat layout(binding=n) foo[6] as meaning that n..n+5 are all taken and assigned to the relevant items. However, if you just have "foo[6]" without the explicit binding, you can (and should) eliminate any pre- and post- elements, i.e. elements before the first use and after the last use. However, if foo[1] and foo[5] are used, then they should still be 4 binding points away from one another. Hope that makes sense.
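To illustrate, here is a small sketch of the two cases described above (names and sizes are illustrative, scaled down from the foo[6] example to stay within minimum GL limits; it only sketches the behavior Ilia describes, not what any particular driver currently does):

#version 310 es
layout(local_size_x = 1) in;

/* Explicit binding: the whole array occupies bindings 2..5, regardless of
 * which elements main() actually touches. */
layout(packed, binding = 2) buffer Foo
{
    float v;
} foo[4];

void main (void)
{
    /* Only foo[1] and foo[3] are used. With the explicit binding they stay
     * at bindings 3 and 5. If the "binding = 2" qualifier were dropped, the
     * linker could eliminate foo[0] (before the first use), but foo[1] and
     * foo[3] should still end up 2 binding points apart. */
    foo[1].v = 1.0;
    foo[3].v = 1.0;
}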
I just noticed that this new test passes for 32 bit builds. Does that surprise anyone else? http://mesa-ci.01.org/mesa_master_daily/builds/4806/group/e60513df13ade427f01bb7334bd5174e
(In reply to Mark Janes from comment #39)
> I just noticed that this new test passes for 32 bit builds.
>
> Does that surprise anyone else?
>
> http://mesa-ci.01.org/mesa_master_daily/builds/4806/group/e60513df13ade427f01bb7334bd5174e

Thanks for pointing that out. I didn't see any platform-specific code there. I will post an update here as soon as I figure out the reason.
(In reply to andrii simiklit from comment #40)
> (In reply to Mark Janes from comment #39)
> > I just noticed that this new test passes for 32 bit builds.
> >
> > Does that surprise anyone else?
> >
> > http://mesa-ci.01.org/mesa_master_daily/builds/4806/group/e60513df13ade427f01bb7334bd5174e
>
> Thanks that pointed that out.
> I didn't see any platform specific code there.
> I will post an update here as far as I figure out the reason.

Unfortunately I can't reproduce this behavior locally. I built 32-bit debug mesa + 32-bit debug piglit and got the following results:

asimiklit@asimiklit-pc:~/projects/piglit32$ bin/glslparsertest tests/spec/arb_shader_storage_buffer_object/compiler/unused-array-element.comp pass 4.50
ir_variable has maximum access out of bounds (1 vs 0)
Aborted (core dumped)

glslparsertest and mesa definitely have 32-bit architecture:

asimiklit@asimiklit-pc:~/projects/piglit32$ file bin/glslparsertest
bin/glslparsertest: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=0243d96a1caf7c54189447516b066e5582ef86aa, with debug_info, not stripped

asimiklit@asimiklit-pc:~/projects/piglit32$ ldd bin/glslparsertest
........
libgbm.so.1 => /home/.../mesa_versions/1802_32_dbg/lib/libgbm.so.1 (0xf7a9d000)
libGL.so.1 => /home/.../mesa_versions/1802_32_dbg/lib/libGL.so.1 (0xf7a09000)
libEGL.so.1 => /home/.../mesa_versions/1802_32_dbg/lib/libEGL.so.1 (0xf7554000)
libglapi.so.0 => /home/.../mesa_versions/1802_32_dbg/lib/libglapi.so.0 (0xf7466000)
.......

asimiklit@asimiklit-pc:~/projects/piglit32$ file /home/.../mesa_versions/1802_32_dbg/lib/libGL.so.1.2.0
/home/.../mesa_versions/1802_32_dbg/lib/libGL.so.1.2.0: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=8708ca7f174b267838c5b1ffe71ca68d5307f62d, with debug_info, not stripped

PS: I know that 1802 in the folder name isn't a correct name for the latest mesa, I just forgot to update the prefix :-)

Mark, could you please clarify which mesa configuration is used with the 32-bit piglit, debug or release? I see the same behavior only with a release mesa configuration.
i965 CI runs debug builds by default, but for mesa it uses these meson configurations:

  -Dbuildtype=release -Db_ndebug=true

We must catch assertions in the CI, however debug builds are much slower to execute tests.
On my local system, I can easily reproduce the 32-bit pass, and the 64-bit crash. This is unusual, however the dEQP bug that started all of this also has a different 32 and 64 bit signature.

---------------------------------------
/tmp/build_root/m32/lib/piglit/bin/glslparsertest /tmp/build_root/m32/lib/piglit/tests/spec/arb_shader_storage_buffer_object/compiler/unused-array-element.comp pass 4.30

piglit: debug: Requested an OpenGL 4.3 Core Context, and received a matching 4.5 context
Successfully compiled compute shader /tmp/build_root/m32/lib/piglit/tests/spec/arb_shader_storage_buffer_object/compiler/unused-array-element.comp: (no compiler output)
PIGLIT: {"result": "pass" }
---------------------------------------
/tmp/build_root/m64/lib/piglit/bin/glslparsertest /tmp/build_root/m64/lib/piglit/tests/spec/arb_shader_storage_buffer_object/compiler/unused-array-element.comp pass 4.30

piglit: debug: Requested an OpenGL 4.3 Core Context, and received a matching 4.5 context
ir_variable has maximum access out of bounds (1 vs 0)
Aborted
---------------------------------------
(In reply to Mark Janes from comment #42)
> i965 CI runs debug builds by default, but for mesa it uses these meson
> configurations:
>
> -Dbuildtype=release -Db_ndebug=true
>
> We must catch assertions in the CI, however debug builds are much slower to
> execute tests.

My tests depend on the DEBUG macro:
https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/compiler/glsl/ir_validate.cpp#L1059-1073

So if the DEBUG macro is not defined, my tests never reach the critical code that makes mesa crash, and they should just pass.

Also, from 'meson.build':
https://gitlab.freedesktop.org/mesa/mesa/blob/master/meson.build#L742-745

"# Define DEBUG for debug builds only (debugoptimized is not included on this one)
if get_option('buildtype') == 'debug'
  pre_args += '-DDEBUG'
endif"

So as far as I understand, the DEBUG macro will be defined only for the '-Dbuildtype=debug' configuration, but as you mentioned above mesa is compiled with '-Dbuildtype=release', and in that case my tests should just pass without a crash.

Does CI use the same configuration for the x32 and x64 mesa builds, or does x64 maybe use the debug mesa config '-Dbuildtype=debug'?
PKG_CONFIG_PATH=/tmp/build_root/m64/lib/pkgconfig:/tmp/build_root/m64/lib64/pkgconfig:/tmp/build_root/m64/lib/x86_64-linux-gnu/pkgconfig:/usr/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib/pkgconfig CC=ccache gcc LD_LIBRARY_PATH=/tmp/build_root/m64/lib:/tmp/build_root/m64/lib64:/tmp/build_root/m64/lib/x86_64-linux-gnu:/usr/lib:/usr/lib64:/usr/lib/x86_64-linux-gnu CPPFLAGS=-DDEBUG CXX=ccache g++ meson /home/jenkins/workspace/Leeroy_2/repos/mesa/build_mesa_m64 --prefix /tmp/build_root/m64 --libdir lib -Ddri-drivers=i965,i915 -Dvulkan-drivers=intel -Dplatforms=x11,drm -Dtools=intel -Dgallium-drivers=iris -Dllvm=false PKG_CONFIG_PATH=/tmp/build_root/m32/lib/pkgconfig:/tmp/build_root/m32/lib/i386-linux-gnu/pkgconfig:/usr/lib/pkgconfig:/usr/lib/i386-linux-gnu/pkgconfig:/usr/lib/pkgconfig CC=ccache gcc LD_LIBRARY_PATH=/tmp/build_root/m32/lib:/tmp/build_root/m32/lib/i386-linux-gnu:/usr/lib:/usr/lib/i386-linux-gnu CPPFLAGS=-DDEBUG CXX=ccache g++ meson /home/jenkins/workspace/Leeroy/repos/mesa/build_mesa_m32 --prefix /tmp/build_root/m32 --libdir lib --cross-file /home/jenkins/workspace/Leeroy/mesa/../repos/mesa_ci/build_support/x86-linux-gcc.cross -Ddri-drivers=i965,i915 -Dvulkan-drivers=intel -Dplatforms=x11,drm -Dtools=intel -Dgallium-drivers=iris -Dllvm=false Both builds pass CPPFLAGS=-DDEBUG, but the cross file also specifies cpp_args, which may conflict. I'll check with Dylan, who created that automation.
I confirmed that the cross file is overriding our 32 bit flags that we pass through the environment.
Created attachment 143478 [details]
attachment-11105-0.html

It's not that the cross file is overriding, CFLAGS and friends are for native (build) targets, not cross (host) targets.

I'm on mobile, please excuse autocorrect fail.
(In reply to Mark Janes from comment #46)
> I confirmed that the cross file is overriding our 32 bit flags that we pass
> through the environment.

(In reply to Dylan Baker from comment #47)
> It's not that the cross file is overriding, CFLAGS and friends are for
> native (build) targets, not cross (host) targets.

To be 100% sure what is going on with the DEBUG and NDEBUG macros, I suggest building the following branch:
https://gitlab.freedesktop.org/asimiklit/mesa/tree/check_if_DEBUG_enabled

This branch contains the following commit:

 void validate_ir_tree(exec_list *instructions)
 {
+#ifdef DEBUG
+   hmm_DEBUG_MACRO_is_defined = 1;
+#else
+   hmm_DEBUG_MACRO_is_NOT_defined = 1;
+#endif
+
+#ifdef NDEBUG
+   hmm_NDEBUG_is_defined_so_asserts_are_disabled = 1;
+#else
+   hmm_NDEBUG_is_NOT_defined_so_asserts_are_enabled = 1;
+#endif

It should just produce compile errors, but depending on the error message we can tell which macros are defined :-)

Actually, I tried to run it here:
https://mesa-ci.01.org/global_logic/builds/89/group/63a9f0ea7bb98050796b649e85481845
But I have no way to see the compiler error message, because all that is displayed is the following, and only for m64:

stdout: Command '['make', '-j', '18']' returned non-zero exit status 2
stderr: NULL
After discussing https://github.com/KhronosGroup/OpenGL-API/issues/46 in the Khronos call today, I realized that my thinking about this bug may have been slightly incorrect. I believe that there are two separate issues here.

1. The issue with the array type and the maximum index.

2. The way the bindings are assigned to the elements that are used.

No matter what happens, if the shader says

layout(packed, binding = 3) buffer Block
{
    float b[1];
} block[4];

then the thing accessed in the shader as block[2].b[0] **must** be at binding point 5 (from the API). The user has explicitly set that to binding 5 by the declaration in the shader, so we absolutely have to respect that. Had the application not explicitly set the bindings, I think we would be free to assign whatever values we wanted. I believe that means the bindings set by the CTS in the problematic test are correct.

I think we also cannot reuse the intermediate bindings. The app may have expectations that those bindings are for elements of block, and it may blindly bind buffers to those bindings. If those bindings are used for other things, only problems can result. I think this means we effectively cannot eliminate array elements from buffer block arrays that have explicit bindings. 85% sure, anyway.
(In reply to Ian Romanick from comment #49)
> No matter what happens, if the shader says
>
> layout(packed, binding = 3) buffer Block
> {
>     float b[1];
> } block[4];
>
> then the thing accessed in the shader as block[2].b[0] **must** be at
> binding point 5 (from the API). The user has explicitly set that to binding
> 5 by the declaration in the shader, so we absolutely have to respect that.
> Had the application not explicitly set the bindings, I think we would be
> free to assign whatever values we wanted. I believe that means the bindings
> set by the CTS in the problematic test are correct.
>
> I think we also cannot reuse the intermediate bindings. The app may have
> expectations that those bindings are for elements of block, and it may
> blindly bind buffers to those bindings. If those bindings are used for
> other things, only problems can result. I think this means we effectively
> cannot eliminate array elements from buffer block arrays that have explicit
> bindings. 85% sure, anyway.

Ian, could you please confirm whether I understood everything correctly, and clarify the **implicit case** too?

1. Everything is clear when the user explicitly sets the binding point: in that case we should not optimize anything at all.

2. But it looks like we are not 100% sure what to do in the **implicit case**. Let's consider the options:

layout(packed) buffer Block
{
    float b[1];
} block[6];

(The shader uses block[1] and block[4].)

2.1. As it works now: eliminate any unused array elements and shrink the array so binding points are reused. (Note: will not pass the deqp test.)
+ we spend just 2 binding points, for Block[1] and Block[4]
- very unexpected behavior for the user, because:
    Block[0] is eliminated
    Block[1] binding point is 0
    Block[2] is eliminated
    Block[3] is eliminated
    Block[4] binding point is 1
    Block[5] is eliminated

2.2. Find the first and last used elements and eliminate everything before the first and after the last. This option was suggested by Ilia; Ian, do you agree with it? (Note: looks like it should pass the deqp test.)
+ we spend 4 of 6 binding points
- better than 2.1, but still not really the behavior the user expects:
    Block[0] is eliminated
    Block[1] binding point is 0
    Block[2] binding point is 1
    Block[3] binding point is 2
    Block[4] binding point is 3
    Block[5] is eliminated

2.3. Find just the last used element and eliminate everything after it.
+ we spend 5 of 6 binding points (Note: with a big array that has a big unused tail this should help)
+- fairly expected behavior for the user:
    Block[0] binding point is 0
    Block[1] binding point is 1
    Block[2] binding point is 2
    Block[3] binding point is 3
    Block[4] binding point is 4
    Block[5] is eliminated

2.4. Avoid elimination entirely, as in the explicit case.
+ exactly the behavior the user expects
- we spend 6 of 6 binding points
    Block[0] binding point is 0
    Block[1] binding point is 1
    Block[2] binding point is 2
    Block[3] binding point is 3
    Block[4] binding point is 4
    Block[5] binding point is 5

I created an MR for it:
https://gitlab.freedesktop.org/mesa/mesa/merge_requests/332
It has WIP status because we haven't reached an agreement yet, but I will be glad to see any advice there.
We have one more answer from Piers Daniell:

"We discussed this in the OpenGL/ES working group meeting and agreed that eliminating unused elements from the interface block array is not desirable. There is no statement in the spec that this takes place and it would be highly implementation dependent if it happens. If the application has an "interface" in the shader they need to match up with the API it would be quite confusing to have the binding point get compacted. So the answer is no, the binding points aren't affected by unused elements in the interface block array."

So it looks like, according to that, we should remove this optimization entirely. Whether the binding is explicit or implicit, we should not optimize the array. I guess we can remove this optimization for 'packed' blocks by removing these special cases and just handling them like the 'std140' and 'shared' layouts:
https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/compiler/glsl/link_uniform_block_active_visitor.cpp#L254-261
and
https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/compiler/glsl/link_uniform_block_active_visitor.cpp#L170-171
Created attachment 143535 [details] [review]
draft of a solution

Currently I am trying to find another way to fix this issue that avoids removing the optimization, because this optimization doesn't just minimize binding point usage, it also helps minimize the usage of some other resources. However, I'm not sure whether that is even possible.

In any case, all Intel CI tests pass:
https://mesa-ci.01.org/global_logic/builds/101/group/63a9f0ea7bb98050796b649e85481845
(In reply to asimiklit from comment #52) > Created attachment 143535 [details] [review] [review] > draft of a solution > > Currently I am trying to find another way to fix this issue > to avoid removal of an optimization because this optimization > isn't only for minimization of the binding points usage but > it also helps to minimize usage of some other stuff, > however I'm not sure if this is even possible. > > But all Intel CI tests are passed: > https://mesa-ci.01.org/global_logic/builds/101/group/ > 63a9f0ea7bb98050796b649e85481845 I have suggested this solution here: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/332
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/mesa/mesa/issues/823.