I'm finding that my Iris 6100 (MacBook Pro Retina 2015) is reporting support for GL_ARB_compute_shader from the 3.0 compatibility context, which I assume is a bug. Any compute shader I try to compile fails, and I can't see how compute shaders could happily coexist in a 3.0 + GLSL 1.30 environment anyway. I'm running Ubuntu 16.04.

This is the driver information from glxinfo:

    Extended renderer info (GLX_MESA_query_renderer):
        Vendor: Intel Open Source Technology Center (0x8086)
        Device: Mesa DRI Intel(R) Iris 6100 (Broadwell GT3) (0x162b)
        Version: 12.1.0
        Accelerated: yes
        Video memory: 3072MB
        Unified memory: yes
        Preferred profile: core (0x1)
        Max core profile version: 4.4
        Max compat profile version: 3.0
        Max GLES1 profile version: 1.1
        Max GLES[23] profile version: 3.1

You can see below that GL_ARB_compute_shader is included in the extensions string for the compatibility profile:

    OpenGL version string: 3.0 Mesa 12.1.0-devel
    OpenGL shading language version string: 1.30
    OpenGL context flags: (none)
    OpenGL extensions:
        GL_3DFX_texture_compression_FXT1, GL_AMD_conservative_depth,
        GL_AMD_draw_buffers_blend, GL_AMD_seamless_cubemap_per_texture,
        GL_AMD_shader_trinary_minmax, GL_ANGLE_texture_compression_dxt3,
        GL_ANGLE_texture_compression_dxt5, GL_APPLE_object_purgeable,
        GL_APPLE_packed_pixels, GL_APPLE_vertex_array_object,
        GL_ARB_ES2_compatibility, GL_ARB_ES3_compatibility,
        GL_ARB_arrays_of_arrays, GL_ARB_blend_func_extended,
        GL_ARB_buffer_storage, GL_ARB_clear_buffer_object,
        GL_ARB_clear_texture, GL_ARB_clip_control, GL_ARB_color_buffer_float,
        GL_ARB_compressed_texture_pixel_storage, GL_ARB_compute_shader,
        GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth,
        GL_ARB_copy_buffer, GL_ARB_copy_image, GL_ARB_cull_distance,

Hopefully this is the right place to report this, and this is sufficient information to go on. Please let me know if there's something else I can provide to help out.
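For anyone reproducing this, a minimal check looks roughly like the sketch below. It assumes a 3.0 compatibility context has already been created and made current (how the context is created is not shown), and relies on glGetString(GL_EXTENSIONS) still being legal in compatibility profiles:

    #include <stdio.h>
    #include <string.h>
    #include <GL/gl.h>

    /* Assumes a 3.0 compatibility context is current. */
    static void check_compute_extension(void)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        if (ext && strstr(ext, "GL_ARB_compute_shader"))
            printf("GL_ARB_compute_shader is advertised in this compat context\n");
        else
            printf("GL_ARB_compute_shader is not advertised\n");
    }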
Thanks for the report, Evan. I'm not sure why compute shaders are enabled in compatibility profiles. I'm CCing Nanley and Jordan, who might have a better idea.
What's so incompatible about compute shaders and GL 3.0? SSBOs are exposed in compat, as are images and atomic counters. Note that there was a bug for a while that prevented creating compute shaders in compat contexts, but IIRC it's long been fixed (specifically, commit 5e2d25894b962a rectified the situation).
(In reply to Ilia Mirkin from comment #2)
> What's so incompatible about compute shaders and GL 3.0? SSBOs are exposed
> in compat, as are images and atomic counters.
>
> Note that there was a bug for a while that prevented creating compute
> shaders in compat contexts, but IIRC it's long been fixed (specifically,
> commit 5e2d25894b962a rectified the situation).

Regarding i965, I think the limits check in brw_initialize_context_constants would need to be adjusted for compat profiles. The extension spec says it requires OpenGL 4.2.
Assuming we don't want to support this (I'm not sure why we would)...we just need to drop GLL from the ARB_compute_shader entry in extensions_table.h.
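For reference, the entry in question has roughly this shape (a sketch of the extensions_table.h format from memory, column order approximate; the columns after the driver flag encode which API flavors expose the extension):

    /* EXT(name, driver flag, legacy/compat GL, core GL, ES1, ES2+, year) */
    EXT(ARB_compute_shader, ARB_compute_shader, GLL, GLC, x, ES31, 2012)

Dropping the GLL column would stop the extension from being advertised in legacy/compatibility contexts, and it would do so for every driver using the table, not just i965.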
(In reply to Kenneth Graunke from comment #4)
> Assuming we don't want to support this (I'm not sure why we would)

A ton of compute demos (especially NVIDIA ones) use compat contexts (without using any of the more questionable compat features). This would preclude them from working. Why disallow it when it already Just Works (tm)?
For what it's worth, I started looking at the actual errors I was getting from my shader to see if it could in fact work. Enabling #extension GL_ARB_shader_image_load_store cleans up the first error and leaves me with this:

0:140(24): error: local_size_x qualifier requires GLSL 4.30 or GLSL ES 3.10 or ARB_compute_shader

Enabling #extension GL_ARB_compute_shader (which seems like a strange thing to have to do in a compute shader; the spec also makes no mention of this directive existing) actually gets the glCompileShader call to succeed. Upon linking the program, though, I get these errors:

error: Too many compute shader texture samplers
error: Too many compute shader image uniforms (1 > 0)

(this shader has one image and one sampler)

Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both return 0 on this context, while the ARB_compute_shader spec says the minimum values for these should be 16 and 8 respectively.

I don't think the issue here is whether a compatibility context is allowed to support ARB_compute_shader (many do on Windows), but whether this particular 3.0 context really intended to expose it or whether it was leaking out by accident, right?

Evan
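For reference, a minimal way to query those limits (a sketch; the GL_MAX_COMPUTE_* enums come from GL/glext.h or a loader header, and a current context advertising GL_ARB_compute_shader is assumed):

    #include <stdio.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   /* for the GL_MAX_COMPUTE_* enums */

    static void dump_compute_limits(void)
    {
        GLint tex_units = 0, image_uniforms = 0;
        glGetIntegerv(GL_MAX_COMPUTE_TEXTURE_IMAGE_UNITS, &tex_units);
        glGetIntegerv(GL_MAX_COMPUTE_IMAGE_UNIFORMS, &image_uniforms);
        /* ARB_compute_shader requires minimums of 16 and 8 respectively. */
        printf("MAX_COMPUTE_TEXTURE_IMAGE_UNITS = %d\n", tex_units);
        printf("MAX_COMPUTE_IMAGE_UNIFORMS      = %d\n", image_uniforms);
    }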
(In reply to Evan Odabashian from comment #6)
> 0:140(24): error: local_size_x qualifier requires GLSL 4.30 or GLSL ES 3.10
> or ARB_compute_shader
>
> Enabling #extension GL_ARB_compute_shader (which seems like a strange thing
> to have to do in a compute shader; the spec also makes no mention of this
> directive existing)

That sounds like a separate bug. In a core context, you'd also get it if you had specified #version 420. We should probably remove the ARB_compute_shader_enable logic and instead rely on the current stage type. Hopefully that's accessible in the "right" places.

> error: Too many compute shader texture samplers
> error: Too many compute shader image uniforms (1 > 0)
>
> (this shader has one image and one sampler)
>
> Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> return 0 on this context, while the ARB_compute_shader spec says the minimum
> values for these should be 16 and 8 respectively.

That sounds like a bug specific to i965 initialization, trivially fixable.
(In reply to Ilia Mirkin from comment #7)
> (In reply to Evan Odabashian from comment #6)
> > error: Too many compute shader texture samplers
> > error: Too many compute shader image uniforms (1 > 0)
> >
> > (this shader has one image and one sampler)
> >
> > Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> > return 0 on this context, while the ARB_compute_shader spec says the minimum
> > values for these should be 16 and 8 respectively.
>
> That sounds like a bug specific to i965 initialization, trivially fixable.

I think updating the stage_exists init in brw_initialize_context_constants might help this.

Right now it will always set stage_exists[MESA_SHADER_COMPUTE] to false when ctx->API == API_OPENGL_COMPAT.
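A heavily paraphrased sketch of the shape of that initialization (not the literal Mesa source; everything other than stage_exists, MESA_SHADER_COMPUTE, API_OPENGL_COMPAT, and brw_initialize_context_constants is illustrative):

    /* Paraphrased sketch of the stage_exists setup in
     * brw_initialize_context_constants; not the actual Mesa code.
     * Because the compute entry is gated on non-compat APIs, a
     * compatibility context ends up with all compute limits left at 0. */
    const bool stage_exists[MESA_SHADER_STAGES] = {
       [MESA_SHADER_VERTEX]   = true,
       [MESA_SHADER_FRAGMENT] = true,
       /* ...other stages elided... */
       [MESA_SHADER_COMPUTE]  = ctx->API != API_OPENGL_COMPAT &&
                                hw_supports_compute,   /* hypothetical flag */
    };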
(In reply to Jordan Justen from comment #8)
> (In reply to Ilia Mirkin from comment #7)
> > (In reply to Evan Odabashian from comment #6)
> > > error: Too many compute shader texture samplers
> > > error: Too many compute shader image uniforms (1 > 0)
> > >
> > > (this shader has one image and one sampler)
> > >
> > > Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> > > return 0 on this context, while the ARB_compute_shader spec says the minimum
> > > values for these should be 16 and 8 respectively.
> >
> > That sounds like a bug specific to i965 initialization, trivially fixable.
>
> I think updating the stage_exists init in
> brw_initialize_context_constants might help this.
>
> Right now it will always set stage_exists[MESA_SHADER_COMPUTE]
> to false when ctx->API == API_OPENGL_COMPAT.

I tried hacking the brw_initialize_context_constants code to set stage_exists[MESA_SHADER_COMPUTE] to true for the compatibility context and tested my application again. I have non-zero MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS now, but my shader fails to compile with:

error: extension `GL_ARB_image_load_store' unsupported in compute shader

And of course, if I disable that extension I'm back to the error I had originally:

error: unrecognized layout identifier `r32f'

caused by this line in the shader:

layout(r32f, binding = 1) uniform writeonly image2D outTexture;

Re: the proposal to remove 'GLL' for ARB_compute_shader in the extension table, would this remove it from compatibility contexts in all drivers (not just i965)? I'm not very familiar with Mesa (as is probably evident), but if there are other drivers with compat profiles that can support compute shaders, then I agree it seems heavy-handed to take it away from them. On Windows, for instance, the compatibility profiles on Intel, NVIDIA, and AMD can generally do compute shaders and everything else their core profiles can. Is there a way to just not advertise ARB_compute_shader specifically for the i965 compat profile?
(In reply to Evan Odabashian from comment #9) > error: extension `GL_ARB_image_load_store' unsupported in compute shader Presumably that's because the extension is actually called GL_ARB_shader_image_load_store.
(In reply to Matt Turner from comment #10)
> (In reply to Evan Odabashian from comment #9)
> > error: extension `GL_ARB_image_load_store' unsupported in compute shader
>
> Presumably that's because the extension is actually called
> GL_ARB_shader_image_load_store.

So it is :-) With that fixed I'm happy to report the shader compiles and appears to work correctly.
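For anyone landing on this bug later, here is a minimal sketch of the kind of shader setup that works once the limits are fixed and the extension name is spelled correctly. The #version directive, the 8x8 local size, and the shader body are assumptions rather than the original shader, and GL 4.3 entry points are assumed to be loaded (e.g. via GLEW or libepoxy):

    static const char *cs_source =
        "#version 130\n"
        "#extension GL_ARB_compute_shader : enable\n"
        "#extension GL_ARB_shader_image_load_store : enable\n"
        "layout(local_size_x = 8, local_size_y = 8) in;\n"
        "layout(r32f) uniform writeonly image2D outTexture;\n"
        "void main() {\n"
        "    imageStore(outTexture, ivec2(gl_GlobalInvocationID.xy), vec4(1.0));\n"
        "}\n";

    /* Compile the compute shader; returns 0 on failure. */
    static GLuint compile_compute_shader(void)
    {
        GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
        glShaderSource(cs, 1, &cs_source, NULL);
        glCompileShader(cs);

        GLint ok = GL_FALSE;
        glGetShaderiv(cs, GL_COMPILE_STATUS, &ok);
        return ok ? cs : 0;
    }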
(In reply to Evan Odabashian from comment #6)
> Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> return 0 on this context, while the ARB_compute_shader spec says the minimum
> values for these should be 16 and 8 respectively.

To confirm, there *is* still a bug here, and this is it. Right?
(In reply to Matt Turner from comment #12)
> (In reply to Evan Odabashian from comment #6)
> > Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> > return 0 on this context, while the ARB_compute_shader spec says the minimum
> > values for these should be 16 and 8 respectively.
>
> To confirm, there *is* still a bug here, and this is it. Right?

Yes, I would say there is still a bug, and from a user perspective the problem is those 0 counts for the max texture image units and image uniforms. I can't really speak to what else might need to happen under the hood, except that forcing 'stage_exists[MESA_SHADER_COMPUTE] = true' seemed to get everything working for me.
(In reply to Evan Odabashian from comment #13)
> (In reply to Matt Turner from comment #12)
> > (In reply to Evan Odabashian from comment #6)
> > > Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> > > return 0 on this context, while the ARB_compute_shader spec says the minimum
> > > values for these should be 16 and 8 respectively.
> >
> > To confirm, there *is* still a bug here, and this is it. Right?
>
> Yes, I would say there is still a bug, and from a user perspective the
> problem is those 0 counts for the max texture image units and image
> uniforms. I can't really speak to what else might need to happen under
> the hood, except that forcing 'stage_exists[MESA_SHADER_COMPUTE] = true'
> seemed to get everything working for me.

I'll send a patch to mesa-dev. Can you test it, Evan?
(In reply to Jordan Justen from comment #14)
> (In reply to Evan Odabashian from comment #13)
> > (In reply to Matt Turner from comment #12)
> > > (In reply to Evan Odabashian from comment #6)
> > > > Querying MAX_COMPUTE_TEXTURE_IMAGE_UNITS and MAX_COMPUTE_IMAGE_UNIFORMS both
> > > > return 0 on this context, while the ARB_compute_shader spec says the minimum
> > > > values for these should be 16 and 8 respectively.
> > >
> > > To confirm, there *is* still a bug here, and this is it. Right?
> >
> > Yes, I would say there is still a bug, and from a user perspective the
> > problem is those 0 counts for the max texture image units and image
> > uniforms. I can't really speak to what else might need to happen under
> > the hood, except that forcing 'stage_exists[MESA_SHADER_COMPUTE] = true'
> > seemed to get everything working for me.
>
> I'll send a patch to mesa-dev. Can you test it, Evan?

Just tried it out; it seems to work fine with my test case.
commit 7bcb94bc2fc45fde806ad3fd062bf2ce97342359
i965/compute: Allow ARB_compute_shader in compat profile