| Summary: | glMemoryBarrier is backwards | | |
|---|---|---|---|
| Product: | Mesa | Reporter: | Matias N. Goldberg <dark_sylinc> |
| Component: | Drivers/Gallium/radeonsi | Assignee: | Default DRI bug account <dri-devel> |
| Status: | RESOLVED NOTOURBUG | QA Contact: | Default DRI bug account <dri-devel> |
| Severity: | major | | |
| Priority: | medium | | |
| Version: | git | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| i915 platform: | | i915 features: | |
**Description**
Matias N. Goldberg, 2017-06-24 04:54:33 UTC
> You're misinterpreting the spec. glMemoryBarrier ensures that **writes from shaders** are visible to whatever consumer you indicate with the given flag bits. What you seem to be trying to do is ensure that **writes via the framebuffer** are visible in subsequent compute shader invocations. For that, you need to either: (1) bind a different framebuffer, or (probably more appropriate to what you're trying to do) (2) use glTextureBarrier.

That can't be right. You're suggesting that in order to synchronize writes to an FBO with a compute shader I am about to dispatch (and note the compute shader accesses this FBO via a regular texture fetch, not via imageLoad/imageStore), I need to either:

1. Switch to a dummy FBO, something that is mentioned nowhere: not in the manuals, online documentation, wikis, or tutorials; nor is it stated as a guarantee in the spec.
2. Use a function that was added in OpenGL 4.5, when compute shaders were added in 4.3.

I may be misinterpreting the spec, but these solutions don't make sense. Best case, Mesa should detect that I am trying to read from an FBO in a compute shader being dispatched and issue the barrier for me; worst case, functionality already present in 4.3 (like glMemoryBarrier) that doesn't look esoteric (unlike switching FBOs) should be enough to synchronize.

I'd rather be told that I am wrong in how to interpret glMemoryBarrier, and that I should be calling glMemoryBarrier( GL_FRAMEBUFFER_BARRIER_BIT ), because of this and that.

After careful thought, I realized I don't need to switch to a dummy FBO; just unbinding the current one is enough. That DOES make sense to me, since while the FBO is still bound and I'm executing a compute shader that reads from it, I could just as well be telling the driver to do both things at the same time.
I can see why unbinding the FBO would flush the necessary caches (per OpenGL's guarantees, I'm saying "I'm done writing to this FBO", and subsequent reads should now be guaranteed to be correct), so I'm going to take this as the answer. I don't want to spend much more time on this matter either. Thanks. (By the way, unbinding the FBO does indeed fix the issue.)
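The resolution above can be sketched in code. This is a minimal illustration, not from the bug report itself: the names `fbo`, `colorTex`, `computeProg`, and the dispatch dimensions are all hypothetical, and a real program would need a GL context and loader (GLEW, glad, etc.). The key point is step 2: unbinding the framebuffer before the dispatch, rather than relying on glMemoryBarrier, which only orders **writes from shaders**.

```c
#include <GL/gl.h>  /* in practice, a loader header such as glad or GLEW */

/* Hypothetical helper: render into an FBO, then sample its color
 * attachment from a compute shader via a plain texture fetch. */
void renderThenCompute(GLuint fbo, GLuint colorTex, GLuint computeProg)
{
    /* 1. Render to the FBO's color attachment (colorTex). */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw calls writing to colorTex ... */

    /* 2. Unbind the FBO. This signals "I'm done writing to this FBO",
     *    so the driver makes the framebuffer writes visible to
     *    subsequent texture fetches. glMemoryBarrier() would not help
     *    here: it orders shader writes (image stores, SSBOs, etc.),
     *    not writes performed via framebuffer attachments. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    /* 3. Dispatch the compute shader, which samples colorTex as an
     *    ordinary texture (not via imageLoad/imageStore). */
    glUseProgram(computeProg);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glDispatchCompute(16, 16, 1);
}
```

On GL 4.5+, glTextureBarrier (as suggested in the quoted reply) covers the related case of sampling a texture that is still attached to the currently bound framebuffer.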