Bug 95034 - vkResetCommandPool should not destroy the command buffers.
Summary: vkResetCommandPool should not destroy the command buffers.
Status: RESOLVED FIXED
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/Vulkan/intel (show other bugs)
Version: git
Hardware: x86-64 (AMD64) Linux (All)
Importance: medium major
Assignee: Jason Ekstrand
QA Contact: Intel 3D Bugs Mailing List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-04-20 15:02 UTC by Ronie Salgado
Modified: 2016-06-06 00:40 UTC (History)
0 users

See Also:
i915 platform:
i915 features:


Attachments
Bug workaround patch (1.59 KB, text/plain)
2016-04-20 15:02 UTC, Ronie Salgado
Details

Description Ronie Salgado 2016-04-20 15:02:01 UTC
Created attachment 123092 [details]
Bug workaround patch

According to the Vulkan spec and a FIXME comment present in anv_ResetCommandPool (anv_cmd_buffer.c:~1100), vkResetCommandPool should not destroy the command buffers referenced by the command.
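For context, the behavior the spec requires looks roughly like the following (a minimal sketch, not part of the original report; the `device`, `pool`, and `cmd` names are illustrative placeholders for handles created elsewhere):

```c
#include <vulkan/vulkan.h>

/* Assumes a valid VkDevice, a VkCommandPool created on it, and a
 * VkCommandBuffer previously allocated from that pool. */
void reset_and_rerecord(VkDevice device, VkCommandPool pool, VkCommandBuffer cmd)
{
    /* Resetting the pool returns all command buffers allocated from it
     * to the initial state; it must NOT free or destroy them. */
    vkResetCommandPool(device, pool, 0);

    /* The same handle therefore remains valid and can be re-recorded
     * without another vkAllocateCommandBuffers call. */
    VkCommandBufferBeginInfo beginInfo = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    };
    vkBeginCommandBuffer(cmd, &beginInfo);
    /* ... record commands ... */
    vkEndCommandBuffer(cmd);
}
```

An implementation that frees the buffers during vkResetCommandPool leaves `cmd` dangling, which is what the samples below trip over.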

The destruction of the command buffers breaks the samples of an open-source abstraction layer above Vulkan that I am writing. The abstraction layer, along with the samples and build instructions, is available at https://github.com/ronsaldo/abstract-gpu (current master, commit 04255a7a93812d94d1b8683b13f67558c7b76e54).

The attached patch contains a quick workaround for this problem that at least prevents the samples from crashing. The patch should be checked for potential memory leaks.
Comment 1 Jason Ekstrand 2016-05-24 05:16:49 UTC
Sorry for not getting back to this sooner; somehow it slipped past my radar.  You are correct that our implementation is invalid.  The patch you attached isn't a workaround; it is in fact the correct fix.  Please send it to the mesa-dev mailing list and I'll make sure it gets reviewed and committed.

