Bug 105226 - [anv] vulkaninfo > Haswell Vulkan support is incomplete
Summary: [anv] vulkaninfo > Haswell Vulkan support is incomplete
Status: RESOLVED INVALID
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/Vulkan/intel
Version: git
Hardware: x86-64 (AMD64) Linux (All)
Importance: medium normal
Assignee: Intel 3D Bugs Mailing List
QA Contact: Intel 3D Bugs Mailing List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2018-02-23 11:43 UTC by Darius Spitznagel
Modified: 2019-06-13 08:48 UTC
CC List: 3 users

See Also:
i915 platform:
i915 features:


Attachments
[PATCH] anv/cmd_buffer: Reuse gen8 Cmd{Set,Reset}Event on gen7 (7.78 KB, patch)
2019-06-08 12:54 UTC, Ville Syrjala

Description Darius Spitznagel 2018-02-23 11:43:32 UTC
Hello devs,

what exactly is missing for Haswell to show full vulkan support?
And if a feature/extension is missing is someone working on it?
Comment 1 Mark Janes 2018-02-23 18:20:09 UTC
My understanding is that HSW is lacking hardware features.
Comment 2 Jason Ekstrand 2018-02-23 18:56:14 UTC
(In reply to Mark Janes from comment #1)
> My understanding is that HSW is lacking hardware features.

That's not quite true.  I believe Haswell is capable of doing everything required for Vulkan but there are some corner cases it doesn't quite handle right.  In particular:

 1) Haswell doesn't support stencil texturing
 2) Haswell border colors are bizarrely format-dependent and some sort of shader work-around would be needed for them to work correctly.
 3) vkCmdWaitEvents is not implemented because Haswell lacks a memory-based MI_SEMAPHORE_WAIT.  This could probably be worked around by doing a busy-loop in the command streamer (see the API-level sketch after this comment).

On Ivybridge and Bay Trail, there are a couple other issues:

 4) No support for texture swizzle, so channel orderings just don't work.
 5) Integer border color basically doesn't work at all.  The only fix is very painful shader workarounds.

That's all I know of off-hand and it's probably all fixable.  However, we (Intel) have no commitment to Vulkan on Haswell hardware and earlier beyond trying to avoid breaking it further.  If some community member wants to take on an item on that TODO list, I'd be happy to review and help land the patches but they're all fairly annoying things to fix.  (I've already fixed all the easy stuff.)
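
For reference, item 3 is about the following API-level pattern. This is a minimal sketch, not anv code; it assumes an already-created VkDevice named device and a VkCommandBuffer named cmd in the recording state. The vkCmdWaitEvents call is the point where the driver needs a GPU-side memory poll, which gen8+ implements with MI_SEMAPHORE_WAIT:

#include <vulkan/vulkan.h>

static void record_event_wait(VkDevice device, VkCommandBuffer cmd)
{
    /* Create a synchronization event. */
    VkEvent event;
    const VkEventCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_EVENT_CREATE_INFO,
    };
    vkCreateEvent(device, &info, NULL, &event);

    /* Common case: the event is signaled by the GPU earlier in the stream. */
    vkCmdSetEvent(cmd, event, VK_PIPELINE_STAGE_TRANSFER_BIT);

    /* The wait recorded here is what requires the command streamer to poll
     * memory: MI_SEMAPHORE_WAIT on gen8+, a workaround on Haswell. */
    vkCmdWaitEvents(cmd, 1, &event,
                    VK_PIPELINE_STAGE_TRANSFER_BIT,
                    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                    0, NULL, 0, NULL, 0, NULL);

    /* The alternative is to signal the event from the host instead, with
     * vkSetEvent(device, event), after the command buffer is submitted;
     * that is the case comment 10 below calls rare in practice. */
}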
Comment 3 Richard Yao 2018-10-11 11:48:02 UTC
The DXVK pointed out that Haswell lacks depth bounds support. Which item would that be?
Comment 4 Richard Yao 2018-10-11 11:48:20 UTC
I meant DXVK developer. Sorry for the typo.
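
For context on comment 3: whether a driver exposes depth bounds testing shows up in vulkaninfo and in the depthBounds member of VkPhysicalDeviceFeatures. A minimal query sketch, assuming an already-enumerated VkPhysicalDevice named phys:

#include <stdio.h>
#include <vulkan/vulkan.h>

static void check_depth_bounds(VkPhysicalDevice phys)
{
    VkPhysicalDeviceFeatures features;
    vkGetPhysicalDeviceFeatures(phys, &features);

    /* depthBounds is the optional feature behind vkCmdSetDepthBounds and
     * VkPipelineDepthStencilStateCreateInfo::depthBoundsTestEnable. */
    printf("depthBounds: %s\n",
           features.depthBounds ? "supported" : "not supported");
}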
Comment 5 Ville Syrjala 2019-06-08 12:54:58 UTC
Created attachment 144483 [details] [review]
[PATCH] anv/cmd_buffer: Reuse gen8 Cmd{Set,Reset}Event on gen7

This might be enough to get DXVK going again on gen7. But I was too lazy to test it myself.
Comment 6 Lionel Landwerlin 2019-06-08 14:17:35 UTC
(In reply to Ville Syrjala from comment #5)
> Created attachment 144483 [details] [review]
> [PATCH] anv/cmd_buffer: Reuse gen8 Cmd{Set,Reset}Event on gen7
> 
> This might be enough to get DXVK going again on gen7. But I was too lazy to
> test it myself.

I think we need MI_SEMAPHORE_WAIT too in vkCmdWaitEvents.
Comment 7 Ville Syrjala 2019-06-08 14:27:10 UTC
(In reply to Lionel Landwerlin from comment #6)
> (In reply to Ville Syrjala from comment #5)
> > Created attachment 144483 [details] [review]
> > [PATCH] anv/cmd_buffer: Reuse gen8 Cmd{Set,Reset}Event on gen7
> > 
> > This might be enough to get DXVK going again on gen7. But I was too lazy
> > to test it myself.
> 
> I think we need MI_SEMAPHORE_WAIT too in vkCmdWaitEvents.

I didn't see that used in DXVK code. Also we don't have it on gen7, so we would need something a bit more crazy. Recursive batch with MI_COND_BB_END? Or would there be a better way to achieve this?
Comment 8 Lionel Landwerlin 2019-06-08 14:46:29 UTC
(In reply to Ville Syrjala from comment #7)
> (In reply to Lionel Landwerlin from comment #6)
> > (In reply to Ville Syrjala from comment #5)
> > > Created attachment 144483 [details] [review]
> > > [PATCH] anv/cmd_buffer: Reuse gen8 Cmd{Set,Reset}Event on gen7
> > > 
> > > This might be enough to get DXVK going again on gen7. But I was too lazy
> > > to test it myself.
> > 
> > I think we need MI_SEMAPHORE_WAIT too in vkCmdWaitEvents.
> 
> I didn't see that used in DXVK code. Also we don't have it on gen7, so we
> would need something a bit more crazy. Recursive batch with MI_COND_BB_END?
> Or would there be a better way to achieve this?

Yeah I guess this is the way to do it.
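
For illustration only, a very rough sketch of the MI_COND_BB_END idea from comments 7 and 8: a small secondary batch that spins in the command streamer until a memory compare against the event dword lets it terminate. This is untested and not part of the attached patch; anv_batch_emit(), GENX() and struct anv_cmd_buffer/anv_batch are existing anv constructs, while the function name and the pre-allocated loop_batch are hypothetical, it assumes gen7's genxml exposes MI_CONDITIONAL_BATCH_BUFFER_END, and all instruction fields are deliberately left out:

static void
gen7_emit_event_spin_loop(struct anv_cmd_buffer *cmd_buffer,
                          struct anv_batch *loop_batch)
{
   /* In the (pre-allocated, hypothetical) loop batch: conditionally end the
    * batch based on a memory compare against the event's dword; the compare
    * address/value fields are elided here. */
   anv_batch_emit(loop_batch, GENX(MI_CONDITIONAL_BATCH_BUFFER_END), cbe);

   /* If the compare did not end the batch, jump back to its start, i.e.
    * keep spinning in the command streamer (target address elided). */
   anv_batch_emit(loop_batch, GENX(MI_BATCH_BUFFER_START), bbs);

   /* From the main batch, chain into the loop batch; execution continues
    * past this point only once the loop batch has terminated. */
   anv_batch_emit(&cmd_buffer->batch, GENX(MI_BATCH_BUFFER_START), bbs);
}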
Comment 9 Panda_Wrist 2019-06-08 14:54:06 UTC
(In reply to Lionel Landwerlin from comment #8)
> (In reply to Ville Syrjala from comment #7)
> > (In reply to Lionel Landwerlin from comment #6)
> > > (In reply to Ville Syrjala from comment #5)
> > > > Created attachment 144483 [details] [review]
> > > > [PATCH] anv/cmd_buffer: Reuse gen8 Cmd{Set,Reset}Event on gen7
> > > > 
> > > > This might be enough to get DXVK going again on gen7. But I was too
> > > > lazy to test it myself.
> > > 
> > > I think we need MI_SEMAPHORE_WAIT too in vkCmdWaitEvents.
> > 
> > I didn't see that used in dxvk code. Also we don't have it on gen7 so would
> > need something a bit more crazy. Recursive batch with MI_COND_BB_END? Or
> > would there be a better way to achieve this?
> 
> Yeah I guess this is the way to do it.

I can confirm that the patch is enough to get DXVK going again on gen7. I tried it out on my Iris Pro 6200 and was able to run Grandia II with the latest DXVK commit.
Comment 10 Jason Ekstrand 2019-06-08 15:07:25 UTC
I think that patch is mostly fine.  No sane application actually relies on waiting on a vkSetEvent from the CPU.  My only comment is that if we're going to move most of the event handling to genX_cmd_buffer.c, we should move all of it and just put the MI_SEMAPHORE_WAIT behind a #if GEN_GEN >= 8 with the anv_finishme() in the #else.
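
A rough sketch of the structure suggested above, not the actual patch: in genX_cmd_buffer.c the gen8+ build keeps the MI_SEMAPHORE_WAIT poll while older gens fall back to anv_finishme(). GEN_GEN, GENX(), ANV_FROM_HANDLE() and anv_finishme() are existing anv constructs; the instruction fields and the trailing pipeline-barrier handling are deliberately elided:

void genX(CmdWaitEvents)(
    VkCommandBuffer                             commandBuffer,
    uint32_t                                    eventCount,
    const VkEvent*                              pEvents,
    VkPipelineStageFlags                        srcStageMask,
    VkPipelineStageFlags                        dstStageMask,
    uint32_t                                    memoryBarrierCount,
    const VkMemoryBarrier*                      pMemoryBarriers,
    uint32_t                                    bufferMemoryBarrierCount,
    const VkBufferMemoryBarrier*                pBufferMemoryBarriers,
    uint32_t                                    imageMemoryBarrierCount,
    const VkImageMemoryBarrier*                 pImageMemoryBarriers)
{
   ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, commandBuffer);

#if GEN_GEN >= 8
   for (uint32_t i = 0; i < eventCount; i++) {
      ANV_FROM_HANDLE(anv_event, event, pEvents[i]);

      /* Poll the event's dword until it reads VK_EVENT_SET; the address,
       * compare value and wait mode fields are elided in this sketch. */
      anv_batch_emit(&cmd_buffer->batch, GENX(MI_SEMAPHORE_WAIT), sem);
   }
#else
   /* Haswell and earlier lack a memory-based MI_SEMAPHORE_WAIT; see the
    * MI_COND_BB_END discussion above for a possible workaround. */
   anv_finishme("Implement events on gen7");
#endif

   /* The barrier arguments would then be handled the same way as in
    * vkCmdPipelineBarrier (elided). */
}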

