I run postprocessing on a different thread than decoding. On init I test which deinterlacing methods are available. To do this I need to create a VPP context, see here: https://github.com/FernetMenta/xbmc/blob/vpp/xbmc/cores/dvdplayer/DVDCodecs/Video/VAAPI.cpp#L2234 For testing another option I create a derived image: https://github.com/FernetMenta/xbmc/blob/vpp/xbmc/cores/dvdplayer/DVDCodecs/Video/VAAPI.cpp#L2659 The image reports YUY2 instead of NV12, which is wrong; bits_per_pixel is 12. When this occurs I get video corruption with all VPP deinterlacing methods; progressive is OK. If I skip testing deriveImage, all is OK (apart from madi/maci not working). If I test deriveImage prior to opening the VPP context for the first time, it works as well (no madi/maci). It looks like something corrupts the internal state of libva.
Can you check which format of surface you want to derive? Is it NV12 or YUY2?
I want NV12.
What I mean is that maybe the original surface format is YUY2 for some reason before you call vaDeriveImage.
The result does not make any sense. YUY2 does not have 12 bits per pixel. And a VPP context should not influence vaDeriveImage, right?
As I understand it, you found 2 bugs when calling vaDeriveImage: 1. an NV12 surface is derived into a YUY2 image after calling vaDeriveImage. 2. the bits_per_pixel of this YUY2 image is 12. Am I right? If yes, for 2, I agree with you: bits_per_pixel of YUY2 is 16, not 12. We will fix it. For 1, I think maybe the original surface format is YUY2 (not NV12) for some reason before you call vaDeriveImage.
> For 1, I think maybe the original surface format is YUY2 (not NV12) for some reason before you call vaDeriveImage. Absolutely not. As I already mentioned, if I do the vaDeriveImage before opening the VPP context, all works well. I think the VPP context corrupts the state of VAAPI somehow. There may be even more things going wrong after this happens. I also think VAAPI has an issue with threading. We run VPP on a separate thread, which may also be the reason why madi/maci fails on HSW.
Could you add libva trace log files?
Will do when back from holiday next week.
Created attachment 104684 [details]
libva trace log

Find the trace log attached.
Is there any news? Could you reproduce it?
I'm getting something that seems related. It's as if the underlying surface format changes on the fly in some situations. I'm using the following code to check the underlying surface format, and I've littered my code with calls to it:

    VAImage tmp;
    VAStatus status = vaDeriveImage(va_display, surface_id, &tmp);
    if (status == VA_STATUS_SUCCESS) {
        printf("0x%x: %s\n", (int)s->id, VA_STR_FOURCC(tmp.format.fourcc));
        vaDestroyImage(va_display, tmp.image_id);
    }

I've observed that surfaces, when they're created, have the format YV12. But after letting the decoder render to them (via ffmpeg's vaapi hwaccel support), the format seems to have changed to NV12. Further, if I create a VPP context, even newly allocated surfaces have the format NV12. I'm thinking of always creating a dummy VPP context as a workaround. (In my application I'd like to know the surface format in advance, to avoid the need for additional, most likely rare and error-prone reconfiguration at runtime.) All surfaces in my application are created like this:

    vaCreateSurfaces(va_display, VA_RT_FORMAT_YUV420, w, h, &id, 1, NULL, 0);

This is on Mesa 11, Broadwell, libva 0.38.0, kernel 4.2.1.
PS: this is the VPP code which "fixes" it. Doing this before surface allocation makes sure all the following surfaces are allocated as NV12:

    VAConfigID config;
    vaCreateConfig(va_display, VAProfileNone, VAEntrypointVideoProc, NULL, 0, &config);
    VAContextID context;
    vaCreateContext(va_display, config, 0, 0, 0, NULL, 0, &context);

Destroying the context before allocating surfaces restores the weird behavior (then the surfaces are allocated as YV12 again, and turn into NV12 on rendering).
This bug might be related: https://bugs.freedesktop.org/show_bug.cgi?id=92088 Patch was just sent to the libva ML.
Can you reproduce your issue with the latest driver? The patch mentioned in comment #13 has been merged into the master branch.
The patch has been merged and there has been no response for over 1 year.
Kodi has implemented the workaround as mentioned, and as a result the issue no longer shows. I will touch this code shortly when implementing HEVC 10-bit; then I'll test this again.
Still seeing this in libva 1.7.3 (0.39.4) with the i965 driver, kernel 4.8.13, Mesa 13.0.2. Is that new enough? It's possible that this is not a problem for me anymore, since ffmpeg's new vaapi code allocates surfaces with VASurfaceAttribPixelFormat set, which supposedly means "there isn't any opportunity for it to screw around". Sorry for the late reply.