Bug 79848 - vaDeriveImage returns wrong fourcc after having created a vpp context
Summary: vaDeriveImage returns wrong fourcc after having created a vpp context
Status: RESOLVED FIXED
Alias: None
Product: libva
Classification: Unclassified
Component: intel
Version: unspecified
Hardware: Other
OS: All
Priority: medium
Severity: normal
Assignee: PengChen
QA Contact: Sean V Kelley
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-06-09 18:28 UTC by Rainer Hochecker
Modified: 2017-01-12 09:44 UTC
CC List: 3 users

See Also:
i915 platform:
i915 features:


Attachments
libva trace log (678.38 KB, text/plain)
2014-08-15 15:41 UTC, Rainer Hochecker

Description Rainer Hochecker 2014-06-09 18:28:49 UTC
I run post-processing on a different thread than decoding. On init I test which deinterlacing methods are available; to do this I need to create a vpp context. See here: https://github.com/FernetMenta/xbmc/blob/vpp/xbmc/cores/dvdplayer/DVDCodecs/Video/VAAPI.cpp#L2234

To test another option I create a derived image:
https://github.com/FernetMenta/xbmc/blob/vpp/xbmc/cores/dvdplayer/DVDCodecs/Video/VAAPI.cpp#L2659

The image reports YUY2 instead of NV12, which is wrong; bits_per_pixel is 12. When this happens I get video corruption with all vpp deinterlacing methods. Progressive content is OK.

If I skip the vaDeriveImage test, everything is OK (apart from madi/maci not working).

If I run the vaDeriveImage test before opening the vpp context for the first time, it works as well (still no madi/maci).

Looks like something corrupts the internal state of libva.
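
For reference, a minimal sketch of the sequence that triggers this, assuming an already-initialized VADisplay (names and error handling are illustrative, not the actual Kodi code):

    #include <stdio.h>
    #include <va/va.h>

    /* Sketch only: va_display is assumed to be an initialized VADisplay and
       width/height the coded picture size. Error checks are omitted. */
    static void check_derived_fourcc(VADisplay va_display, int width, int height)
    {
        VASurfaceID surface;
        vaCreateSurfaces(va_display, VA_RT_FORMAT_YUV420, width, height,
                         &surface, 1, NULL, 0);    /* expected to be NV12 */

        /* Creating the VPP config/context first is what triggers the problem. */
        VAConfigID vpp_config;
        VAContextID vpp_context;
        vaCreateConfig(va_display, VAProfileNone, VAEntrypointVideoProc,
                       NULL, 0, &vpp_config);
        vaCreateContext(va_display, vpp_config, width, height, 0, NULL, 0,
                        &vpp_context);

        /* Derive an image from the decode surface and inspect its format.
           Reported result: fourcc says YUY2 while bits_per_pixel is 12. */
        VAImage image;
        if (vaDeriveImage(va_display, surface, &image) == VA_STATUS_SUCCESS) {
            printf("fourcc 0x%08x, bpp %u\n",
                   (unsigned)image.format.fourcc,
                   (unsigned)image.format.bits_per_pixel);
            vaDestroyImage(va_display, image.image_id);
        }

        vaDestroyContext(va_display, vpp_context);
        vaDestroyConfig(va_display, vpp_config);
        vaDestroySurfaces(va_display, &surface, 1);
    }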
Comment 1 Lizhong 2014-07-30 02:06:56 UTC
Can you check what format the surface you want to derive has? Is it NV12 or YUY2?
Comment 2 Rainer Hochecker 2014-07-30 05:49:18 UTC
I want an NV12 surface.
Comment 3 Lizhong 2014-07-30 05:53:05 UTC
What I mean is that maybe the original surface format is already YUY2 for some reason before you call vaDeriveImage.
Comment 4 Rainer Hochecker 2014-07-30 05:59:46 UTC
The result does not make any sense: YUY2 does not have 12 bits per pixel (YUY2 is packed 4:2:2 at 16 bpp, while 12 bpp matches NV12's planar 4:2:0 layout). A VPP context should not influence vaDeriveImage, right?
Comment 5 Lizhong 2014-07-30 06:49:09 UTC
As I understand it, you found two bugs when calling vaDeriveImage:
1. An NV12 surface is derived as a YUY2 image after calling vaDeriveImage.
2. The bits_per_pixel of this YUY2 image is 12.

Am I right?
If yes: for 2, I agree with you, the bits_per_pixel of YUY2 is 16 instead of 12; we will fix it.
For 1, I think maybe the original surface format is YUY2 (not NV12) for some reason before you call vaDeriveImage.
Comment 6 Rainer Hochecker 2014-07-30 07:36:00 UTC
> For 1, I think maybe the original surface format is YUY2 (not NV12) for some reason before you call vaDeriveImage.

Absolutely not. As I already mentioned, if I do the vaDeriveImage before opening the VPP context, everything works well. I think the VPP context corrupts the state of VAAPI somehow. There may be even more things going wrong after this happens.

Also, I think VAAPI has an issue with threading. We run VPP on a separate thread, which may also be the reason why madi/maci fails on HSW.
Comment 7 Lizhong 2014-08-07 02:51:15 UTC
Could you attach libva trace log files?
Comment 8 Rainer Hochecker 2014-08-07 07:50:20 UTC
Will do when I'm back from holiday next week.
Comment 9 Rainer Hochecker 2014-08-15 15:41:22 UTC
Created attachment 104684
libva trace log

Find attached the trace log.
Comment 10 Peter Frühberger 2015-09-25 21:40:56 UTC
Is there any news?

Could you reproduce it?
Comment 11 nfxjfg 2015-09-26 12:03:46 UTC
I'm getting something that seems related. It's like the underlying surface format changes on the fly in some situations. I'm using the following code to check the underlying surface format, and I've littered my code with calls to it:

    /* Check the underlying format of surface_id; VA_STR_FOURCC is a helper in
       the calling code that turns the fourcc into a printable string. */
    VAImage tmp;
    VAStatus status = vaDeriveImage(va_display, surface_id, &tmp);
    if (status == VA_STATUS_SUCCESS) {
        printf("0x%x: %s\n", (int)surface_id, VA_STR_FOURCC(tmp.format.fourcc));
        vaDestroyImage(va_display, tmp.image_id);
    }

I've observed that surfaces, when they're created, have the format YV12. But after letting the decoder render to them (via ffmpeg's vaapi hwaccel support), the format seems to have changed to NV12.

Further, if I create a VPP context, even newly allocated surfaces have the format NV12.

I'm thinking of always creating a dummy VPP context as a workaround. (In my application I'd like to know the surface format in advance, to avoid additional, most likely rare and error-prone reconfiguration at runtime.)

All surfaces in my application are created like this:

    vaCreateSurfaces(va_display, VA_RT_FORMAT_YUV420, w, h, &id, 1, NULL, 0);

This is on Mesa 11, Broadwell, libva 0.38.0, kernel 4.2.1.
Comment 12 nfxjfg 2015-09-26 12:12:23 UTC
PS: this is the VPP code which "fixes" it. Doing this before surface allocation makes sure all the following surfaces are allocated as NV12:

    /* Dummy video-processing pipeline: a VAProfileNone/VAEntrypointVideoProc
       config plus a 0x0 context with no render targets, created only for its
       side effect on subsequent surface allocation. */
    VAConfigID config;
    vaCreateConfig(va_display, VAProfileNone, VAEntrypointVideoProc, NULL, 0, &config);
    VAContextID context;
    vaCreateContext(va_display, config, 0, 0, 0, NULL, 0, &context);

Destroying the context before allocating surfaces restores the weird behavior (then the surfaces are allocated as YV12 again, and turn into NV12 on rendering).
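
A note on lifetime, sketched under the same assumptions as the snippet above: since destroying the context restores the old behavior, the dummy config/context have to stay alive while surfaces are being allocated, and would only be torn down at display shutdown:

    /* Keep the dummy VPP config/context alive for as long as surfaces are
       being allocated; destroy them only when shutting the display down. */
    vaDestroyContext(va_display, context);
    vaDestroyConfig(va_display, config);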
Comment 13 Peter Frühberger 2015-09-28 09:10:52 UTC
This bug might be related: https://bugs.freedesktop.org/show_bug.cgi?id=92088

Patch was just sent to the libva ML.
Comment 14 haihao 2015-11-23 16:46:45 UTC
Can you reproduce your issue with the latest driver? The patch mentioned in comment #13 has been merged into the master branch.
Comment 15 haihao 2016-12-07 03:08:35 UTC
The patch has been merged and there has been no response for over a year.
Comment 16 Rainer Hochecker 2016-12-10 10:08:56 UTC
Kodi has implemented the workaround as mentioned, and as a result the issue no longer shows up. I will touch this code shortly when implementing HEVC 10-bit; then I'll test this again.
Comment 17 nfxjfg 2017-01-12 09:44:30 UTC
Still seeing this in libva 1.7.3 (0.39.4) with the i965 driver, kernel 4.8.13, Mesa 13.0.2. Is that new enough?

It's possible that this is not a problem for me anymore, since ffmpeg's new vaapi code allocates surfaces with VASurfaceAttribPixelFormat set, which supposedly means "there isn't any opportunity for it to screw around".

Sorry for the late reply.
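
For reference, a minimal sketch of allocating a surface with VASurfaceAttribPixelFormat pinned to NV12, assuming an already-initialized va_display (illustrative only, not the actual ffmpeg code):

    /* Pin the pixel format at allocation time so the driver has no room to
       pick a different layout later. Assumes va_display is initialized and
       w/h are the desired surface dimensions. */
    VASurfaceAttrib attrib = {
        .type          = VASurfaceAttribPixelFormat,
        .flags         = VA_SURFACE_ATTRIB_SETTABLE,
        .value.type    = VAGenericValueTypeInteger,
        .value.value.i = VA_FOURCC_NV12,
    };
    VASurfaceID id;
    vaCreateSurfaces(va_display, VA_RT_FORMAT_YUV420, w, h, &id, 1, &attrib, 1);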

