Environment:
--------------
Platform: SNB IVB
Libva: staging commit 8865fd148a4444b26c35197b3176f87fb2e85393
Intel-driver: staging commit c0ef9d99df37ae45589fecb898727be495e50304
Gst-vaapi: qa commit df1eb6e01e1bf76e23c8eb7d219f15804b109d2d

Bug Info:
--------------
Decoding the files below causes a coredump:
MR3_TANDBERG_B.264
MR4_TANDBERG_C.264
MR5_TANDBERG_C.264

Reproduce steps:
----------------
1. xinit&
2. gst-launch-0.10 filesrc location=/home/AVC_conformance/Base_Ext_Main_profile/MR3_TANDBERG_B.264 ! h264parse ! vaapidecode ! vaapisink sync=false
The issue can also be reproduced on HSW.
Created attachment 75667 [details] 0001-h264-fix-reference-list-count-less-than-num_ref
Created attachment 75668 [details] 0002-h264-remove-reference-if-picture-frame_num-is-same
Created attachment 75669 [details] 0003-h264-support-process-for-gaps-in-frame_num
The 0001...patch fixes the coredump: RefPicListX_count can be less than num_ref_listX when there are not enough reference pictures in the DPB. With 0001 applied, the coredump is gone, but the last few frames display with mosaic blocks because gstreamer-vaapi does not process gaps_in_frame_num_value_allowed_flag. The 0002...patch and 0003...patch handle the 'non-existing' pictures and slide the reference window.
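For reference, the gap handling described above (H.264 spec clause 8.2.5.2: when gaps_in_frame_num_value_allowed_flag is set and frame_num jumps, the decoder infers "non-existing" reference frames and applies sliding-window marking) can be sketched roughly as follows. This is a minimal illustration, not the actual patch code; the function names and the DPB represented as a plain list of frame_num values are assumptions for the example.

```python
MAX_FRAME_NUM = 1 << 8  # assumed: log2_max_frame_num_minus4 + 4 == 8

def missing_frame_nums(prev_ref_frame_num, frame_num, max_frame_num=MAX_FRAME_NUM):
    """frame_num values of the 'non-existing' frames to infer when a gap is
    detected, in decoding order (modulo max_frame_num wraparound)."""
    gaps = []
    n = (prev_ref_frame_num + 1) % max_frame_num
    while n != frame_num:
        gaps.append(n)
        n = (n + 1) % max_frame_num
    return gaps

def fill_gaps(dpb, prev_ref_frame_num, frame_num, num_ref_frames):
    """Insert dummy reference entries and apply sliding-window marking so the
    DPB stays consistent; `dpb` is modeled here as a list of frame_num values."""
    for n in missing_frame_nums(prev_ref_frame_num, frame_num):
        if len(dpb) >= num_ref_frames:  # sliding window: evict the oldest ref
            dpb.pop(0)
        dpb.append(n)                   # 'non-existing' reference frame
    return dpb
```

With this in place, the reference window keeps sliding across the gap, so later RefPicListX construction sees the expected number of entries instead of running short.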
gst-vaapi: qa branch 41940904625f7a3dbdf88d4d1f9fe2df4c434286. The issue can't be reproduced anymore; closing it.
It can be reproduced again, both on PRC's tree and Gwenole's tree.
(In reply to comment #7) > It can be reproduced again, both on PRC's tree and Gwenole's tree MR3_TANDBERG_B.264 is a baseline-profile stream. Since baseline profile support was removed from gst-vaapi, this file can't be tested now. I'll post a workaround here to test baseline.
Created attachment 91403 [details] [review] h264-dec-workaround-to-enable-baseline-profile A workaround to re-enable the H.264 baseline profile for further testing. At minimum, this bug needs baseline enabled to test gaps in frame_num.