Bug 110519 - [CI][SHARDS] igt@gem_exec_schedule@semaphore-resolve - fail - Failed assertion: !"GPU hung"
Summary: [CI][SHARDS] igt@gem_exec_schedule@semaphore-resolve - fail - Failed assertion: !"GPU hung"
Status: RESOLVED FIXED
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: XOrg git
Hardware: Other
OS: All
Importance: high normal
Assignee: Intel GFX Bugs mailing list
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard: ReadyForDev
Keywords:
Depends on:
Blocks:
 
Reported: 2019-04-26 06:18 UTC by Martin Peres
Modified: 2019-07-02 11:43 UTC
CC List: 1 user

See Also:
i915 platform: ALL
i915 features: GEM/Other


Attachments

Description Martin Peres 2019-04-26 06:18:18 UTC
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-apl3/igt@gem_exec_schedule@semaphore-resolve.html

Starting subtest: semaphore-resolve
(gem_exec_schedule:1877) igt_aux-CRITICAL: Test assertion failure function sig_abort, file ../lib/igt_aux.c:501:
(gem_exec_schedule:1877) igt_aux-CRITICAL: Failed assertion: !"GPU hung"
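
For context, `!"GPU hung"` negates a non-NULL string literal, so the expression is always false and the assertion always fires with the string visible in the failure output; it is how the hang detector fails the test when the kernel reports a GPU hang. A minimal sketch of the pattern, using plain assert() in place of IGT's igt_assert() and a signal chosen purely for illustration (not the actual lib/igt_aux.c code):

```c
#include <assert.h>
#include <signal.h>

/* Sketch of the sig_abort idiom: a hang detector (simulated here by
 * raise()) delivers a signal, and the handler fails the test with a
 * readable message.  A string literal is never NULL, so !"GPU hung"
 * evaluates to 0 and the assertion always trips, printing "GPU hung"
 * in the failure output. */
static void sig_abort(int sig)
{
    (void)sig;
    assert(!"GPU hung");
}

int main(void)
{
    signal(SIGIO, sig_abort); /* SIGIO chosen for illustration only */
    raise(SIGIO);             /* stand-in for the hang notification */
    return 0;                 /* not reached */
}
```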
Comment 1 CI Bug Log 2019-04-26 06:18:55 UTC
The CI Bug Log issue associated with this bug has been updated.

### New filters associated

* All machines: igt@gem_exec_schedule@semaphore-resolve - fail - Failed assertion: !"GPU hung"
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2903/shard-apl1/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2903/shard-glk2/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2903/shard-iclb6/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2903/shard-kbl1/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-apl3/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-glk8/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-iclb7/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-kbl6/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4966/shard-skl1/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2919/shard-apl5/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2919/shard-glk1/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2919/shard-iclb8/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2919/shard-kbl1/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6000/shard-apl8/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6000/shard-glk2/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6000/shard-iclb3/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6000/shard-kbl3/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6000/shard-skl7/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2920/shard-apl8/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2920/shard-glk9/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2920/shard-iclb4/igt@gem_exec_schedule@semaphore-resolve.html
  - https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_2920/shard-kbl3/igt@gem_exec_schedule@semaphore-resolve.html
Comment 3 Francesco Balestrieri 2019-06-03 05:35:26 UTC
The patch that fixes this is getting closer, only 9 to go...
Comment 4 Jani Saarinen 2019-06-03 15:03:53 UTC
What 9? CI runs or what? 
This is also seen at https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6179/re-icl-u/igt@gem_exec_schedule@semaphore-resolve.html, but good if a fix is already in the pipeline.
Comment 5 Chris Wilson 2019-06-03 15:05:28 UTC
Patches to go.
Comment 6 Martin Peres 2019-06-17 05:32:41 UTC
(In reply to Chris Wilson from comment #5)
> Patches to go.

Can this explain why we are now failing with dmesg-fail? See https://bugs.freedesktop.org/show_bug.cgi?id=110927
Comment 7 Chris Wilson 2019-06-20 18:11:11 UTC
commit 8ee36e048c98d4015804a23f884be2576f778a93 (HEAD -> drm-intel-next-queued, drm-intel/for-linux-next, drm-intel/drm-intel-next-queued)
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Thu Jun 20 15:20:52 2019 +0100

    drm/i915/execlists: Minimalistic timeslicing
    
    If we have multiple contexts of equal priority pending execution,
    activate a timer to demote the currently executing context in favour of
    the next in the queue when that timeslice expires. This enforces
    fairness between contexts (so long as they allow preemption -- forced
    preemption, in the future, will kick those who do not obey) and allows
    us to avoid userspace blocking forward progress with e.g. unbounded
    MI_SEMAPHORE_WAIT.
    
    For the starting point here, we use the jiffie as our timeslice so that
    we should be reasonably efficient wrt frequent CPU wakeups.
    
    Testcase: igt/gem_exec_scheduler/semaphore-resolve
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
    Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-2-chris@chris-wilson.co.uk
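
The fix described in the commit message reduces to: arm a per-engine timer for one timeslice and, when it expires, preempt the running context if another context of equal priority is queued behind it, so that e.g. an unbounded MI_SEMAPHORE_WAIT can no longer monopolise the engine until the hang detector fires. A rough sketch of that decision, with hypothetical structures rather than the actual execlists code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the scheduler state; the real driver keeps
 * this per engine inside the execlists submission machinery. */
struct context {
    int priority;
    bool preemptible;       /* contexts may opt out of preemption for now */
    struct context *next;   /* next runnable context in the queue */
};

struct engine {
    struct context *active;          /* context currently on the hardware */
    struct context *queue;           /* pending contexts, highest priority first */
    unsigned long timeslice_jiffies; /* one jiffy in the initial version */
};

/* Called when the timeslice timer expires: demote the active context if an
 * equal-priority context is waiting and the active one allows preemption,
 * so queued work makes forward progress instead of timing out as a hang. */
static bool should_yield_timeslice(const struct engine *engine)
{
    const struct context *active = engine->active;
    const struct context *next = engine->queue;

    if (!active || !next)
        return false;

    if (!active->preemptible)
        return false; /* forced preemption is future work per the commit */

    return next->priority >= active->priority;
}

int main(void)
{
    struct context waiter  = { .priority = 0, .preemptible = true, .next = NULL };
    struct context spinner = { .priority = 0, .preemptible = true, .next = NULL };
    struct engine engine = { .active = &spinner, .queue = &waiter,
                             .timeslice_jiffies = 1 };

    return should_yield_timeslice(&engine) ? 0 : 1; /* yields: exit 0 */
}
```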
Comment 8 Martin Peres 2019-07-02 11:42:52 UTC
(In reply to Chris Wilson from comment #7)
> commit 8ee36e048c98d4015804a23f884be2576f778a93
>     drm/i915/execlists: Minimalistic timeslicing
> [...]

Thanks, it definitely did the trick!
Comment 9 CI Bug Log 2019-07-02 11:43:02 UTC
The CI Bug Log issue associated with this bug has been archived.

New failures matching the above filters will no longer be associated with this bug.

