Bug 70334 - [bisected] igt/module_reload causes system hang on queued branch
Summary: [bisected] igt/module_reload causes system hang on queued branch
Status: CLOSED FIXED
Alias: None
Product: DRI
Classification: Unclassified
Component: DRM/Intel
Version: unspecified
Hardware: All Linux (All)
Importance: high critical
Assignee: Chris Wilson
QA Contact: Intel GFX Bugs mailing list
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-10-10 07:40 UTC by fangxun
Modified: 2017-09-04 10:16 UTC
CC List: 1 user

See Also:
i915 platform:
i915 features:


Attachments
netconsole log (19.09 KB, text/plain)
2013-10-10 07:40 UTC, fangxun
netconsole with debug build (20.00 KB, text/plain)
2013-10-11 08:50 UTC, fangxun

Description fangxun 2013-10-10 07:40:26 UTC
Created attachment 87373 [details]
netconsole log

System Environment:
--------------------------
Platform: Ivybridge
Kernel: (drm-intel-nightly) c5ea23067eb3f0bc86dea95b8f544c7ef8dfea54

Bug detailed description:
-----------------------------
The system hangs when unloading the i915 module. It happens on all platforms on the kernel queued branch. Bisection shows b29c19b645287f7062e17d70fa4e9781a01a5d88 is the first bad commit.
commit b29c19b645287f7062e17d70fa4e9781a01a5d88
Author:     Chris Wilson <chris@chris-wilson.co.uk>
AuthorDate: Wed Sep 25 17:34:56 2013 +0100
Commit:     Daniel Vetter <daniel.vetter@ffwll.ch>
CommitDate: Thu Oct 3 20:01:31 2013 +0200
    drm/i915: Boost RPS frequency for CPU stalls

    If we encounter a situation where the CPU blocks waiting for results
    from the GPU, give the GPU a kick to boost its frequency.

    This should work to reduce user interface stalls and to quickly promote
    mesa to high frequencies - but the cost is that our requested frequency
    stalls high (as we do not idle for long enough before rc6 to start
    reducing frequencies, nor are we aggressive at down clocking an
    underused GPU). However, this should be mitigated by rc6 itself powering
    off the GPU when idle, and that energy use is dependent upon the workload
    of the GPU in addition to its frequency (e.g. the math or sampler
    functions only consume power when used). Still, this is likely to
    adversely affect light workloads.

    In particular, this nearly eliminates the highly noticeable wake-up lag
    in animations from idle. For example, expose or workspace transitions.
    (However, given the situation where we fail to downclock, our requested
    frequency is almost always the maximum, except for Baytrail where we
    manually downclock upon idling. This often masks the latency of
    upclocking after being idle, so animations are typically smooth - at the
    cost of increased power consumption.)

    Stéphane raised the concern that this will punish good applications and
    reward bad applications - but due to the nature of how mesa performs its
    client throttling, I believe all mesa applications will be roughly
    equally affected. To address this concern, and to prevent applications
    like compositors from permanently boosting the RPS state, we ratelimit the
    frequency of the wait-boosts each client receives.

    Unfortunately, this technique is ineffective with Ironlake - which also
    has dynamic render power states and suffers just as dramatically. For
    Ironlake, the thermal/power headroom is shared with the CPU through
    Intelligent Power Sharing and the intel-ips module. This leaves us with
    no GPU boost frequencies available when coming out of idle, and due to
    hardware limitations we cannot change the arbitration between the CPU and
    GPU quickly enough to be effective.

    v2: Limit each client to receiving a single boost for each active period.
        Tested by QA to only marginally increase power, and to demonstrably
        increase throughput in games. No latency measurements yet.

    v3: Cater for front-buffer rendering with manual throttling.

    v4: Tidy up.

    v5: Sadly the compositor needs frequent boosts as it may never idle, but
    due to its picking mechanism (using ReadPixels) may require frequent
    waits. Those waits, along with the waits for the vrefresh swap, conspire
    to keep the GPU at low frequencies despite the interactive latency. To
    overcome this we ditch the one-boost-per-active-period and just ratelimit
    the number of wait-boosts each client can receive.
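
The v5 ratelimiting amounts to a per-client timestamp check. As a minimal
sketch, assuming hypothetical names (client_rps_state, client_can_boost,
BOOST_INTERVAL) rather than the actual i915 implementation:

    #include <linux/jiffies.h>

    struct client_rps_state {
            unsigned long last_boost;   /* jiffies of the last granted boost */
    };

    /* Hypothetical interval; the real tunable lives in the i915 RPS code. */
    #define BOOST_INTERVAL  msecs_to_jiffies(20)

    static bool client_can_boost(struct client_rps_state *rps)
    {
            /*
             * Deny the boost if this client was boosted too recently, so a
             * frequently-waiting client (e.g. a compositor) cannot pin the
             * GPU at its maximum frequency.
             */
            if (time_before(jiffies, rps->last_boost + BOOST_INTERVAL))
                    return false;
            rps->last_boost = jiffies;
            return true;
    }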

Reproduce steps:
----------------------------
1. ./module_reload
Comment 1 Daniel Vetter 2013-10-10 09:18:36 UTC
Let's hope Chris' ISP doesn't eat this update ;-)

I reviewed the patch yesterday and didn't spot where things go south ...
Comment 2 Chris Wilson 2013-10-10 11:08:30 UTC
I haven't spotted a left-over timer/delayed-task that could cause this (the pc8 code is HSW ULX, not IVB).
Comment 3 Chris Wilson 2013-10-10 11:09:03 UTC
(In reply to comment #1)
> Let's hope Chris' isp doesn't eat this update ;-)

It did, so this reassignment never happened. Please try again later ;-)
Comment 4 Chris Wilson 2013-10-10 11:13:16 UTC
My hypothesis is that we have a stray fd (or missing drm_release()) during shutdown, and so a leftover active file_priv->idle_work.
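
Under that hypothesis, the failure mode looks roughly like the following
minimal sketch (hypothetical names, not the actual i915 code): if the file is
never released, the synchronous cancel never runs and the delayed work
outlives the module.

    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct file_private {
            struct delayed_work idle_work;
    };

    static void file_close(struct file_private *file_priv)
    {
            /*
             * Without this synchronous cancel, idle_work may still be
             * queued when the file (and later the module) is torn down,
             * and the timer then fires into freed memory or unloaded code.
             */
            cancel_delayed_work_sync(&file_priv->idle_work);
            kfree(file_priv);
    }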
Comment 5 Daniel Vetter 2013-10-10 12:54:23 UTC
Fang, can you please try to reproduce this on a debug kernel build? The object debug system should tell us more about where exactly we're releasing a timer which is still live ...
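
For reference, "object debug" is the kernel's debugobjects infrastructure. A
plausible .config fragment for such a debug build (an assumption; the exact
configuration used is not recorded in this report):

    CONFIG_DEBUG_OBJECTS=y
    CONFIG_DEBUG_OBJECTS_TIMERS=y
    CONFIG_DEBUG_OBJECTS_WORK=y
    CONFIG_PROVE_LOCKING=y

The first three make debugobjects warn when a live timer or work item is
freed; the last enables lockdep.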
Comment 6 Chris Wilson 2013-10-10 12:55:12 UTC
Actually, pc8.enable_work is active on all platforms. So please test https://patchwork.kernel.org/patch/3014981/
Comment 7 Chris Wilson 2013-10-10 13:35:52 UTC
Do you mind confirming the platform you see the issue on?
Comment 8 fangxun 2013-10-11 08:50:06 UTC
Created attachment 87432 [details]
netconsole with debug build
Comment 9 Daniel Vetter 2013-10-11 09:36:44 UTC
Oh, there's a lockdep splat:

[  107.008146] ======================================================
[  107.008146] [ INFO: possible circular locking dependency detected ]
[  107.008147] 3.12.0-rc4_nightlytop_55d077_debug_20131011_+ #733 Not tainted
[  107.008148] -------------------------------------------------------

This is a separate bug from the hang (which follows later on). Can you please file a new bug report for this issue? The above dmesg snippet is the important thing (i.e. the "[ INFO: possible circular locking dependency detected ]" when running module_reload). Also please check whether this is a regression (I don't remember seeing this on my own debug kernel builds) and if so please try to bisect where this has been introduced.
Comment 10 fangxun 2013-10-11 10:00:16 UTC
(In reply to comment #6)
> Actually pc8.enable_work is active on all platforms. So please test
> https://patchwork.kernel.org/patch/3014981/

I don't see "intel_modeset_quiesce", but I do see "cancel_delayed_work_sync(&dev_priv->pc8.enable_work)" in "hsw_disable_package_c8()" on the latest drm-intel-nightly branch. The issue still happens with that.

(In reply to comment #7)
> Do mind confirming the platform you see the issue on?
The issue happens on Pineview, GM45, Ironlake, Sandybridge, Ivybridge and Haswell.
Comment 11 Chris Wilson 2013-10-14 09:29:52 UTC
Should be fixed by https://patchwork.kernel.org/patch/3021681/
Comment 12 fangxun 2013-10-16 09:06:09 UTC
(In reply to comment #9)
> Oh, there's a lockdep splat:
>
> [  107.008146] ======================================================
> [  107.008146] [ INFO: possible circular locking dependency detected ]
> [  107.008147] 3.12.0-rc4_nightlytop_55d077_debug_20131011_+ #733 Not tainted
> [  107.008148] -------------------------------------------------------
>
> This is a separate bug from the hang (which follows later on). Can you
> please file a new bug report for this issue? The above dmesg snippet is the
> important thing (i.e. the "[ INFO: possible circular locking dependency
> detected ]" when running module_reload). Also please check whether this is
> a regression (I don't remember seeing this on my own debug kernel builds)
> and if so please try to bisect where this has been introduced.

Bug 70523 was filed to track this issue.
Comment 13 fangxun 2013-10-16 09:09:41 UTC
(In reply to comment #11)
> Should be fixed by https://patchwork.kernel.org/patch/3021681/

Verified it fixed the bug.
Comment 14 Chris Wilson 2013-10-16 11:55:04 UTC
commit 8d6a7791a8b72dea5773271f23a1460e1eee27dd
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Oct 16 11:50:01 2013 +0100

    drm/i915: Disable all GEM timers and work on unload
    
    We have two once very similar functions, i915_gpu_idle() and
    i915_gem_idle(). The former is used as the lower level operation to
    flush work on the GPU, whereas the latter is the high level interface to
    flush the GEM bookkeeping in addition to flushing the GPU. As such
    i915_gem_idle() also clears out the request and activity lists and
    cancels the delayed work. This is what we need for unloading the driver,
    unfortunately we called i915_gpu_idle() instead.
    
    In the process, make sure that when cancelling the delayed work and
    timer, which is synchronous, that we do not hold any locks to prevent a
    deadlock if the work item is already waiting upon the mutex. This
    requires us to push the mutex down from the caller to i915_gem_idle().
    
    v2: s/i915_gem_idle/i915_gem_suspend/
    
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70334
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Tested-by: xunx.fang@intel.com
    Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
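
The locking caveat in the commit message can be illustrated with a minimal
sketch (hypothetical names, not the actual i915 functions): a synchronous
cancel must not run under a mutex that the work handler itself takes.

    #include <linux/mutex.h>
    #include <linux/workqueue.h>

    struct dev_private {
            struct mutex lock;              /* also taken by the work handler */
            struct delayed_work retire_work;
    };

    static void gem_suspend(struct dev_private *priv)
    {
            mutex_lock(&priv->lock);
            /* ... flush the GPU, clear request and activity lists ... */
            mutex_unlock(&priv->lock);

            /*
             * Cancel only after dropping the lock: the retire_work handler
             * takes priv->lock, so calling cancel_delayed_work_sync() with
             * the lock held would wait on a handler that is waiting on us.
             */
            cancel_delayed_work_sync(&priv->retire_work);
    }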
Comment 15 lu hua 2013-11-11 07:07:52 UTC
Verified. Fixed.
Comment 16 Jari Tahvanainen 2017-09-04 10:16:44 UTC
Closing old verified+fixed.

