Created attachment 87373 [details]
netconsole log

System Environment:
--------------------------
Platform: Ivybridge
Kernel: (drm-intel-nightly) c5ea23067eb3f0bc86dea95b8f544c7ef8dfea54

Bug detailed description:
-----------------------------
The system hangs when unloading the i915 module. It happens on all platforms on the kernel queued branch. Bisect shows b29c19b645287f7062e17d70fa4e9781a01a5d88 is the first bad commit.

commit b29c19b645287f7062e17d70fa4e9781a01a5d88
Author:     Chris Wilson <chris@chris-wilson.co.uk>
AuthorDate: Wed Sep 25 17:34:56 2013 +0100
Commit:     Daniel Vetter <daniel.vetter@ffwll.ch>
CommitDate: Thu Oct 3 20:01:31 2013 +0200

    drm/i915: Boost RPS frequency for CPU stalls

    If we encounter a situation where the CPU blocks waiting for results
    from the GPU, give the GPU a kick to boost its frequency. This should
    work to reduce user interface stalls and to quickly promote mesa to
    high frequencies - but the cost is that our requested frequency stalls
    high (as we do not idle for long enough before rc6 to start reducing
    frequencies, nor are we aggressive at down clocking an underused GPU).
    However, this should be mitigated by rc6 itself powering off the GPU
    when idle, and that energy use is dependent upon the workload of the
    GPU in addition to its frequency (e.g. the math or sampler functions
    only consume power when used). Still, this is likely to adversely
    affect light workloads.

    In particular, this nearly eliminates the highly noticeable wake-up
    lag in animations from idle. For example, expose or workspace
    transitions. (However, given the situation where we fail to downclock,
    our requested frequency is almost always the maximum, except for
    Baytrail where we manually downclock upon idling. This often masks the
    latency of upclocking after being idle, so animations are typically
    smooth - at the cost of increased power consumption.)

    Stéphane raised the concern that this will punish good applications
    and reward bad applications - but due to the nature of how mesa
    performs its client throttling, I believe all mesa applications will
    be roughly equally affected. To address this concern, and to prevent
    applications like compositors from permanently boosting the RPS state,
    we ratelimit the frequency of the wait-boosts each client receives.

    Unfortunately, this technique is ineffective with Ironlake - which
    also has dynamic render power states and suffers just as dramatically.
    For Ironlake, the thermal/power headroom is shared with the CPU
    through Intelligent Power Sharing and the intel-ips module. This
    leaves us with no GPU boost frequencies available when coming out of
    idle, and due to hardware limitations we cannot change the arbitration
    between the CPU and GPU quickly enough to be effective.

    v2: Limit each client to receiving a single boost for each active
    period. Tested by QA to only marginally increase power, and to
    demonstrably increase throughput in games. No latency measurements
    yet.

    v3: Cater for front-buffer rendering with manual throttling.

    v4: Tidy up.

    v5: Sadly the compositor needs frequent boosts as it may never idle,
    but due to its picking mechanism (using ReadPixels) may require
    frequent waits. Those waits, along with the waits for the vrefresh
    swap, conspire to keep the GPU at low frequencies despite the
    interactive latency. To overcome this we ditch the
    one-boost-per-active-period and just ratelimit the number of
    wait-boosts each client can receive.

Reproduce steps:
----------------------------
1. ./module_reload
Let's hope Chris' ISP doesn't eat this update ;-) I reviewed the patch yesterday and didn't spot where things go south ...
I haven't spotted a left-over timer/delayed-task that could cause this (the pc8 is hsw ulx not ivb).
(In reply to comment #1)
> Let's hope Chris' ISP doesn't eat this update ;-)

It did, so this reassignment never happened. Please try again later ;-)
My hypothesis is that we have a stray fd (or missing drm_release()) during shutdown, and so a leftover active file_priv->idle_work.
Fang, can you please try to reproduce this on a debug kernel build? The object debug system should tell us more about where exactly we're releasing a timer which is still live ...
Actually pc8.enable_work is active on all platforms. So please test https://patchwork.kernel.org/patch/3014981/
Do you mind confirming which platform you see the issue on?
Created attachment 87432 [details]
netconsole with debug build
Oh, there's a lockdep splat:

[  107.008146] ======================================================
[  107.008146] [ INFO: possible circular locking dependency detected ]
[  107.008147] 3.12.0-rc4_nightlytop_55d077_debug_20131011_+ #733 Not tainted
[  107.008148] -------------------------------------------------------

This is a separate bug from the hang (which follows later on). Can you please file a new bug report for this issue? The above dmesg snippet is the important thing (i.e. the "[ INFO: possible circular locking dependency detected ]" when running module_reload). Also please check whether this is a regression (I don't remember seeing this on my own debug kernel builds) and if so please try to bisect where it was introduced.
(In reply to comment #6)
> Actually pc8.enable_work is active on all platforms. So please test
> https://patchwork.kernel.org/patch/3014981/

I don't see "intel_modeset_quiesce", but I do see "cancel_delayed_work_sync(&dev_priv->pc8.enable_work)" in "hsw_disable_package_c8()" on the latest drm-intel-nightly branch. The issue still happens with that.

(In reply to comment #7)
> Do you mind confirming which platform you see the issue on?

The issue happens on Pineview, GM45, Ironlake, Sandybridge, Ivybridge and Haswell.
Should be fixed by https://patchwork.kernel.org/patch/3021681/
(In reply to comment #9)
> Oh, there's a lockdep splat:
>
> [  107.008146] ======================================================
> [  107.008146] [ INFO: possible circular locking dependency detected ]
> [  107.008147] 3.12.0-rc4_nightlytop_55d077_debug_20131011_+ #733 Not tainted
> [  107.008148] -------------------------------------------------------
>
> This is a separate bug from the hang (which follows later on). Can you
> please file a new bug report for this issue?

Bug 70523 was reported for tracking this issue.
(In reply to comment #11)
> Should be fixed by https://patchwork.kernel.org/patch/3021681/

Verified it fixed the bug.
commit 8d6a7791a8b72dea5773271f23a1460e1eee27dd
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Oct 16 11:50:01 2013 +0100

    drm/i915: Disable all GEM timers and work on unload

    We have two once very similar functions, i915_gpu_idle() and
    i915_gem_idle(). The former is used as the lower level operation to
    flush work on the GPU, whereas the latter is the high level interface
    to flush the GEM bookkeeping in addition to flushing the GPU. As such
    i915_gem_idle() also clears out the request and activity lists and
    cancels the delayed work. This is what we need for unloading the
    driver; unfortunately we called i915_gpu_idle() instead.

    In the process, make sure that when cancelling the delayed work and
    timer, which is synchronous, we do not hold any locks, to prevent a
    deadlock if the work item is already waiting upon the mutex. This
    requires us to push the mutex down from the caller to i915_gem_idle().

    v2: s/i915_gem_idle/i915_gem_suspend/

    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70334
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Tested-by: xunx.fang@intel.com
    Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Verified. Fixed.
Closing old verified+fixed.