Installing nvidia-319 removes nvidia-304, but after that rotated displays no longer work correctly. Reverting to nvidia-304 resolves the issue. Setup: Ubuntu 13.10 with recent updates, no xorg-edgers PPA, xserver-xorg-video-intel from Git.
Please provide your Xorg.0.log so that I can be sure that you have intel-virtual-output from after

commit a55bbe3b598616ef4464e50cb9364c8cdf0b513a
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Fri Mar 14 15:47:20 2014 +0000

    intel-virtual-output: Disable panning before setting mode on CRTC

P.S. Thanks for filing a bug against -intel for an -nvidia bug! :-p
Well, I thought it was ivo behaving strangely with a newer nvidia driver, so... sorry for mis-reporting.

First, I forgot to run `git pull`. Now I find that ivo works for me up to and including "ecc20fb intel-virtual-output: Discard unwanted events from the mouse recorder", even on nvidia-304. After that commit I start getting issues when connecting a third display: window contents are not copied properly to the second (!) display.

How can I help find out what's going on?
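(For reference, one way I could narrow this down to a single commit, assuming ecc20fb really is the last good revision, would be a standard git bisect:)

git bisect start
git bisect bad HEAD          # current tip misbehaves
git bisect good ecc20fb      # last revision that worked here
# rebuild and retest intel-virtual-output at each step, then mark it
# with "git bisect good" or "git bisect bad" until git names the culprit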
In the Xorg.0.log it should say the SHA of the git version you are running. Please attach the Xorg.0.log to confirm everything compiled and installed correctly.
See log in https://bugs.freedesktop.org/show_bug.cgi?id=76269#c5.
Created attachment 96276 [details]
Xorg.0.log

Still occurs even with xserver-xorg-video-intel from Git. Log attached.
Ok, that should have the intel-virtual-output with the panning tweaks for nvidia.

Can you please paste the output of xrandr -d :0 and xrandr -d :8?

Also recompiling tools/virtual.c with

diff --git a/tools/virtual.c b/tools/virtual.c
index 5883950..c5316a2 100644
--- a/tools/virtual.c
+++ b/tools/virtual.c
@@ -65,7 +65,7 @@
 #include <fcntl.h>
 #include <assert.h>
 
-#if 0
+#if 1
 #define DBG(x) printf x
 #define EXTRA_DBG 1
 #else

and capturing the log with

./intel-virtual-output -f -b

would be very useful.

Indeed it would be sensible if you did check that i-v-o is being compiled. (At the bottom of the ./configure summary it should be listed amongst the tools to be installed.)
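(A possible sequence for that, assuming an already-configured xf86-video-intel checkout; "dbg.patch" and "ivo-debug.log" are just example names:)

git apply dbg.patch      # enable the DBG printfs shown above
make                     # rebuilds tools/intel-virtual-output
./tools/intel-virtual-output -f -b > ivo-debug.log 2>&1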
Oh, and be sure to remove the /usr/local/bin/intel-virtual-output that was installed earlier.
Created attachment 96344 [details]
xrandr -d :0
Created attachment 96345 [details]
xrandr -d :8
Created attachment 96346 [details]
Output of DBG-enabled intel-virtual-output (run from source directory)

The second (horizontally aligned) screen did not receive updates. I moved a window across the second screen to the third and back again; during that move it was invisible on the second screen.
Please note that the title of the bug is still misleading, but I can't seem to find out how to change it to "In a tri-head configuration on a laptop, one of the external screens is not repainted".
Hmm. DP-1 never sees any damage, which is very strange. I've tweaked the DBG to look at what happens when we record the damage. Please can you update and grab an updated DBG log. Thanks.
commit a273b207b94933713b3dfd7edd3f6bb9b3e959b9
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Tue Mar 25 09:40:40 2014 +0000

    intel-virtual-output: Fix damage iteration over active list

    When iterating over the active list to mark the current damage, we
    need to chase the ->active pointer rather than ->next or else we walk
    the wrong list from the wrong starting point.

    Reported-by: Kirill Müller <mail@kirill-mueller.de>
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76271
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
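To illustrate the class of bug the commit message describes, here is a minimal, self-contained C sketch; the struct and field names are simplified stand-ins, not the actual code from tools/virtual.c:

#include <stdio.h>

struct clone {
	const char *name;
	struct clone *next;	/* links ALL clones */
	struct clone *active;	/* links only the clones with pending damage */
};

int main(void)
{
	/* Global list (via ->next):   DP-1 -> VIRTUAL1 -> HDMI1
	 * Active list (via ->active): VIRTUAL1 -> DP-1 */
	struct clone hdmi = { "HDMI1", NULL, NULL };
	struct clone virt = { "VIRTUAL1", &hdmi, NULL };
	struct clone dp1 = { "DP-1", &virt, NULL };
	struct clone *c;

	virt.active = &dp1;

	/* Buggy walk: chasing ->next from the head of the active list
	 * strays onto the global list, so DP-1 is never visited and its
	 * damage is never marked. */
	for (c = &virt; c; c = c->next)
		printf("buggy walk visits %s\n", c->name);

	/* Fixed walk, per the commit: chase ->active instead. */
	for (c = &virt; c; c = c->active)
		printf("fixed walk visits %s\n", c->name);

	return 0;
}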
Thanks for the quick fix. Confirmed fixed on my production system, which uses the stock Intel drivers from Ubuntu. However, there is now heavy lag, e.g. between key press and screen update when typing text. The lag is smaller, but still visible, on the primary laptop display.
Make sure you undo the DBG, then tell me what "top" and "perf top" look like. At this point, it is more than likely that the pixel transfer is significant.

If it is the pixel transfer, you can try:

diff --git a/tools/virtual.c b/tools/virtual.c
index df92516..e0ecdae 100644
--- a/tools/virtual.c
+++ b/tools/virtual.c
@@ -74,6 +74,7 @@
 #endif
 
 #define FORCE_FULL_REDRAW 0
+#define FORCE_16BIT_XFER 1
 
 struct display {
 	Display *dpy;
@@ -1462,7 +1463,7 @@ static int clone_output_init(struct clone *clone, struct output *output,
 	DBG(("%s-%s use shm? %d (use shm pixmap? %d)\n",
 	     DisplayString(dpy), name, display->has_shm, display->has_shm_pixmap));
 
-	depth = output->use_shm ? display->depth : 16;
+	depth = output->use_shm && !FORCE_16BIT_XFER ? display->depth : 16;
 	if (depth < clone->depth)
 		clone->depth = depth;
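For scale: a 1920x1080 frame is about 8.3 MB at 32 bpp (1920 x 1080 x 4 bytes) and about 4.1 MB at 16 bpp, so if the copy really is the bottleneck, forcing 16-bit transfers should roughly halve the per-frame cost. (Back-of-envelope numbers, not measurements from this setup.)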
Also, setting Option "AllowSHMPixmaps" "true" for the :8 nvidia display may or may not help. I think it should; it just depends on the driver.
The DBG log with any of these settings + lag would be useful as well, as there may be a few clues (and I want to check that i-v-o responds appropriately to the differing settings).
top looks perfectly normal, Gnome Shell is at about 8%; below are the first few lines of perf top:

 25.91%  nvidia_drv.so            [.] 0x00000000000935a0
  5.51%  [drm]                    [k] drm_clflush_page
  3.29%  [vdso]                   [.] 0x0000000000000422
  2.74%  intel_drv.so             [.] 0x00000000000513c0
  1.92%  libv8.so                 [.] 0x0000000000242052
  1.53%  libglib-2.0.so.0.3800.1  [.] 0x000000000008986e

(Note: I'm running a low-latency kernel. No idea if it matters.)

The lag persists after turning on FORCE_16BIT_XFER.

Where do I set Option "AllowSHMPixmaps" "true"?

Compilation with DBG=1 gives me plenty of errors in 3310ee89c1f1a.
(In reply to comment #18)
> top looks perfectly normal, Gnome Shell is at about 8%; below are the first
> few lines of perf top:
>
>  25.91%  nvidia_drv.so            [.] 0x00000000000935a0
>   5.51%  [drm]                    [k] drm_clflush_page
>   3.29%  [vdso]                   [.] 0x0000000000000422
>   2.74%  intel_drv.so             [.] 0x00000000000513c0
>   1.92%  libv8.so                 [.] 0x0000000000242052
>   1.53%  libglib-2.0.so.0.3800.1  [.] 0x000000000008986e
>
> (Note: I'm running a low-latency kernel. No idea if it matters.)

Eek, nvidia, how could you. Just as disturbing is the drm_clflush_page, but that is likely to be due to DRI.

> The lag persists after turning on FORCE_16BIT_XFER.
>
> Where do I set Option "AllowSHMPixmaps" "true"?

There should be a bumblebee config file for setting what options to pass to the nvidia server; I think it is /etc/bumblebee/xorg.conf.nvidia. Add the Option to Section "Device".

> Compilation with DBG=1 gives me plenty of errors in 3310ee89c1f1a.

Fixed.
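For reference, the resulting section would look roughly like this (a sketch only; the Identifier and other lines depend on what your bumblebee package ships, "DiscreteNvidia" being a common default):

Section "Device"
    Identifier "DiscreteNvidia"
    Driver "nvidia"
    Option "AllowSHMPixmaps" "true"
EndSection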
Setting Option "AllowSHMPixmaps" "true" in /etc/bumblebee/xorg.conf.nvidia doesn't work for me: after rebooting, the remote displays get activated but stay blank when running intel-virtual-output, with both bf1875139817c5 and ecc20fb (the "stable" version I'm still using in production).
Can you please attach a debug log from i-v-o for an AllowSHMPixmaps session, so that I can check that it is not simply me doing something stupid?