Summary: | [NV44] dual-link tmds no longer allowed | |
---|---|---|---
Product: | xorg | Reporter: | monnier
Component: | Driver/nouveau | Assignee: | Nouveau Project <nouveau>
Status: | RESOLVED MOVED | QA Contact: | Xorg Project Team <xorg-team>
Severity: | normal | |
Priority: | medium | CC: | currojerez, hramrach, mauromol, ossi
Version: | unspecified | |
Hardware: | x86 (IA32) | |
OS: | Linux (All) | |
Attachments:
Created attachment 51020 [details]
Successful boot with 2.6.32-5
Created attachment 51021 [details]
Half-successful boot with 3.0.0-1
The 2.6.32 kernel allowed the higher resolution because it didn't take hardware limitations into account - the NV44's TMDS maximum pixel clock is 155MHz, and 1600x1200 needs 162MHz. Your card is special if it worked before this 2.6.37 commit: http://cgit.freedesktop.org/nouveau/linux-2.6/commit/?id=1f5bd44354c878cf8bb0e28a7cb27677e3640c45

> 2.6.32 kernel allowed to use higher resolution because it didn't take into
> account hardware limitations - NV44's tmds maximum pixel clock is 155MHz and
> 1600x1200 needs 162MHz.
Hmm... I must admit I had no idea of such a limitation. And I've had this machine for 5 years or so, now. I've always used it at 1600x1200. At the beginning, I used it at that resolution via VGA rather than DVI (not sure if that tmds limit applies to VGA), but the nvidia driver was also happy to set it to 1600x1200 over DVI (tho the nv driver always limited itself to 1280x1024 over DVI, which is why I used VGA until nouveau matured). And ever since nouveau's kernel module started to work, I've used it for 1600x1200 over DVI.
I must say I'm shocked that a graphics card of the 21st century would not be able to go up to 1600x1200. Hell, my old Matrox Millennium II went up to 1800x1400 without complaining, more than 15 years ago. Are you *really* sure of this 155 limit?
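[Editorial note: the 155MHz vs. 162MHz numbers disputed here follow directly from timing arithmetic: the dot clock is htotal × vtotal × refresh rate. A quick sanity check, as a sketch - the timings below are the standard-blanking 1600x1200@60 mode (htotal 2160, vtotal 1250) and the CVT reduced-blanking variant (htotal 1760, vtotal 1233) that both come up later in this report:]

```python
# Dot clock = horizontal total * vertical total * vertical refresh.
# Timings: standard CVT 1600x1200@60 vs. its reduced-blanking variant;
# both sets of numbers appear in modelines quoted in this report.

def pixel_clock_mhz(htotal, vtotal, refresh_hz):
    """Return the dot clock in MHz for the given mode timings."""
    return htotal * vtotal * refresh_hz / 1e6

print(pixel_clock_mhz(2160, 1250, 60))  # standard blanking: 162.0 MHz
print(pixel_clock_mhz(1760, 1233, 60))  # reduced blanking: ~130.2 MHz
```

The reduced-blanking mode lands comfortably under a 155MHz single-link limit, which is why it matters so much in this thread.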
So I guess I have 2 options:
- find a way to convince nouveau to go past this limit. Could we get an option
to get back the pre-2.6.37 behavior of limiting the tmds to 165?
- find a way to tell nouveau to use a modeline which gives me 1600x1200 but
without the annoying "non optimal mode" warning on my monitor.
Is there a way to get finer control on the modeline than "video=NNNxMMM"?
BTW, the 3.0.0-1 log shows that providing a "video=1600x1200" argument convinces nouveau to use 1600x1200, and while it doesn't use the native modeline (at 162MHz), it does use a modeline at more than 155MHz:
[ 7.316860] [drm:drm_mode_debug_printmodeline], Modeline 48:"1600x1200" 0 160961 1600 1704 1880 2160 1200 1201 1204 1242 0x0 0x6
Not sure where that modeline comes from, nor why nouveau seems to ignore the 155MHz limit for it.
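[Editorial note: the fields of that debug line can be decoded to see why it trips the limit. A sketch, assuming the usual drm_mode_debug_printmodeline layout of that era: id:"name", vrefresh, clock in kHz, then the horizontal and vertical timings:]

```python
# Decode the DRM debug modeline quoted above. Assumed field layout:
# id:"name" vrefresh clock_khz hdisplay hsync_start hsync_end htotal
#                              vdisplay vsync_start vsync_end vtotal
line = '48:"1600x1200" 0 160961 1600 1704 1880 2160 1200 1201 1204 1242'
f = line.split()
clock_khz = int(f[2])
htotal, vtotal = int(f[6]), int(f[10])

print(clock_khz / 1000.0)                             # ~160.96 MHz, over 155
print(round(clock_khz * 1000.0 / (htotal * vtotal)))  # implied refresh: ~60 Hz
```

So the mode nouveau accepted here runs at roughly 161MHz, above the 155MHz single-link limit it enforces elsewhere, which supports the "bug in the common KMS layer" suspicion below.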
This limit applies only to single-link DVI-D. Any other connector, or dual-link DVI-D, has a much higher pixel clock limit. We could add an option to override this, but let's check other possibilities first.

1) Can you experiment a bit with the "video" option? There is some documentation at: http://nouveau.freedesktop.org/wiki/KernelModeSetting#ForcingModes
2) Please attach the Xorg.log from the nvidia driver. I would like to see the pixel clock used.

(Adding Francisco to CC, he knows much more about modesetting than me)

(In reply to comment #4)
> > 2.6.32 kernel allowed to use higher resolution because it didn't take into
> > account hardware limitations - NV44's tmds maximum pixel clock is 155MHz and
> > 1600x1200 needs 162MHz.
>
> Hmm... I must admit I had no idea of such a limitation. And I've had this
> machine for 5 years or so, now. I've always used it at 1600x1200. At the
> beginning, I used it at that resolution via VGA rather than DVI (not sure if
> that tmds limit applies to VGA),

No, as Marcin said, that limit only applies to your TMDS encoder.

> but the nvidia driver was also happy to set it to 1600x1200 over DVI

The nvidia binary driver probably doesn't use your monitor's native mode either, but rather a reduced-blanking mode of the same resolution. To make nouveau do the same for the kernel framebuffer, use something like "video=DVI-D-1:1600x1200MR".

> (tho the nv driver always limited itself to 1280x1024
> over DVI, which is why I used VGA until nouveau matured). And ever since
> nouveau's kernel module started to work, I've used it for 1600x1200 over DVI.
>
> I must say I'm shocked that a graphics card of the 21st century would not be
> able to go up to 1600x1200. Are you *really* sure of this 155 limit?

Pretty much. If you want to be 100% sure, just try to get the nvidia blob to set a mode over the limit.
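[Editorial note: for reference, the mode-forcing syntax being discussed takes roughly these forms on the kernel command line, per the wiki page linked above; "DVI-D-1" is an example connector name and must match the one your kernel actually reports:]

```text
video=DVI-D-1:1600x1200      # force a plain 1600x1200 mode on that connector
video=DVI-D-1:1600x1200MR    # M = CVT-computed timings, R = reduced blanking
video=DVI-D-1:1600x1200@55   # request a 55Hz refresh rate
```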
> So I guess I have 2 options:
> - find a way to convince nouveau to go past this limit. Could we get an option
> to get back the pre-2.6.37 behavior of limiting the tmds to 165?
> - find a way to tell nouveau to use a modeline which gives me 1600x1200 but
> without the annoying "non optimal mode" warning on my monitor.
> Is there a way to get finer control on the modeline than "video=NNNxMMM"?
>
> BTW, the 3.0.0-1 log shows that providing a "video=1600x1200" argument
> convinces nouveau to use 1600x1200 and while it doesn't use the native modeline
> (at 162MHz) it does use a modeline at more than 155MHz:
>
> [ 7.316860] [drm:drm_mode_debug_printmodeline], Modeline 48:"1600x1200" 0
> 160961 1600 1704 1880 2160 1200 1201 1204 1242 0x0 0x6
>
> Not sure where that modeline comes from, nor why nouveau seems to ignore the
> 155MHz limit for it.

The fact that it's ignored is probably a bug in the common KMS layer.

So, I tried to play with the video= parameter and I now have an acceptable workaround with "video=1600x1200@55", which gives me the right resolution without any complaint from my monitor.

I tried the "M" and "R" thingies, as in "1600x1200MR" or "1600x1200R", but these seem to be ignored, or at least didn't make much difference (actually the "R" does the same as when nothing is specified, and the "MR" gives a marginally higher pixel clock). I expected the "R" to give a much more substantial reduction in dotclock, or maybe the R is somehow ignored in the "MR" combination ("cvt" gives me the same modeline as I got with 1600x1200MR, at 161MHz, whereas "cvt --reduced" drops the dotclock to 130MHz).

I haven't tried the Xorg nouveau driver on top to see whether it obeys my "video=" arg or whether I'm going to have to play the same dance in the xorg.conf file (because right now the Xorg driver gives me the familiar "error opening the drm"). But at least using the fbdev driver, this gives me a good workaround.
I see only two ways to improve the situation (modulo the handling of "R" mentioned above):
- provide a way to override the 155MHz limit (apparently, the hardware does not
prevent overclocking, and at least in my case, it handles such overclocking
without blinking).
- when the monitor's preferred modeline can't be used because of dotclock
limits, output a clear message in dmesg about it, and cook up a different
modeline that preserves the resolution (e.g. at the cost of a lower refresh
rate). This is because on LCD displays, using the native resolution is a lot
more important than using a high enough refresh rate.

Created attachment 51034 [details]
Half-successful boot with "video=1600x1200MR" with Debian's 3.0.0-1
Created attachment 51035 [details]
Half-successful boot with "video=1600x1200R" with Debian's 3.0.0-1
(In reply to comment #7)
> So, I tried to play with the video= parameter and I now have an acceptable
> workaround with "video=1600x1200@55" which gives me the right resolution
> without any complaint from my monitor.
> I tried the "M" and the "R" thingies as in "1600x1200MR" or "1600x1200R" but
> these seem to be ignored, or at least didn't make much difference (actually the
> "R" does the same as when nothing is specified and the "MR" gives a marginally
> higher pixel clock). I expected the "R" to give a much more substantial
> reduction in dotclock, or maybe the R is somehow ignored in the "MR"
> combination ("cvt" gives me the same modeline as I got with 1600x1200MR, at
> 161MHz, whereas "cvt --reduced" drops the dotclock to 130MHz).

That's a bug of the kernel command line parser... try "RM" instead of "MR".

> I haven't tried the Xorg nouveau driver on top to see whether it obeys my
> "video=" arg or whether I'm going to have to play the same dance in the
> xorg.conf file (because right now the Xorg driver gives me the familiar "error
> opening the drm"). But at least using the fbdev driver, this gives me a good
> workaround.

It doesn't; right now you need to tell X to use a reduced blanking mode separately.

> I see only two ways to improve the situation (modulo the handling of "R"
> mentioned above):
> - provide a way to override the 155MHz limit (apparently, the hardware does not
> prevent overclocking and at least in my case, it handles such overclocking
> without blinking).
> - when the monitor's preferred modeline can't be used because of dotclock
> limits, output a clear message in the dmesg about it, and cook up a different
> modeline that preserves the resolution (e.g. at the cost of lower refresh
> rate).

The latter would be a better idea IMHO.

Created attachment 52336 [details]
dmesg log for X start failure
Created attachment 52337 [details]
Xorg.log for X start failure
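[Editorial note: the "RM instead of MR" workaround suggested in comment #10 implies the kernel's mode-string parser recognizes the trailing flag characters in one fixed order only. A toy illustration of that failure shape - this is not the actual kernel parser, just a sketch of the behavior reported in this thread:]

```python
# Toy sketch (NOT the real kernel code) of a fixed-order flag scan that
# accepts "1600x1200RM" fully but silently drops the "R" in "1600x1200MR".
def parse_mode(spec):
    # Split the leading <width>x<height> digits off the flag suffix.
    i = 0
    while i < len(spec) and (spec[i].isdigit() or spec[i] == "x"):
        i += 1
    size, flags = spec[:i], spec[i:]
    w, h = map(int, size.split("x"))
    cvt = reduced = False
    # Fixed-order scan: this toy only recognizes "R" before "M", so
    # "RM" sets both flags while "MR" loses the reduced-blanking one.
    if flags.startswith("R"):
        reduced = True
        flags = flags[1:]
    if flags.startswith("M"):
        cvt = True
        flags = flags[1:]
    # Any leftover flag characters are silently ignored, as observed.
    return w, h, cvt, reduced

print(parse_mode("1600x1200RM"))  # (1600, 1200, True, True)
print(parse_mode("1600x1200MR"))  # (1600, 1200, True, False) - "R" dropped
```

This matches the symptom reported above: "MR" still yields a CVT mode (hence the marginally different pixel clock), but without the reduced blanking that would bring it under the TMDS limit.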
(In reply to comment #10)
> That's a bug of the kernel command line parser... try "RM" instead of "MR".

That does the trick, thank you (tho I still end up having to use the @55, because the modeline used by 1600x1200RM is still flagged by my monitor as "non optimal").

> > I haven't tried the Xorg nouveau driver on top to see whether it obeys my
> > "video=" arg or whether I'm going to have to play the same dance in the
> > xorg.conf file (because right now the Xorg driver gives me the familiar
> It doesn't, right now you need to tell X to use a reduced blanking mode
> separately.

OK, since the Xorg nouveau driver now lets me start, I could try it out, and indeed it gives me 1280x960 by default. Problem is, if I do

    xrandr --newmode "1600x1200" 119.0 1600 1648 1680 1760 1200 1203 1207 1232
    xrandr --addmode "DVI-D-1" "1600x1200"
    xrandr --output "DVI-D-1" --mode "1600x1200"

I just get an X protocol error on the xrandr side and the following message on the xinit output:

    resize called 1600 1200
    (EE) NOUVEAU(0): Couldn't allocate shadow memory for rotated CRTC

And if I try to add the modeline directly to the xorg.conf:

    Section "Device"
        Identifier "NVidia card"
        Driver "nouveau"
        Option "Monitor-DVI-D-1" "Samsung 214T"
    EndSection

    Section "Monitor"
        Identifier "Samsung 214T"
        ModeLine "1600x1200RM@55" 119.0 1600 1648 1680 1760 1200 1203 1207 1232
    EndSection

the X server fails to start, with "Error allocating scanout buffer: 0" (commenting out the ModeLine lets it start again, using 1280x960). I've attached the corresponding section of dmesg.log as well as the Xorg.log.

> The latter would be a better idea IMHO.

In any case, I hope that the problem generates a clear enough warning to help track it down.

Ping?

It appears that this bug report has lain dormant for quite a while. Sorry we haven't gotten to it. Since we fix bugs all the time, chances are pretty good that your issue has been fixed with the latest software. Please give it a shot.
(Linux kernel 3.10.7, xf86-video-nouveau 1.0.9, mesa 9.1.6, or their git versions.)

If upgrading to the latest isn't an option for you, your distro's bugzilla is probably the right destination for your bug report.

In an effort to clean up our bug list, we're pre-emptively closing all bugs that haven't seen updates since 2011. If the original issue remains, please make sure to provide fresh info, see http://nouveau.freedesktop.org/wiki/Bugs/ for what we need to see, and re-open this one.

Thanks,
The Nouveau Team

the bug persists without change. it's easy enough to fix the console with video=DVI-I-1:1600x1200, but Xorg just won't cooperate (apparently, it even simply ignores a Modeline).

Created attachment 92765 [details]
another Xorg.0.log with bogus resolution
note that after the VT switch, there is suddenly a "Printing DDC gathered Modelines" section, which indicates that some 1600x1200 mode was at least found.
I don't think there's any issue detecting the 1600x1200 mode. All your earlier logs appear to indicate it's detected correctly. The issue is that your card is not being detected as dual-link-capable, and the 1600x1200 mode without reduced blanking won't work on it.

I have a NV44 and a NV42 card, and neither has had trouble getting 1920x1200 going. However, an NV34 card can't go beyond 1600x1200 on digital (TMDS).

Could you upload a copy of your vbios (/sys/kernel/debug/dri/0/vbios.rom)?

Also, can you try hacking up your kernel? See nouveau_connector.c:nouveau_connector_mode_valid:

    if (nouveau_duallink && nv_encoder->dcb->duallink_possible)
        max_clock *= 2;

Just get rid of the if statement so that the max_clock *= 2 is always done.

Perhaps the nv_device(drm->device)->chipset > 0x46 is wrong in get_tmds_link_bandwidth and it should actually be >= 0x44... although that feels more like a difference between geforce 6- and 7-series. (But then it picks up the igp's as well as the nv4a, which is supposed to be just like the nv44...)

I might be misremembering about 1600x1200 working on my NV34 actually... maybe it was 1680x1050 that worked. I just flipped it to VGA, which could handle 1920x1200 just fine. Here are my modes:

    1920x1200 (0x64)  154.0MHz +HSync -VSync *current +preferred
          h: width  1920 start 1968 end 2000 total 2080 skew    0 clock   74.0KHz
          v: height 1200 start 1203 end 1209 total 1235           clock   60.0Hz
    1600x1200 (0x65)  162.0MHz +HSync +VSync
          h: width  1600 start 1664 end 1856 total 2160 skew    0 clock   75.0KHz
          v: height 1200 start 1201 end 1204 total 1250           clock   60.0Hz
    1680x1050 (0x66)  119.0MHz +HSync -VSync
          h: width  1680 start 1728 end 1760 total 1840 skew    0 clock   64.7KHz
          v: height 1050 start 1053 end 1059 total 1080           clock   59.9Hz

Note how 1920x1200 is under the 155MHz limit of NV4x single-link TMDS chips. (And the nv34 has a 135MHz limit, so it can't do 1920x1200 either.)
If this worked on the blob, I wonder if it was really using TMDS for outputting the 1600x1200 resolution, and not the VGA (which should be able to handle it just fine).

Still, it would be interesting to see what happens if you just make it claim that you have dual-link -- it could well be that nouveau's detection of it is incorrect, or perhaps the TMDS on your card can do 165MHz and not 155MHz.

huh?

i said that it is *easy* to convince the kernel to pick the right mode. it's picking a wrong one only automatically. there is no hardware limitation at all here.

X appears more tenacious, apparently because it filters out the good mode earlier in the process. it's beyond me why it does that - it makes no sense at all, and the log doesn't indicate anything afaics. this also puts a weird twist on the meaning of _K_MS ...

also note that i'm a different person than the OP. i just happen to have pretty much the same configuration (card and monitor) and consequently the same problem.

(In reply to comment #20)
> huh?
> i said that it is *easy* to convince the kernel to pick the right mode. it's
> picking a wrong one only automatically.

Really? It sounds to me like you managed to fool the drm mode-verification logic (or nouveau's is implemented incorrectly). If it was merely picking the wrong one, the 1600x1200 one would be in the mode list, and it is not.

> there is no hardware limitation at all here.

Maybe, maybe not. Should be fairly easy to check. One way to do that would be to try the thing I said. Or to change the 165MHz check to be >= 0x44 instead of 0x46.

> X appears more tenacious, apparently because it filters out the good mode
> earlier in the process. it's beyond me why it does that - it makes no sense

It just gets the list from the kernel. What's in /sys/class/drm/card0-DVI-I-1/modes (or whatever the connector is)? The thing you (or someone else) saw with X seeing the 1600x1200 mode was it reading it from DDC.
But I'm pretty sure it doesn't care about that -- X doesn't handle modesetting, the kernel does, and it uses its mode list.

many reboots later:
- yes, setting the resolution via the kernel command line clearly sets a mode the kernel considers non-existent in another place
- i'm pretty sure it was using the digital channel of the dvi connector - i think sysfs said DVI-D, the monitor said it (dunno what it would say if it got analog on the dvi port), and it didn't do the usual geometry adjustment when analog input is used
- yes, hacking away the duallink check fixes it
- i have no clue how to compile the kernel / which switch to set to get the vbios.rom to pop up in the tree

unrelated to the picked resolution, nouveau often locks up the gpu. but that's for another report ...

This RHEL bug report is related: https://bugzilla.redhat.com/show_bug.cgi?id=681257

The NVIDIA binary driver seems to use reduced-blanking DVI modes by default on this chipset, which allows the 1600x1200 resolution to be usable with single-link DVI. Nouveau does not do this, which leaves you stuck with a lower resolution. The spec sheets for the card in question there (Quadro NVS 285) list that it supports 1920x1200@60Hz on DVI. The binary driver reports 155 MHz as the maximum TMDS pixel clock.

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/driver/xf86-video-nouveau/issues/21.
Created attachment 51019 [details]
Problematic boot with vanilla 3.0.4

Up until a while ago, the nouveau kernel module allowed me to move my 1600x1200 monitor from the VGA port to the DVI port of my card (before that I used the `nv' X driver, which wasn't able to set up the DVI output with the needed resolution). So all was well. But starting a while ago (not sure when), the nouveau driver started to fail to use my monitor's native resolution (it uses 1280x960 rather than 1600x1200).

I'm usually using Debian-built kernels and saw the difference somewhere between Debian's 2.6.32-5 and 2.6.39-2. To make sure the problem is not on Debian's side, I compiled my own vanilla 3.0.4 kernel (with the same .config as Debian's), which showed the same problem.

Debian maintainers told me (see debian bug#631582) to send you the dmesg log with drm.debug=6, so here are three such boot logs: the successful one with Debian's 2.6.32-5, the unsuccessful one with vanilla 3.0.4, and finally the half-successful one with Debian's 3.0.0-1 combined with a "video=1600x1200" argument, which gives me the right resolution (tho with a warning from my monitor that I'm using a "non optimal" mode, and my monitor (Samsung 214T) seems to think I'm using a 1920x1200 mode for some odd reason).