Attempting to set the performance level to 15 on my laptop results in errors.

nouveau 0000:01:00.0: NVIDIA G96 (096500a1)
nouveau 0000:01:00.0: bios: version 62.94.3c.00.15
nouveau 0000:01:00.0: fb: 512 MiB GDDR3

The kernel module has options:

config=NvClkMode=15

That leads to errors:

nouveau 0000:01:00.0: clk: failed to raise voltage: -22
...
nouveau 0000:01:00.0: clk: error setting pstate 3: -22

After that the contents of /sys/kernel/debug/dri/0/pstate are:

03: core 169 MHz shader 338 MHz memory 100 MHz
05: core 275 MHz shader 550 MHz memory 300 MHz
07: core 400 MHz shader 800 MHz memory 300 MHz
0f: core 625 MHz shader 1563 MHz memory 800 MHz AC DC *
AC: core 275 MHz shader 550 MHz memory 799 MHz

Kernel log and VBIOS will follow.
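For reference, the usual way to pass that option is via a modprobe configuration file; a minimal sketch (the file name is arbitrary, and if nouveau were built into the kernel it would instead go on the kernel command line as nouveau.config=NvClkMode=15):

  # /etc/modprobe.d/nouveau.conf
  # Ask nouveau to select pstate 0x0f (decimal 15) at module load time.
  options nouveau config=NvClkMode=15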
Created attachment 129840 [details] VBIOS from /sys/kernel/debug/dri/0/vbios.rom
This is probably apparent to anyone investigating, but for your benefit: what's happening is that the memory reclock works fine, but changing the core/shader clocks requires a voltage adjustment, and that adjustment fails. So your core/shader clocks remain unchanged while the memory gets reclocked. Without looking at your VBIOS, I suspect that if you first clock to 07 and then to 0f, you'll get 07's core/shader speeds together with the faster memory clock.
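A sketch of that stepping via the debugfs file quoted above (needs root; uses the pstate IDs from the listing):

  echo 07 > /sys/kernel/debug/dri/0/pstate
  echo 0f > /sys/kernel/debug/dri/0/pstate
  cat /sys/kernel/debug/dri/0/pstate    # the AC line shows the resulting clocks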
Created attachment 129841 [details] dmesg
(In reply to Ilia Mirkin from comment #2)
> Without looking at your VBIOS, I suspect that if you first clock to 07 and
> then to 0f, you'll get 07's core/shader speeds together with the faster
> memory clock.

That's a good idea, and it indeed seems to be working. I ended up with:

03: core 169 MHz shader 338 MHz memory 100 MHz
05: core 275 MHz shader 550 MHz memory 300 MHz
07: core 400 MHz shader 800 MHz memory 300 MHz
0f: core 625 MHz shader 1563 MHz memory 800 MHz AC DC *
AC: core 400 MHz shader 800 MHz memory 799 MHz
Huh. Well, this is what's in the vbios:

Voltage table at 0xd0c9. Version 32.
Header:
0x0000d0c9: 20 06 02 04 00 01    mask = 0x1    2 entries
-- Vid = 0x0, voltage = 890000 µV --
-- GPIO tag 0x4(VID) data (logic 0) --
0x0000d0cf: 59 00 00 00
-- Vid = 0x1, voltage = 1050000 µV --
-- GPIO tag 0x4(VID) data (logic 1) --
0x0000d0d3: 69 01 00 00

-- ID 0x3 Core 169MHz Memory 100MHz Shader 338MHz Vdec 169MHz Dom6 208MHz Voltage 89[*10mV] Timing 0 Fan 100 PCIe link width 1 --
-- ID 0x5 Core 275MHz Memory 300MHz Shader 550MHz Vdec 275MHz Dom6 277MHz Voltage 89[*10mV] Timing 16 Fan 100 PCIe link width 16 --
-- ID 0x7 Core 400MHz Memory 300MHz Shader 800MHz Vdec 450MHz Dom6 416MHz Voltage 89[*10mV] Timing 0 Fan 100 PCIe link width 16 --
-- ID 0xf Core 625MHz Memory 800MHz Shader 1563MHz Vdec 450MHz Dom6 416MHz Voltage 116[*10mV] Timing 0 Fan 100 PCIe link width 16 --

Note that the 89 voltage (89 × 10 mV = 890 mV) is clearly covered by voltage entry 0, while the 116 voltage (1160 mV) is not available: the table tops out at 1050 mV, which is why raising the voltage fails with -EINVAL (-22). It'd be instructive to see what the blob does. If you have any mmiotrace (even a very old one) for this board, it'd be interesting to look at.
Thanks, I'll have a look, but don't hold your breath.
Actually, I found out that there is an old dump from this laptop, referred to in bug #60680:

http://people.freedesktop.org/~pq/mmiotrace-fdo-bug-60680.tar.xz

Does that help?
It would be quite unpleasant if nvidia did indeed use the highest clock level here. And that mmiotrace seems a bit odd: it accesses a GPIO which shouldn't exist, and in any case it doesn't change the voltage of that GPU. Would you mind checking what nvidia clocks to on this system?
(In reply to Karol Herbst from comment #8)
> It would be quite unpleasant if nvidia did indeed use the highest clock
> level here.

Do you mean noise or something else?

> And that mmiotrace seems a bit odd: it accesses a GPIO which shouldn't
> exist, and in any case it doesn't change the voltage of that GPU.

Yeah, I guessed it might not be enough, since it was recorded for HDMI enablement. Thanks for taking a look.

> Would you mind checking what nvidia clocks to on this system?

Yeah, it looks like I should be able to install the legacy 340.x drivers. I'll put it on my todo list to record a new mmiotrace with some OpenGL action or something else to push the clocks to the maximum.
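For anyone following along, recording such a trace follows the standard in-kernel mmiotrace procedure (a sketch per Documentation/trace/mmiotrace.rst; requires a kernel with CONFIG_MMIOTRACE and root):

  mount -t debugfs debugfs /sys/kernel/debug        # if not already mounted
  echo 128000 > /sys/kernel/debug/tracing/buffer_size_kb
  echo mmiotrace > /sys/kernel/debug/tracing/current_tracer
  cat /sys/kernel/debug/tracing/trace_pipe > mydump.txt &
  # ... load the nvidia module, start X, run the OpenGL workload ...
  echo nop > /sys/kernel/debug/tracing/current_tracer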
(In reply to Pekka Paalanen from comment #9)
> > It would be quite unpleasant if nvidia did indeed use the highest clock
> > level here.
>
> Do you mean noise or something else?

Hacky code.

> > Would you mind checking what nvidia clocks to on this system?
>
> Yeah, it looks like I should be able to install the legacy 340.x drivers.
> I'll put it on my todo list to record a new mmiotrace with some OpenGL
> action or something else to push the clocks to the maximum.

You can force the highest power state in nvidia-settings afaik; that way the mmiotrace stays rather small.
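Assuming the 340.xx legacy driver exposes the usual PowerMizer attributes (attribute names can differ between driver generations, so treat this as a sketch), forcing and checking the level would look something like:

  nvidia-settings -a '[gpu:0]/GPUPowerMizerMode=1'   # 1 = Prefer Maximum Performance
  nvidia-settings -q '[gpu:0]/GPUCurrentPerfLevel'   # query the active perf level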
But you can also run some OpenGL workload to confirm whether it actually uses the higher clocks.
Note to self: I need to rebuild my kernel without CONFIG_TRIM_UNUSED_KSYMS; otherwise the driver won't install.
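A sketch of that rebuild, run from the kernel source tree (scripts/config ships with the kernel source):

  scripts/config --disable TRIM_UNUSED_KSYMS
  make olddefconfig
  make -j$(nproc)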
It seems unlikely I will ever get around to making that mmiotrace. Should we close this report?
(In reply to Pekka Paalanen from comment #13)
> It seems unlikely I will ever get around to making that mmiotrace. Should
> we close this report?

Can you run the nvidia legacy driver and see what perf level is set?
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/driver/xf86-video-nouveau/issues/326.