Based on the RadeonFeature wiki page, RV700 power management is complete. It is indeed working great, keeping my laptop with an RV710 card silent and cool. However, it still runs 5-7°C warmer than with the ATI binary driver, which also means around 50 minutes less runtime on battery. Will this improve in the future, and can I do something to help improve it? I'm using the latest stable xf86-video-ati driver with the latest stable Linux kernel.
Do you use KMS? What commands did you execute to slow down your GPU?
Yes, I'm using KMS and this command: echo low > /sys/class/drm/card0/device/power_profile
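For anyone following along, the profile switch above can be wrapped in a small script; a minimal sketch, assuming a single radeon card at card0 with KMS enabled and debugfs mounted (the script only writes if the sysfs node actually exists and is writable):

```shell
#!/bin/sh
# Sketch: switch the radeon power profile and dump the current clocks.
# Assumes card0 is the radeon GPU; run as root so debugfs is readable.
PROFILE_NODE=/sys/class/drm/card0/device/power_profile
PM_INFO=/sys/kernel/debug/dri/0/radeon_pm_info

if [ -w "$PROFILE_NODE" ]; then
    echo low > "$PROFILE_NODE"     # valid values: default, auto, low, mid, high
    echo "profile: $(cat "$PROFILE_NODE")"
    cat "$PM_INFO"                 # current engine/memory clock, voltage, PCIE lanes
else
    echo "profile node not present or not writable (not a radeon KMS system?)"
fi
```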
What does cat /sys/kernel/debug/dri/0/radeon_pm_info say? IIRC it still doesn't reduce the number of PCIE lanes, does it?
With low profile:

default engine clock: 500000 kHz
current engine clock: 219370 kHz
default memory clock: 800000 kHz
current memory clock: 299250 kHz
voltage: 900 mV
PCIE lanes: 16

And with high profile:

default engine clock: 500000 kHz
current engine clock: 499500 kHz
default memory clock: 800000 kHz
current memory clock: 796500 kHz
voltage: 1200 mV
PCIE lanes: 16
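As an aside, the radeon_pm_info text is easy to parse if you want to log clocks over time. A small sketch; the field names follow the dumps quoted in this report, and newer kernels may format the file differently:

```python
# Sketch: parse radeon_pm_info text into a dict of numeric fields.
# Field names follow the dumps in this report; newer kernels may differ.
import re

def parse_pm_info(text):
    """Return e.g. {'current engine clock': (219370, 'kHz'), ...}."""
    fields = {}
    for line in text.splitlines():
        m = re.match(r'\s*(.+?):\s*(\d+)\s*(\S*)', line)
        if m:
            fields[m.group(1)] = (int(m.group(2)), m.group(3))
    return fields

low = parse_pm_info("""\
default engine clock: 500000 kHz
current engine clock: 219370 kHz
default memory clock: 800000 kHz
current memory clock: 299250 kHz
voltage: 900 mV
PCIE lanes: 16
""")
print(low['current engine clock'])   # (219370, 'kHz')
```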
I'm getting 8-10°C more with the latest open source driver compared to Catalyst, and approximately 25% less battery life. Using the low profile here. My values:

with low profile:

default engine clock: 680000 kHz
current engine clock: 109680 kHz
default memory clock: 800000 kHz
current memory clock: 249750 kHz
voltage: 950 mV
PCIE lanes: 16

with high profile:

default engine clock: 680000 kHz
current engine clock: 678370 kHz
default memory clock: 800000 kHz
current memory clock: 499500 kHz
voltage: 1200 mV
PCIE lanes: 16
Same here, with a Mobility Radeon HD 4850.

# echo high > /sys/class/drm/card0/device/power_profile
# cat /sys/kernel/debug/dri/0/radeon_pm_info
default engine clock: 500000 kHz
current engine clock: 500000 kHz
default memory clock: 850000 kHz
current memory clock: 850000 kHz
voltage: 1050 mV
PCIE lanes: 16

# echo low > /sys/class/drm/card0/device/power_profile
# cat /sys/kernel/debug/dri/0/radeon_pm_info
default engine clock: 500000 kHz
current engine clock: 300000 kHz
default memory clock: 850000 kHz
current memory clock: 250000 kHz
voltage: 1050 mV
PCIE lanes: 16

A second problem is that GPU voltage scaling does not work: the voltage stays at 1050 mV in both profiles.
Power usage is still the same with 2.6.38 rc kernels.
It's awful. That's still the reason so many people don't move to the radeon driver. This should definitely be a priority. 2.6.38, RV710. With profile low:

default engine clock: 500000 kHz
current engine clock: 219370 kHz
default memory clock: 800000 kHz
current memory clock: 299250 kHz
voltage: 900 mV
PCIE lanes: 16
(In reply to comment #4)
> With low profile:
>
> default engine clock: 500000 kHz
> current engine clock: 219370 kHz
> default memory clock: 800000 kHz
> current memory clock: 299250 kHz
> voltage: 900 mV
> PCIE lanes: 16

János Illés: in your case we downclock the GPU and reduce the voltage. I don't know what else we need to achieve Catalyst's level of efficiency. You may see http://www.botchco.com/agd5f/?p=45

Alex: could this be a matter of a slower pixel clock? Or what else?
> Don't know what we really need to achieve Catalyst's level of efficiency.

I think PCIe lane adjustment is the only thing missing.
(In reply to comment #10)
> I think PCI Lane adjusment is the only one missing.

The code to adjust PCIe lanes is already available, but in practice no r6xx+ ASICs specify fewer than 16 lanes. The low profile should be pretty close to the closed driver at idle. I've been digging into the power management code in more depth recently, so hopefully we should have some improvements in the not too distant future.
(In reply to comment #10)
> > Don't know what we really need to achieve Catalyst's level of efficiency.
>
> I think PCI Lane adjusment is the only one missing.

With the quoted reduction in runtime, I doubt that would make much difference. It sounds more like an issue with clock gating. Maybe the driver does something that prevents shutting down all the blocks it could? dynpm, though, doesn't really work; at least in my case I never saw a downclock, not even once. But when you force the low profile that apparently shouldn't matter (not that forcing it works too well either; even with a single display I've never seen a reclock happen without flicker).
What is preventing dynpm from being used on multihead displays? Is it still the flickering bug explained in the Radeon features page?
(In reply to comment #13)
> What is preventing dynpm from being used on multihead displays? Is it still
> the flickering bug explained in the Radeon features page?

The clocks need to be changed in the vertical blanking period of the display. It's almost impossible to get the blanking periods to line up across multiple displays.
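To put rough numbers on why the blanking periods of free-running displays almost never line up: the vblank window is well under a millisecond per ~16.7 ms frame. A back-of-the-envelope sketch, using the standard DMT 1024x768@60 timings as an illustrative assumption (the reporters' actual modes are not stated in this report):

```python
# Back-of-the-envelope: how small is the vblank window, and how rarely
# would two unsynchronized displays blank at the same instant?
# Mode timings are the standard DMT 1024x768@60 values (assumption for
# illustration only).
pixel_clock_hz = 65_000_000
htotal, vtotal = 1344, 806      # total pixels per line / total lines per frame
vactive = 768                   # visible lines

frame_s = htotal * vtotal / pixel_clock_hz
vblank_s = htotal * (vtotal - vactive) / pixel_clock_hz
vblank_frac = (vtotal - vactive) / vtotal

print(f"frame period: {frame_s*1e3:.2f} ms")              # ~16.67 ms
print(f"vblank window: {vblank_s*1e6:.0f} us")            # ~786 us
print(f"fraction of frame in vblank: {vblank_frac:.1%}")  # ~4.7%
# For two free-running displays, both are blanking at a given instant
# with probability roughly vblank_frac squared:
print(f"both blanking at once: {vblank_frac**2:.2%}")     # ~0.22%
```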
Any progress?
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/driver/xf86-video-ati/issues/13.