Bug 31000

Summary: [rv700] powerplay efficiency
Product: xorg
Reporter: János Illés <ijanos>
Component: Driver/Radeon
Assignee: xf86-video-ati maintainers <xorg-driver-ati>
Status: RESOLVED MOVED
QA Contact: Xorg Project Team <xorg-team>
Severity: minor
Priority: medium
CC: b.bellec, bugs.freedesktop, carsten, freedesktop, kinuris, vladimir, yimboka
Version: 7.5 (2009.10)
Hardware: x86 (IA32)
OS: Linux (All)

Description János Illés 2010-10-20 04:20:15 UTC
Based on the RadeonFeature wiki page, RV700 power management is complete. It is indeed working great, keeping my laptop with an RV710 card silent and cool. However, it is still 5-7°C warmer than with the ATI binary driver, which also means around 50 minutes less battery life.

Will this improve in the future, and can I do something to help improve it?

I’m using the latest stable xf86-video-ati driver with latest stable Linux kernel.
Comment 1 Rafał Miłecki 2010-11-07 02:28:06 UTC
Do you use KMS? What commands did you execute to slow down your GPU?
Comment 2 János Illés 2010-11-07 02:31:43 UTC
Yes, I'm using KMS and this command:

 echo low > /sys/class/drm/card0/device/power_profile
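For anyone reproducing this, the profile value only takes effect once the static-profile method is selected. A minimal sketch of the standard radeon KMS sysfs interface of this era (assumes a single radeon card at card0, debugfs mounted, run as root; the path parameter is only there so the function can be exercised without the hardware):

```shell
# Force the "low" power profile via the radeon KMS sysfs interface.
# Assumes a single radeon card at card0; radeon_pm_info lives in debugfs,
# so debugfs must be mounted and the commands run as root.
set_low_profile() {
    root="${1:-/sys}"
    profile="$root/class/drm/card0/device/power_profile"
    method="$root/class/drm/card0/device/power_method"

    if [ ! -e "$profile" ]; then
        # No radeon KMS card0 on this machine; nothing to do.
        echo "power_profile interface not present"
        return 0
    fi

    echo profile > "$method"   # static profiles rather than dynpm
    echo low > "$profile"      # lowest clocks/voltage from the BIOS tables
    cat "$root/kernel/debug/dri/0/radeon_pm_info"
}

set_low_profile /sys
```

On these kernels, `dynpm` is the other accepted `power_method` value, and the profile names are `default`, `auto`, `low`, `mid`, and `high`.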
Comment 3 Ernst Sjöstrand 2010-11-07 12:38:07 UTC
What does
cat /sys/kernel/debug/dri/0/radeon_pm_info
say?
IIRC it still doesn't reduce the number of PCIE lanes, does it?
Comment 4 János Illés 2010-11-07 14:20:35 UTC
With low profile:

 default engine clock: 500000 kHz
 current engine clock: 219370 kHz
 default memory clock: 800000 kHz
 current memory clock: 299250 kHz
 voltage: 900 mV
 PCIE lanes: 16

And with high profile:

 default engine clock: 500000 kHz
 current engine clock: 499500 kHz
 default memory clock: 800000 kHz
 current memory clock: 796500 kHz
 voltage: 1200 mV
 PCIE lanes: 16
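To put those numbers side by side, a small awk sketch over the pm_info text (sample values copied from the low-profile output above; on a live system, pipe in `cat /sys/kernel/debug/dri/0/radeon_pm_info` instead) shows how far the low profile actually downclocks:

```shell
# Compute the low profile's clocks as a percentage of the defaults,
# from radeon_pm_info output. Sample data copied from this comment.
pm_info='default engine clock: 500000 kHz
current engine clock: 219370 kHz
default memory clock: 800000 kHz
current memory clock: 299250 kHz
voltage: 900 mV
PCIE lanes: 16'

echo "$pm_info" | awk '
/default engine clock/ { de = $4 }
/current engine clock/ { ce = $4 }
/default memory clock/ { dm = $4 }
/current memory clock/ { cm = $4 }
END {
    printf "engine: %.0f%% of default\n", 100 * ce / de
    printf "memory: %.0f%% of default\n", 100 * cm / dm
}'
# engine: 44% of default
# memory: 37% of default
```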
Comment 5 Mike 2010-11-24 10:15:58 UTC
I'm getting 8-10°C more with the latest open source driver compared to Catalyst, and approximately 25% less battery life. Using the low profile here.

My values:

with low profile

default engine clock: 680000 kHz
current engine clock: 109680 kHz
default memory clock: 800000 kHz
current memory clock: 249750 kHz
voltage: 950 mV
PCIE lanes: 16

with high profile

default engine clock: 680000 kHz
current engine clock: 678370 kHz
default memory clock: 800000 kHz
current memory clock: 499500 kHz
voltage: 1200 mV
PCIE lanes: 16
Comment 6 yimm 2010-12-13 02:14:28 UTC
Same here, with 4850 mobility.

# echo high > /sys/class/drm/card0/device/power_profile
# cat /sys/kernel/debug/dri/0/radeon_pm_info
default engine clock: 500000 kHz
current engine clock: 500000 kHz
default memory clock: 850000 kHz
current memory clock: 850000 kHz
voltage: 1050 mV
PCIE lanes: 16

# echo low > /sys/class/drm/card0/device/power_profile
# cat /sys/kernel/debug/dri/0/radeon_pm_info
default engine clock: 500000 kHz
current engine clock: 300000 kHz
default memory clock: 850000 kHz
current memory clock: 250000 kHz
voltage: 1050 mV
PCIE lanes: 16

And the second problem is that the GPU voltage is not reduced (it stays at 1050 mV in both profiles).
Comment 7 János Illés 2011-03-07 00:48:18 UTC
Power usage is still the same with 2.6.38 rc kernels.
Comment 8 carsten 2011-03-28 13:35:00 UTC
It's awful. This is still the reason so many people don't move to the radeon driver. It should definitely be a priority.

2.6.38, RV710.

With profile low:
default engine clock: 500000 kHz
current engine clock: 219370 kHz
default memory clock: 800000 kHz
current memory clock: 299250 kHz
voltage: 900 mV
PCIE lanes: 16
Comment 9 Rafał Miłecki 2011-03-28 13:59:25 UTC
(In reply to comment #4)
> With low profile:
> 
>  default engine clock: 500000 kHz
>  current engine clock: 219370 kHz
>  default memory clock: 800000 kHz
>  current memory clock: 299250 kHz
>  voltage: 900 mV
>  PCIE lanes: 16

János Illés: in your case we do downclock the GPU and reduce the voltage. I don't know what else we need to reach Catalyst's level of efficiency. You may want to read http://www.botchco.com/agd5f/?p=45

Alex: could this be a matter of a slower pixel clock? Or something else?
Comment 10 János Illés 2011-03-29 03:04:06 UTC
>Don't know what we really need to achieve Catalyst's level of efficiency.

I think PCIe lane adjustment is the only thing missing.
Comment 11 Alex Deucher 2011-03-29 07:25:04 UTC
(In reply to comment #10)
> 
> I think PCIe lane adjustment is the only thing missing.

The code to adjust PCIe lanes is already available, but in practice no r6xx+ ASICs specify fewer than 16 lanes. The low profile should be pretty close to the closed driver at idle. I've been digging into the power management code in more depth recently, so hopefully we'll see some improvements in the not-too-distant future.
Comment 12 Roland Scheidegger 2011-03-29 09:49:43 UTC
(In reply to comment #10)
> >Don't know what we really need to achieve Catalyst's level of efficiency.
> 
> I think PCIe lane adjustment is the only thing missing.

Given the quoted reduction in runtime, I doubt that would make much of a difference.
It sounds more like a clock-gating issue. Maybe the driver does something that prevents shutting down all the blocks it could?
dynpm doesn't really work, though; at least in my case I never saw a single downclock. But when you force the low profile, that apparently shouldn't matter (not that forcing works too well either; even with one display, I've never seen the switch happen without flicker).
Comment 13 Vladimir Lushnikov 2011-04-17 15:47:16 UTC
What is preventing dynpm from being used on multihead setups? Is it still the flickering bug explained on the RadeonFeature wiki page?
Comment 14 Alex Deucher 2011-04-18 07:58:46 UTC
(In reply to comment #13)
> What is preventing dynpm from being used on multihead displays? Is it still the
> flickering bug explained in the Radeon features page?

The clocks need to be changed in the vertical blanking period of the display.  It's almost impossible to get the blanking periods to line up across multiple displays.
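A back-of-the-envelope illustration of why this is hard, using assumed typical timings (the real vblank length comes from each display's modeline, so these numbers are only indicative): each free-running display is in vblank only a small fraction of the time, and for two unsynchronized displays the joint window is roughly the product of those fractions:

```shell
# Rough estimate of how rarely two unsynchronized ~60 Hz displays are
# in vertical blanking at the same time. Frame and vblank durations are
# assumed typical values, not taken from any specific modeline.
awk 'BEGIN {
    frame  = 16.67   # ms per frame at ~60 Hz
    vblank = 0.45    # ms of vertical blanking per frame (typical)
    single = vblank / frame    # fraction of time one display is in vblank
    both   = single * single   # free-running displays: fractions multiply
    printf "one display in vblank:   %.1f%% of the time\n", 100 * single
    printf "both displays in vblank: %.3f%% of the time\n", 100 * both
}'
# one display in vblank:   2.7% of the time
# both displays in vblank: 0.073% of the time
```

With a reclocking window that small and drifting relative to the frame, waiting for a shared vblank across heads is impractical, which is why dynpm stays single-display.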
Comment 15 carsten 2011-05-14 11:13:07 UTC
Any progress?
Comment 16 Martin Peres 2019-11-19 07:30:27 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/driver/xf86-video-ati/issues/13.

Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. How we collect and use information is described in our Privacy Policy.