Bug 21489 (Low_colordepth_fail)

Summary: Color depths less than 24-bit result in useless malformed screen output
Product: xorg
Reporter: Joacim <joacim>
Component: Server/General
Assignee: Xorg Project Team <xorg-team>
Status: RESOLVED FIXED
QA Contact: Xorg Project Team <xorg-team>
Severity: normal    
Priority: medium    
Version: 7.4 (2008.09)   
Hardware: x86 (IA32)   
OS: Linux (All)   
Attachments:
- xorg.conf for 8-bit test
- xorg.conf for 16-bit test
- xorg.conf for 24-bit test
- Xorg.0.log for 8-bit test
- Xorg.0.log for 16-bit test
- Xorg.0.log for 24-bit test

Description Joacim 2009-04-30 04:13:39 UTC
Created attachment 25297 [details]
xorg.conf for 8-bit test 

I'm testing Fedora 11 Preview with the xorg-x11-drv-ati-6.12.2-4.fc11.rpm.

Hardware: Thinkpad T43 with ATI Radeon M22 (Mobility Radeon X300).

Running default Fedora installation with no xorg.conf and 24-bit color depth gives a very nice and fully working screen output from Xorg.

Test with different combinations of depth and xorg.conf:

24-bit & no xorg.conf (Fedora default) 
 => Fully working.
24-bit & my xorg.conf 
 => Fully working.
16-bit & my xorg.conf 
 => Greenish output (the color palette seems shifted), with four skewed copies of the Xorg root window; the lower part of the screen sometimes flickers and appears to show stale screen contents.
8-bit & my xorg.conf 
 => Black Xorg root window with only the mouse cursor visible. The desktop seems to be hiding there behind the blackness, since the mouse cursor changes appearance where the terminal should be (running twm).

The only difference between "my xorg.conf" in the three tests is the DefaultDepth value.
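For context, that switch amounts to a single line in the Screen section of xorg.conf. A minimal sketch (identifiers here are illustrative, not copied from the attached files):

```
Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    DefaultDepth 16        # 8 / 16 / 24 in the three tests
    SubSection "Display"
        Depth 16
    EndSubSection
EndSection
```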

My system worked well on Fedora 9 (don't remember the ati driver version).

My guess is that code changes to the ati driver, due to the change in acceleration methods and maybe VT switching, are the source of this bug. If so, is the code for handling color depths less than 24-bit also being worked on? I saw no mention of 8-bit or 16-bit color depth in the Changelog. If this kind of code prioritization has happened, please let me know.

I'll repeat these tests on a clean Fedora 11 Preview installation, without our software installed, to rule out the possibility of some obscure self-made environment catalyzing this bug. :-)
Comment 1 Joacim 2009-04-30 04:14:11 UTC
Created attachment 25298 [details]
xorg.conf for 16-bit test
Comment 2 Joacim 2009-04-30 04:14:37 UTC
Created attachment 25299 [details]
xorg.conf for 24-bit test
Comment 3 Joacim 2009-04-30 04:15:21 UTC
Created attachment 25300 [details]
Xorg.0.log for 8-bit test
Comment 4 Joacim 2009-04-30 04:15:37 UTC
Created attachment 25301 [details] [review]
Xorg.0.log for 16-bit test
Comment 5 Joacim 2009-04-30 04:15:55 UTC
Created attachment 25302 [details]
Xorg.0.log for 24-bit test
Comment 6 Alex Deucher 2009-04-30 07:20:25 UTC
I'm pretty sure this is an xserver bug, as the same ati driver works fine with xserver 1.5 at 16 bpp but does not with xserver 1.6.
Comment 7 Pauli 2009-09-21 13:48:55 UTC
To me it looks like scanout is reading 32-bit values while vram only holds 16-bit information.

No idea what would cause it.
Comment 8 Michel Dänzer 2009-09-21 14:33:19 UTC
If this only happens with kernel modesetting, it's probably a kernel bug which is fixed in the drm-next tree for 2.6.32. At least for 16bpp, 8bpp still isn't quite there (but isn't really interesting for X anyway).
Comment 9 Adam Jackson 2018-06-11 20:24:35 UTC
(In reply to Michel Dänzer from comment #8)
> If this only happens with kernel modesetting, it's probably a kernel bug
> which is fixed in the drm-next tree for 2.6.32. At least for 16bpp, 8bpp
> still isn't quite there (but isn't really interesting for X anyway).

2.6.32 was quite a long time ago, closing.
