The following is a patch that adds a feature to the xorg intel driver: 24 bit pixel format support.
The application here is using a display that expects TMDS data in 24 bit format; the Intel driver appears to only use the 18 bit format. This patch optionally enables the 24 bit pixel format. The change is trivial and is controlled by a new driver option (described below).
Attached to this report is a signed-off patch against the driver sources. I debugged this against a Debian-compiled variant of the driver (version 2:2.2.0-1), but I believe it should be close enough to cleanly apply against the upstream sources. Copied below are the patch comments:
xorg-video-intel: Implement support for 24 bit pixel format
From: Mike Isely <email@example.com>
The Intel driver appears to be coded to only work with displays
expecting 18 bit pixels. However, I have an application using an LCD
display that expects pixel data in 24 bit format. The difference is
only 2 bits in a single GPU register. This patch implements that
change, controlled by a new driver option, "LVDS24Bit". The default
value is false, which preserves the previous behavior. When set to
true, 24 bit panels should work (at least the one I'm testing here
does). A couple of notes:
1. I noticed a comment in the source (right where I made the change)
suggesting the desire to support 24 bit pixels, but expressing some
concern about understanding how to do this correctly. I can attest to
this patch working for my situation, and it's done in a
non-interfering manner (i.e. controlled via driver option) so it
should be safe to apply.
2. One other thought I had was that perhaps, rather than controlling
this with a driver option, we could simply notice whether the user is
asking for a 24 bit (or 32 bit) display depth. If the bit depth is
deep enough, then enable the 24 bit pixel format. However, that really
isn't right. The display depth setting is a user preference, not a
setting that must match some esoteric hardware attribute. I think
driving this setting magically from the specified bit depth is just
going to invite a lot of confusion going forward. Besides, there are
good reasons to use, for example, a 16 bit frame buffer while still
needing the 24 bit pixel format (such as performance). So the pixel
format really should be kept separate from the frame buffer depth.
3. It still seems that there should be a way to control this
automatically. I have no idea, however, what that would be. I'm just
a hack at this. But I do strongly believe that implementing at least
a manual control is better than no control at all. Thus this patch.
Signed-off-by: Mike Isely <firstname.lastname@example.org>
Please attach the patch.
Created attachment 15463
patch for described feature (see patch commit comment)
I thought I had it attached before. No idea how I dropped this. Twice. Anyway, here's the patch.
Created attachment 15768
Man page change to document the earlier-implemented LVDS24Bit option
(A corresponding man page patch for bug #15200 will be forthcoming.)
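For anyone trying this out, the option goes in the Device section of xorg.conf. The option name comes from the patch; the Identifier value here is just an example:

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "LVDS24Bit" "true"
EndSection
```

Leaving the option out (or setting it to "false") keeps the existing 18 bit behavior.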