Bug 75629 - [r128] Changes needed for RandR support
Summary: [r128] Changes needed for RandR support
Status: RESOLVED FIXED
Alias: None
Product: xorg
Classification: Unclassified
Component: Driver/rage128
Version: git
Hardware: All All
Importance: medium enhancement
Assignee: Connor Behan
QA Contact: Xorg Project Team
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-02-28 22:40 UTC by Connor Behan
Modified: 2014-07-09 18:50 UTC
CC List: 1 user

See Also:
i915 platform:
i915 features:


Attachments
Unfinished patch (53.41 KB, patch)
2014-02-28 22:53 UTC, Connor Behan
Untested patch (85.29 KB, patch)
2014-04-09 07:24 UTC, Connor Behan
Somewhat tested patch (89.65 KB, patch)
2014-04-15 19:02 UTC, Connor Behan
Single head patch (96.70 KB, patch)
2014-04-30 07:19 UTC, Connor Behan
Dual head patch (101.57 KB, patch)
2014-05-05 06:06 UTC, Connor Behan
Failed attempt at DDC (6.07 KB, patch)
2014-05-14 04:14 UTC, Connor Behan
Log with crash (12.59 KB, text/plain)
2014-05-17 00:28 UTC, Connor Behan
Updated patch (102.49 KB, patch)
2014-05-24 08:51 UTC, Connor Behan
Patch with detection (102.51 KB, patch)
2014-05-25 19:54 UTC, Connor Behan
Patch with better detection (102.99 KB, patch)
2014-05-28 08:58 UTC, Connor Behan
Patch 05-28 (103.08 KB, patch)
2014-05-28 23:50 UTC, Connor Behan

Description Connor Behan 2014-02-28 22:40:09 UTC
For the past week, I have been trying to add RandR support to r128. The main challenge for this is changing the UMS code to use the Outputs and Crtcs introduced with xorg-server-1.3. Moving to a well tested API would surely fix some open bugs for r128.

My main reference has been http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/commit/?id=3cfe94d5438961b869766dacbcd13fde8b770ca2 and Alex's r128-support branch.
Comment 1 Connor Behan 2014-02-28 22:53:16 UTC
Created attachment 94923 [details] [review]
Unfinished patch

This is what I have so far. It won't compile because I haven't bothered to edit Makefiles or headers yet. More importantly, I have not implemented R128SetupConnectors() or r128_crtc_mode_set(). This will take more work because radeon has gone through a lot of changes with PortInfo and BIOSConnector and so on.

There may be problems already in places where I have written XXX. For example, monitors can only be detected with DDC + EDID. This is better than the old code which just guessed but it would be nice to add support for non-DDC detection like what radeon has.
Comment 2 Connor Behan 2014-04-09 07:24:21 UTC
Created attachment 97110 [details] [review]
Untested patch

The changes now compile, but testing will take some time.
Comment 3 Connor Behan 2014-04-15 19:02:23 UTC
Created attachment 97421 [details] [review]
Somewhat tested patch

There are a few changes I had to make to start the server.

1. I moved xf86SetDesiredModes() closer to the end of the ScreenInit function.

2. I hardcoded MT_LCD as the monitor type if the chip does not have DFP. This is because any attempt to read the DDCReg caused a crash. I am guessing that this register only works for cards that have DVI ports? The registers that radeon has for using I2C buses on other types of connectors don't seem to exist in r128.

3. For reasons that I will figure out later, the driver also tried to do an EDID / DDC2 detection for the VGA connector so I temporarily set BiosConnector[1].valid = FALSE.

4. Several funcs that radeon didn't have in 2007 have since become mandatory. I added prepare, commit on the output side and prepare, commit, shadow_allocate, shadow_create, shadow_destroy on the crtc side.

After X has started, I notice the following issues.

1. I have to set SWcursor in xorg.conf because I have not yet added the cursor crtc funcs.

2. Acceleration has to be enabled in xorg.conf or else X will crash. I'm sure this is easily fixed.

3. Despite acceleration being enabled, DRI fails to initialize due to lack of VRAM. I need to figure out how to make this API conserve as much memory as the old one.

4. There is an RMX issue giving me a big black bar at the bottom of the screen. The cursor cannot enter it and I cannot take screenshots of it; it is just there. This is how Xv used to look before bug 6493 was fixed. Speaking of that, there is a lot of dead space when I try Xv.

5. Also related to RMX, the right amount of stretching is not restored when switching back to a VT. If I want a stretched X server and an unstretched console, the console will be stretched if I start and stop X. This can be reset manually from the BIOS.
Comment 4 Connor Behan 2014-04-30 07:19:46 UTC
Created attachment 98214 [details] [review]
Single head patch

This patch implements the hardware cursor and fixes some of the problems in the last one. The server crash with no accel was a pre-existing problem which I have just fixed in git. DRI was running out of memory because the list of possible modes includes large resolutions when get_modes returns NULL. What I referred to as an RMX issue was actually a double scan issue.

I will work on the dualhead aspect before I go chasing down people's opinions, but I have two main questions so far.

1. Is there really no way to detect the monitor type for cards without DVI?

2. Rotating with xrandr doesn't work, even when I set canGrow to FALSE and start the server with EXA on and DRI off. The edge of the picture gets cut off and the cursor changes directions but most of what I see on the screen doesn't rotate. Do drivers need explicit support for this?
Comment 5 Connor Behan 2014-05-05 06:06:15 UTC
Created attachment 98445 [details] [review]
Dual head patch

The dualhead things I was trying all work with this patch. Sometimes in order to rotate, I need to switch modes and then switch back. I also suspect that single monitor users will see increased power consumption because the driver will think a second monitor is connected. However, both of these problems seem minor.

No one else has commented yet, so I will take discussion to the list.
Comment 6 Alex Deucher 2014-05-05 13:43:53 UTC
FWIW, I ported r128 to radeon at one point.  You might find some useful bits there:
http://cgit.freedesktop.org/~agd5f/xf86-video-ati/log/?h=r128-support
Comment 7 Alex Deucher 2014-05-05 13:51:48 UTC
As for monitor detection, you need to sort out how the ddc lines are wired up to the connectors.  IIRC, r128 only implemented ddc support for DVI, so for VGA, you'd need to figure out what ddc line (basically just a gpio pad) it uses and what bits are used for read and write.  It should be pretty similar to radeon.  If you can't figure it out from trial and error, you can try and snoop what the vbios is doing when you issue a vbe ddc command.
Comment 8 Connor Behan 2014-05-14 04:09:28 UTC
(In reply to comment #7)
> As for monitor detection, you need to sort out how the ddc lines are wired
> up to the connectors.  IIRC, r128 only implemented ddc support for DVI, so
> for VGA, you'd need to figure out what ddc line (basically just a gpio pad)
> it uses and what bits are used for read and write.  It should be pretty
> similar to radeon.  If you can't figure it out from trial and error, you can
> try and snoop what the vbios is doing when you issue a vbe ddc command.

If r128 cards use a gpio pad for VGA, doesn't that mean they do implement ddc?

I looked at which register is used by radeon. It appears to be RADEON_GPIO_VGA_DDC at offset 0x0060. And this is in a part of the code that specifically refers to r128: http://cgit.freedesktop.org/~agd5f/xf86-video-ati/tree/src/radeon_bios.c?h=r128-support#n347

Register 0x0060 is not in r128_reg.h, but if I add it by hand, I see the same behaviour that I saw when it was using 0x0068: X crashes as soon as the driver tries to read or write it. When you wrote the r128-support branch, do you remember if the card you tested was VGA or DVI? Also, if it makes a difference, VGA is the secondary port on my card.
Comment 9 Connor Behan 2014-05-14 04:14:43 UTC
Created attachment 99005 [details] [review]
Failed attempt at DDC

This is the patch I tried based on r128-support. The crash happens after the first OUTREG() macro that involves the DDCReg. If I remove that fancy stuff and just call xf86DoEDID_DDC2() right away, then the crash happens after the first INREG() macro in R128I2CPutBits().
Comment 10 Connor Behan 2014-05-16 01:46:10 UTC
I managed to get r128-support working with the current X server (RAC removal, pciTag removal, new vgahw API and new screen API). In this driver, the line http://cgit.freedesktop.org/~agd5f/xf86-video-ati/tree/src/radeon_output.c?h=r128-support#n272 succeeds. I have checked the bits and it is OUTREG(0x0060, INREG(0x0060) & ~0x3).

However, if I try this in the r128 driver (my failed attempt patch), the exact same call causes a crash. I need to figure out this part first: how can a call to OUTREG() crash the X server? I have checked and info->PciInfo, info->MMIOAddr and info->MMIOSize are the same in both cases.

Even if I figure this out, it will not be enough for a DDC detection. When I run r128-support, there is no crash, but xf86DoEDID_DDC2() tells me that the VGA monitor is not plugged in even when it is.
Comment 11 Alex Deucher 2014-05-16 03:17:44 UTC
The radeon vbios has a table where the driver can look up the pad registers and bits used for ddc on specific connectors.  R128 does not.  I think it used a hardcoded configuration for each connector.  R128 seems to have two gpio pads that are used for displays which are controlled by these registers:

#define R128_GPIO_MONID                   0x0068
#       define R128_GPIO_MONID_A_0        (1 <<  0)
#       define R128_GPIO_MONID_A_1        (1 <<  1)
#       define R128_GPIO_MONID_A_2        (1 <<  2)
#       define R128_GPIO_MONID_A_3        (1 <<  3)
#       define R128_GPIO_MONID_Y_0        (1 <<  8)
#       define R128_GPIO_MONID_Y_1        (1 <<  9)
#       define R128_GPIO_MONID_Y_2        (1 << 10)
#       define R128_GPIO_MONID_Y_3        (1 << 11)
#       define R128_GPIO_MONID_EN_0       (1 << 16)
#       define R128_GPIO_MONID_EN_1       (1 << 17)
#       define R128_GPIO_MONID_EN_2       (1 << 18)
#       define R128_GPIO_MONID_EN_3       (1 << 19)
#       define R128_GPIO_MONID_MASK_0     (1 << 24)
#       define R128_GPIO_MONID_MASK_1     (1 << 25)
#       define R128_GPIO_MONID_MASK_2     (1 << 26)
#       define R128_GPIO_MONID_MASK_3     (1 << 27)
#define R128_GPIO_MONIDB                  0x006c

The current r128 code sets up R128_GPIO_MONID as the register used.  See R128I2cInit().  The question is whether VGA uses R128_GPIO_MONID or R128_GPIO_MONIDB and whether it uses the same bits for read and write.  Since most R128 boards only had one connector, it's likely they all used the same ddc configuration.  Have you tried the current r128 i2c setup for vga rather than dvi?
Comment 12 Michel Dänzer 2014-05-16 03:19:23 UTC
(In reply to comment #10)
> OUTREG(0x0060, INREG(0x0060) & ~0x3).
> 
> However, if I try this in the r128 driver (my failed attempt patch), the
> exact same call causes a crash. I need to figure out this part first: how
> can a call to OUTREG() crash the X server?

How does it crash? Is it really the OUTREG() that crashes, not the INREG()?
Comment 13 Alex Deucher 2014-05-16 03:30:35 UTC
(In reply to comment #8)
> (In reply to comment #7)
> > As for monitor detection, you need to sort out how the ddc lines are wired
> > up to the connectors.  IIRC, r128 only implemented ddc support for DVI, so
> > for VGA, you'd need to figure out what ddc line (basically just a gpio pad)
> > it uses and what bits are used for read and write.  It should be pretty
> > similar to radeon.  If you can't figure it out from trial and error, you can
> > try and snoop what the vbios is doing when you issue a vbe ddc command.
> 
> If r128 cards use a gpio pad for VGA, doesn't that mean they do implement
> ddc?
> 
> I looket at which register is used by radeon. It appears to be
> RADEON_GPIO_VGA_DDC at offset 0x0060. And this is in a part of the code that
> specifically refers to r128:
> http://cgit.freedesktop.org/~agd5f/xf86-video-ati/tree/src/radeon_bios.
> c?h=r128-support#n347
> 
> Register 0x0060 is not in r128_reg.h, but if I add it by hand, I see the
> same behaviour that I saw when it was using 0x0068: X crashes as soon as the
> driver tries to read or write it. When you wrote the r128-support branch, do
> you remember if the card you tested was VGA or DVI? Also, if it makes a
> difference, VGA is the secondary port on my card.

I think my code is wrong.  You should probably use R128_GPIO_MONID (0x68) or R128_GPIO_MONIDB (0x6c) rather than RADEON_GPIO_DVI_DDC or RADEON_GPIO_VGA_DDC, and use the R128_GPIO_MONID_*_3 and R128_GPIO_MONID_*_0 bits rather than the bits used on radeon.  Basically just follow the logic in the existing R128I2c*() functions.
Comment 14 Connor Behan 2014-05-17 00:22:11 UTC
(In reply to comment #11)
> The current r128 code sets up R128_GPIO_MONID as the register used.  See
> R128I2cInit().  The question is whether VGA uses R128_GPIO_MONID or
> R128_GPIO_MONIDB and whether it uses the same bits for read and write. 
> Since most R128 boards only had one connector, it's likely they all used the
> same ddc configuration.  Have you tried the current r128 i2c setup for vga
> rather than dvi?

The current r128 driver (6.9.2) only uses the i2c bus if info->isDFP is TRUE. I tried removing this check so that it tries DDC on my non-DFP card and this causes a crash. I also tried setting info->DDCReg to R128_GPIO_MONIDB instead of R128_GPIO_MONID and the result is the same.
Comment 15 Connor Behan 2014-05-17 00:28:07 UTC
Created attachment 99185 [details]
Log with crash
Comment 16 Connor Behan 2014-05-17 00:29:30 UTC
(In reply to comment #12)
> (In reply to comment #10)
> > OUTREG(0x0060, INREG(0x0060) & ~0x3).
> > 
> > However, if I try this in the r128 driver (my failed attempt patch), the
> > exact same call causes a crash. I need to figure out this part first: how
> > can a call to OUTREG() crash the X server?
> 
> How does it crash? Is it really the OUTREG() that crashes, not the INREG()?

Yes, it crashes even for an OUTREG() call that doesn't have INREG() inside. I haven't tried a full debug yet but the log is attached.
Comment 17 Michel Dänzer 2014-05-17 02:28:59 UTC
(In reply to comment #15)
> (EE) Segmentation fault at address 0x68

It's trying to dereference a pointer to address 0x68. I guess R128MMIO is NULL?
Comment 18 Connor Behan 2014-05-23 09:45:26 UTC
(In reply to comment #17)
> (In reply to comment #15)
> > (EE) Segmentation fault at address 0x68
> 
> It's trying to dereference a pointer to address 0x68. I guess R128MMIO is
> NULL?

You're right *facepalm*. The MMIO (but not the FB) needs to be mapped when those functions are called. However, it was being unmapped too early (at the end of R128PreInitConfig instead of R128PreInit).

Now that this is fixed I can use the GPIO registers just as well as I could with xf86-video-ati. So it should be easier to detect monitors now.
Comment 19 Connor Behan 2014-05-24 08:51:59 UTC
Created attachment 99694 [details] [review]
Updated patch
Comment 20 Connor Behan 2014-05-24 09:02:23 UTC
(In reply to comment #13)
> I think my code is wrong.  You should probably use R128_GPIO_MONID (0x68) or
> R128_GPIO_MONIDB (0x6c) rather than RADEON_GPIO_DVI_DDC or
> RADEON_GPIO_VGA_DDC. and use the R128_GPIO_MONID_*_3 and R128_GPIO_MONID_*_0
> bits rather the bits used on radeon.  Basically just follow the logic in the
> existing R128I2c*() functions.

The most recent patch tries calling xf86DoEDID_DDC2() for all types of cards with the existing I2c logic. This no longer crashes but it still fails to detect the VGA monitor. xf86DoEDID_DDC2() returns NULL no matter what so I'll have to do some experimentation. I have three questions about snooping the vbe ddc commands.

1. By this, do you mean vbeDoEDID()?

2. This detects the monitor perfectly on my card, so why can't the driver just use this instead of implementing the I2c stuff? Is it because xf86DoEDID_DDC2() is platform independent but vbeDoEDID() is x86 only?

3. To snoop what the vbios is doing, would this consist of dumping the contents of suspicious registers before and after vbeDoEDID() to see what changed? Or should I try to dump the contents *during* that function (i.e. patching the vbe module or running radeonreg in a loop)?
Comment 21 Connor Behan 2014-05-25 19:54:26 UTC
Created attachment 99793 [details] [review]
Patch with detection

I could not find changes to hardware state before and after the vbe stuff but I found an easier way to detect monitors. That is a simple call to xf86I2CProbeAddress(). In other words, xf86DoEDID_DDC2() was getting far enough to tell that there was a monitor. It only returned NULL because it was not getting a valid EDID.

Detecting the valid modes with EDID would be nice, but I think this solves the most important problem. The driver now knows whether a monitor is plugged in and hence whether to drive Crtc2. If you agree, I will send this patch to the list.
Comment 22 Alex Deucher 2014-05-27 14:20:06 UTC
(In reply to comment #20)
> (In reply to comment #13)
> > I think my code is wrong.  You should probably use R128_GPIO_MONID (0x68) or
> > R128_GPIO_MONIDB (0x6c) rather than RADEON_GPIO_DVI_DDC or
> > RADEON_GPIO_VGA_DDC. and use the R128_GPIO_MONID_*_3 and R128_GPIO_MONID_*_0
> > bits rather the bits used on radeon.  Basically just follow the logic in the
> > existing R128I2c*() functions.
> 
> The most recent patch tries calling xf86DoEDID_DDC2() for all types of cards
> with the existing I2c logic. This no longer crashes but it still fails to
> detect the VGA monitor. xf86DoEDID_DDC2() returns NULL no matter what so
> I'll have to do some experimentation. I have three questions about snooping
> the vbe ddc commands.
> 
> 1. By this, do you mean vbeDoEDID()?

Matthew Garrett had a hacked-up version of vbetool that would trace mmio using libx86 while executing, so you could see what registers were being used.  You could probably hack vbeDoEDID() and the libx86 used by the xserver to do something similar.

> 
> 2. This detects the monitor perfectly on my card, so why can't the driver
> just use this instead of implementing the I2c stuff? Is it because
> xf86DoEDID_DDC2() is platform independent but vbeDoEDID() is x86 only?

vbeDoEDID() uses vbe which may be problematic on non-x86 platforms and it has no concept of multiple monitors so if you have a card with more than one display attached, it's not clear which EDID would be returned.  For R128, it may be fine since I think most (all?) dualhead capable cards were laptops so the other display was an LVDS panel and you could get the panel info from the vbios.

> 
> 3. To snoop what the vbios is doing, would this consist of dumping the
> contents of suspicious registers before and after vbeDoEDID() to see what
> changed? Or should I try to dump the contents *during* that function (i.e.
> patching the vbe module or running radeonreg in a loop)?

You just execute the vbe edid fetch function using vbetool and it would dump the mmio commands to file which you could then look through to see what was happening.
Comment 23 Alex Deucher 2014-05-27 14:23:12 UTC
(In reply to comment #21)
> Created attachment 99793 [details] [review] [review]
> Patch with detection
> 
> I could not find changes to hardware state before and after the vbe stuff
> but I found an easier way to detect monitors. That is a simple call to
> xf86I2CProbeAddress(). In other words, xf86DoEDID_DDC2() was getting far
> enough to tell that there was a monitor. It only returned NULL because it
> was not getting a valid EDID.

There wouldn't be any changes in the state before and after.  You'd need to trace the actual EDID fetch process to see which gpios are used for i2c.

> 
> Detecting the valid modes with EDID would be nice, but I think this solves
> the most important problem. The driver now knows whether a monitor is
> plugged in and hence whether to drive Crtc2. If you agree, I will send this
> patch to the list.

Does it actually change when you have a monitor vs. not?  If probing works, you are probably pretty close to getting full i2c working.
Comment 24 Connor Behan 2014-05-28 01:28:35 UTC
(In reply to comment #23)
> > Detecting the valid modes with EDID would be nice, but I think this solves
> > the most important problem. The driver now knows whether a monitor is
> > plugged in and hence whether to drive Crtc2. If you agree, I will send this
> > patch to the list.
> 
> Does it actually change when you have a monitor vs. not?  If probing works,
> you are probably pretty close to getting full i2c working.

Yes. And you are right, i2c was almost working. I got it working by trying all 16 combinations for the bits in a loop. The valid pair was (R128_GPIO_MONID_*_3, R128_GPIO_MONID_*_1) instead of (R128_GPIO_MONID_*_3, R128_GPIO_MONID_*_0).

Should I just change the *_0 bits to *_1 or should I put that inside a powerpc check? I think the old code might've been for Apple Sense.
Comment 25 Alex Deucher 2014-05-28 02:08:34 UTC
(In reply to comment #24)
> (In reply to comment #23)
> > > Detecting the valid modes with EDID would be nice, but I think this solves
> > > the most important problem. The driver now knows whether a monitor is
> > > plugged in and hence whether to drive Crtc2. If you agree, I will send this
> > > patch to the list.
> > 
> > Does it actually change when you have a monitor vs. not?  If probing works,
> > you are probably pretty close to getting full i2c working.
> 
> Yes. And you are right, i2c was almost working. I got it working by trying
> all 16 combinations for the bits in a loop. The valid pair was
> (R128_GPIO_MONID_*_3, R128_GPIO_MONID_*_1) instead of (R128_GPIO_MONID_*_3,
> R128_GPIO_MONID_*_0).

Was it the same register (R128_GPIO_MONID) as well?

> 
> Should I just change the *_0 bits to *_1 or should I put that inside a
> powerpc check? I think the old code might've been for Apple Sense.

I suspect the pin mapping is different for DVI vs. VGA.  Take the RADEONI2CBusRec from radeon and set the masks and registers differently for VGA vs. DVI.  Then you can use the appropriate registers/masks for each connector when you want to probe the monitor.
Comment 26 Alex Deucher 2014-05-28 02:22:46 UTC
(In reply to comment #25)
> 
> > 
> > Should I just change the *_0 bits to *_1 or should I put that inside a
> > powerpc check? I think the old code might've been for Apple Sense.
> 
> I suspect the pin mapping is different for DVI vs. VGA.  Take the
> RADEONI2CBusRec from radeon and set the masks and registers differently for
> VGA vs. DVI.  Then you can use the appropriate registers/masks for each
> connector when you want to probe the monitor.

See RADEONI2CGetBits() and RADEONI2CPutBits() in radeon for example.  Since it seems like r128 always uses the same registers for clk and data, you could probably skip the separate steps for clk and data and just combine the masks like the current r128 i2c code does.
Comment 27 Connor Behan 2014-05-28 08:58:11 UTC
Created attachment 100021 [details] [review]
Patch with better detection

This implements the suggestions above.
Comment 28 Alex Deucher 2014-05-28 16:38:46 UTC
+static R128MonitorType R128DisplayDDCConnected(xf86OutputPtr output)
+{
+    ScrnInfoPtr pScrn = output->scrn;
+    R128InfoPtr info = R128PTR(pScrn);
+    unsigned char *R128MMIO = info->MMIO;
+    R128OutputPrivatePtr r128_output = output->driver_private;
+
+    R128MonitorType MonType = MT_NONE;
+    xf86MonPtr *MonInfo = &output->MonInfo;
+    CARD32 mask1, mask2;
+
+    if (r128_output->type == OUTPUT_LVDS) {
+        return MT_LCD;
+    } else if (r128_output->type == OUTPUT_VGA) {
+        mask1 = R128_GPIO_MONID_MASK_1 | R128_GPIO_MONID_MASK_3;
+	mask2 = R128_GPIO_MONID_A_1    | R128_GPIO_MONID_A_3;
+    } else {
+        mask1 = R128_GPIO_MONID_MASK_0 | R128_GPIO_MONID_MASK_3;
+	mask2 = R128_GPIO_MONID_A_0    | R128_GPIO_MONID_A_3;
+    }
+
+    if (r128_output->pI2CBus) {
+	/* XXX: Radeon does something here to appease old monitors. */
+	OUTREG(info->DDCReg, INREG(info->DDCReg)  |  mask1);
+	OUTREG(info->DDCReg, INREG(info->DDCReg)  & ~mask2);

Maybe store the ddc reg for each output type in the R128I2CBus structure as well in case there are other configurations.  Probably not a big deal though.

+	*MonInfo = xf86DoEDID_DDC2(XF86_SCRN_ARG(pScrn), r128_output->pI2CBus);
+    } else {
+        xf86DrvMsg(pScrn->scrnIndex, X_WARNING, "DDC2/I2C is not properly initialized\n");
+        return MT_NONE;
+    }
+
+    if (*MonInfo) {
+        if ((*MonInfo)->rawData[0x14] & 0x80) {
+            if (INREG(R128_FP_GEN_CNTL) & R128_FP_TDMS_EN)
+                MonType = MT_DFP;
+            else
+                MonType = MT_LCD;
+	} else {
+            MonType = MT_CRT;
+        }
+    }

I think this part should be restructured.  Something like the following makes more sense:

if (*MonInfo) {
    if (r128_output->type == OUTPUT_VGA) {
        MonType = MT_CRT;
    } else {
        if ((*MonInfo)->rawData[0x14] & 0x80)
            MonType = MT_DFP;
        else
            MonType = MT_CRT;
    }
}

MT_LCD is only for LVDS so there's no chance of that showing up on a DVI or VGA connector.  Additionally, FP_GEN_CNTL.FP_TDMS_EN will only be set if the output was previously enabled (by the driver or vbios).  If not, that bit won't be set.  All you should need is the digital bit in the EDID which should always be set for digital DVI displays.  I don't know whether there are any boards with DVI-I connectors (analog and digital encoders wired to the same connector), but this should be fine either way.

+
+    return MonType;
+}
Comment 29 Connor Behan 2014-05-28 23:50:36 UTC
Created attachment 100064 [details] [review]
Patch 05-28

Done, thanks!
Comment 30 Connor Behan 2014-07-09 18:50:59 UTC
Fixed by xf86-video-r128 3ed5035074540785f820906529fcce3148e0b387 and follow-up commits.

