Currently the r300 driver (just one example) reports this vendor and renderer info:
OpenGL vendor string: DRI R300 Project
OpenGL renderer string: Mesa DRI R300 20090101 x86/MMX/SSE2 TCL DRI2
OpenGL version string: 1.4 Mesa 7.6-devel
Wine uses the vendor and renderer strings to make a rough guess at which card is in use, so it can report a PCI vendor and device ID to D3D apps. The renderer string is the same for all cards driven by the r300 driver, which makes it hard to tell a Radeon 9500 from a Radeon X1950.
We know that GL apps should not look at the strings to make decisions, but we have to do this for a few reasons:
1) A few broken D3D apps use the PCI IDs rather than the capability flags to select render paths (e.g. Command & Conquer: Generals)
2) User reports: users get confused if they have e.g. an X1600 in their system and their game claims it is running on a 9500. We often get false bug reports where people think this misdetection is what makes their app fail.
3) Some games don't make vital decisions based on the IDs, but enable/disable extra features. E.g. Age of Empires 3 enables very high shader support on GeForce 7 cards (even low-end ones), but disables it on GeForce 6 cards (even high-end ones). There's no technical reason to do that, but the game does it (I guess some ad deal with Nvidia)
4) We don't want to look at the lspci output, because it is not portable and we might report a card that does not match our features. Currently Mesa does not support GLSL on r300 cards, so Wine only offers 1.x shaders (that will change). So we can't report a Radeon 9500 at the moment, because the capabilities do not match that card. (We use the GL extensions to make a broad guess and the vendor string for a fine-grained selection.)
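The string sniffing described above can be sketched roughly as follows. This is an illustrative mock-up, not Wine's actual code: the table, the `guess_gpu` helper, and the PCI IDs in it are hypothetical stand-ins, and Wine's real tables are far larger and combined with extension-based capability checks.

```c
#include <string.h>

/* Hypothetical sketch of GL_RENDERER string sniffing. The table is
 * illustrative; entries are checked in order, so more specific
 * needles ("Mobility Radeon X1600") must come before generic ones. */
struct gpu_guess {
    const char *needle;    /* substring to look for in GL_RENDERER */
    unsigned short vendor; /* PCI vendor ID to report to the app   */
    unsigned short device; /* PCI device ID to report to the app   */
};

static const struct gpu_guess guesses[] = {
    { "Mobility Radeon X1600", 0x1002, 0x71c5 },
    { "Radeon X1600",          0x1002, 0x71c2 },
    { "Radeon 9500",           0x1002, 0x4144 },
};

/* Return the first table entry whose needle occurs in the renderer
 * string, or NULL if nothing matches. */
static const struct gpu_guess *guess_gpu(const char *renderer)
{
    size_t i;
    for (i = 0; i < sizeof(guesses) / sizeof(guesses[0]); i++)
        if (strstr(renderer, guesses[i].needle))
            return &guesses[i];
    return (const struct gpu_guess *)0;
}
```

Note that the r300 string quoted above ("Mesa DRI R300 ...") matches none of the per-card needles, which is exactly the problem this report is about.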
From the info Mesa currently provides we can find out that we have an ATI card (search for "R300"), but we cannot tell r300, r400 and r500 cards apart, or distinguish subtypes or mobility cards.
A renderer string like the one fglrx reports would be more helpful:
OpenGL renderer string: ATI Mobility Radeon X1600
I don't need exactly the same format; just something like "Mobility Radeon X1600" somewhere in the string on that card.
Already done for Gallium. Don't know if it'll get done for classic Mesa.
Up to the driver.
Created attachment 26318 [details] [review]
The simplest thing that could possibly work
Would the attached patch (against radeon-rewrite) be fine for you? It should provide all the information in a way which is least intrusive to the driver: It provides the CHIP_FAMILY (which is what the driver itself bases decisions on when it comes to choosing between different code paths) as well as the PCI device ID as a hex value.
Of course one could also pull in the PCI ID database, but somehow I don't fancy the idea of duplicating that database in yet another location.
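The approach of the attached patch can be sketched like this. Everything here is an illustrative reconstruction, not the actual radeon-rewrite code: the enum values, the name table, and the exact string layout are assumptions.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the driver's internal chip family enum
 * and its name table; the real driver has one entry per family. */
enum chip_family { CHIP_FAMILY_R300, CHIP_FAMILY_R420, CHIP_FAMILY_R520 };

static const char *const family_names[] = { "R300", "R420", "R520" };

/* Build a renderer string that exposes the chip family (what the
 * driver itself keys its code paths on) plus the raw PCI device ID
 * as a hex value, in the spirit of the attached patch. */
static void build_renderer_string(char *buf, size_t len,
                                  enum chip_family family,
                                  unsigned short device_id)
{
    snprintf(buf, len, "Mesa DRI %s (%04X) 20090101 x86/MMX/SSE2 TCL DRI2",
             family_names[family], device_id);
}
```

A consumer like Wine could then pick out either the family token or the hex ID without the driver having to carry a marketing-name database.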
That patch would work I guess.
However, we need the card names (i.e. "Radeon X1600") instead of the chip type. We could of course map chip -> name ourselves, although I think this card-specific knowledge is better placed in the driver than in WineD3D (which, in theory, shouldn't have to bother with graphics hardware at all in a world of well-behaved Windows apps). It's a minor issue for me though - I can live with either solution.
Okay, so I thought about it a bit more.
1) We cannot actually provide fglrx-style strings from within the 3D stack. We don't have fine-grained data about which board the GPU is mounted on, and unlike Intel (which only has about a dozen supported chipsets) or nVidia/nouveau (where there's actually a VBIOS string with the card's marketing name) there's no trivial way to craft such a string. So we have to report the chip family instead.
2) Unlike fglrx (and, I assume, nvidia,) we don't register support for GL extensions that we don't do in hardware. Classic Mesa still has some pitfalls (GL_SELECT is one such unfun case), but for the most part, if an extension is listed, it should be accelerated. So, for any driver that's !fglrx && !nvidia, you shouldn't have to use GL_RENDERER much; basing your detection on GL_VERSION and the extension list should be sufficient.
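Detection based on the extension list boils down to substring tests on the space-separated string returned by glGetString(GL_EXTENSIONS). The extension names below are real GL extensions, but the `has_extension` helper itself is just a hypothetical sketch of what such a check looks like:

```c
#include <string.h>

/* Return nonzero if `ext` appears as a whole word in the
 * space-separated extension string `list`. A plain strstr is not
 * enough, because it would also match prefixes (e.g. GL_ARB_shadow
 * inside GL_ARB_shadow_ambient). */
static int has_extension(const char *list, const char *ext)
{
    size_t len = strlen(ext);
    const char *p = list;

    while ((p = strstr(p, ext)) != 0) {
        if ((p == list || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len; /* partial match, keep scanning */
    }
    return 0;
}
```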
(In reply to comment #4)
> That patch would work I guess.
> However, we need the card names(ie, "radeon X1600") instead of the chip type.
Why do you need the chip names? The hw capabilities are based on family (r3xx, r4xx, r5xx) rather than the marketing names (9500, X1600, etc.).
> 1) We cannot actually provide fglrx-style strings from within the 3D stack.
Fair enough I guess. If I test for "X1600" or "R5xx" to find some roughly matching GPU type shouldn't matter too much to me.
> 2) Unlike fglrx (and, I assume, nvidia,) we don't register support for GL
> extensions that we don't do in hardware.
I guess that's OK for the scope of this bug as well. On fglrx we catch the only partially working texture_np2 that way. It can cause some other trouble though; I'll file a separate bug report for that.
(In reply to comment #6)
> Why do you need the chip names? The hw capabilities are based on family
> (r3xx,r4xx, r5xx) rather than the marketing names (9500, X1600, etc.).
To report them to the Windows app. Windows reports the marketing names, so that's what Windows apps expect, and that's what our users expect to see when their game tells them what card it has found.
But I think in terms of the driver this point is mostly moot, because (1) we currently do not return a proper string (only a PCI ID); the renderer string we report to apps is "Direct3D HAL", and (2) we already map GL info -> PCI ID, and would later map PCI ID -> string. Some games are confused by this reported "Direct3D HAL".
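The second stage of that mapping, PCI ID -> marketing name, could look roughly like this. The device IDs, names, and helper here are illustrative, not WineD3D's actual tables:

```c
#include <string.h>

/* Illustrative PCI device ID -> marketing name table: the part that
 * would replace the "Direct3D HAL" placeholder with something users
 * recognise. IDs and names here are hypothetical examples. */
struct pci_name {
    unsigned short device;
    const char *name;
};

static const struct pci_name names[] = {
    { 0x71c5, "ATI Mobility Radeon X1600" },
    { 0x4144, "ATI Radeon 9500" },
};

/* Return the marketing name for a device ID, falling back to the
 * old "Direct3D HAL" placeholder for unknown hardware. */
static const char *describe_device(unsigned short device)
{
    size_t i;
    for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
        if (names[i].device == device)
            return names[i].name;
    return "Direct3D HAL";
}
```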
The bottom line: I guess the chip family will work for me.
I have pushed the patch to radeon-rewrite. Considering that Mesa 7.5 should only receive bugfixes and radeon-rewrite will be merged soon, there seems to be agreement that this isn't going to be backported.
I will implement parsing those strings in Wine once I am back from WWDC.