Bug 21886 - Feature request: More detailed GL_RENDERER strings
Summary: Feature request: More detailed GL_RENDERER strings
Status: RESOLVED FIXED
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/DRI/r300
Version: unspecified
Hardware: Other
OS: All
Importance: medium normal
Assignee: Default DRI bug account
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-05-22 15:48 UTC by Stefan Dösinger
Modified: 2009-06-01 11:38 UTC
CC List: 0 users

See Also:
i915 platform:
i915 features:


Attachments
The simplest thing that could possibly work (3.98 KB, patch)
2009-05-31 07:38 UTC, Nicolai Hähnle

Description Stefan Dösinger 2009-05-22 15:48:45 UTC
Currently the r300 driver (just as an example) reports this vendor and renderer info:

OpenGL vendor string: DRI R300 Project
OpenGL renderer string: Mesa DRI R300 20090101 x86/MMX/SSE2 TCL DRI2
OpenGL version string: 1.4 Mesa 7.6-devel

Wine uses the vendor and renderer strings to make a rough guess at which card is in use, so it can report a PCI vendor and device ID to D3D apps. The renderer string is the same for all cards driven by the r300 driver, which makes it hard to tell a Radeon 9500 apart from a Radeon X1950.

We know that GL apps should not make decisions based on these strings, but we have to for a few reasons:

1) A few D3D apps are broken and use the PCI IDs rather than the capability flags to select render paths (e.g. Command & Conquer: Generals).

2) User reporting: Users get confused if they have, e.g., an X1600 in their system and their game pretends it is running on a 9500. We often get false bug reports where people think this misdetection is what causes their app to fail.

3) Some games don't make vital decisions based on the IDs, but use them to enable or disable extra features. E.g. Age of Empires 3 enables very high shader support on GeForce 7 cards (even low-end ones), but disables it on GeForce 6 cards (even high-end ones). There's no technical reason to do that, but the game does it anyway (I guess some ad deal with Nvidia).

4) We don't want to look at the lspci output because it is not portable, and we might report a card that does not match our features. Currently Mesa does not support GLSL on r300 cards, so Wine only offers 1.x shaders (that will change). So we can't report a Radeon 9500 at the moment, because the capabilities would not match that card. (We use the GL extensions to make a broad guess and the vendor string for a fine-grained selection; see the sketch at the end of this description.)

From the current info Mesa provides we can find out that we have an ATI card (search for "R300"), but we cannot tell apart r300, r400 and r500 cards, or their subtypes, or mobility variants.

A renderer string like the one fglrx reports would be more helpful:
OpenGL renderer string: ATI Mobility Radeon X1600

I don't need exactly the same format; just having something like "Mobility Radeon X1600" in the string on that card would do.
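
For illustration, here is a minimal sketch of the kind of substring matching this amounts to on our side. The table, helper name, and device IDs are placeholders, not the actual WineD3D code:

/* Minimal sketch of renderer-string matching; table contents and
 * device IDs are illustrative placeholders, not real WineD3D code. */
#include <string.h>
#include <GL/gl.h>

struct card_guess
{
    const char *match;  /* substring to look for in GL_RENDERER */
    unsigned vendor_id; /* PCI vendor ID to report (0x1002 = ATI) */
    unsigned device_id; /* PCI device ID to report (placeholder) */
};

static const struct card_guess card_table[] =
{
    {"Mobility Radeon X1600", 0x1002, 0x0000},
    {"Radeon 9500",           0x1002, 0x0000},
};

static const struct card_guess *guess_card(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    size_t i;

    for (i = 0; i < sizeof(card_table) / sizeof(card_table[0]); ++i)
        if (renderer && strstr(renderer, card_table[i].match))
            return &card_table[i];
    return NULL; /* fall back to an extension-based broad guess */
}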
Comment 1 Corbin Simpson 2009-05-22 23:12:46 UTC
Already done for Gallium. Don't know if it'll get done for classic Mesa.
Comment 2 Michel Dänzer 2009-05-23 04:00:40 UTC
Up to the driver.
Comment 3 Nicolai Hähnle 2009-05-31 07:38:02 UTC
Created attachment 26318 [details] [review]
The simplest thing that could possibly work

Would the attached patch (against radeon-rewrite) be fine for you? It should provide all the information in a way that is least intrusive to the driver: it provides the CHIP_FAMILY (which is what the driver itself bases its decisions on when choosing between different code paths) as well as the PCI device ID as a hex value.

Of course one could also pull in the PCI ID database, but somehow I don't fancy the idea of duplicating that database in yet another location.
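
Conceptually, the string is put together along these lines (a rough sketch only; the exact wording, format, and the "RV530" example value are assumptions, and the real format is whatever the patch emits):

/* Rough sketch of composing such a renderer string; the real format
 * is defined by the attached patch. */
#include <stdio.h>

static void build_renderer_string(char *buf, size_t size,
                                  const char *chip_family, /* e.g. "RV530" */
                                  unsigned pci_device_id)
{
    /* Include the chip family and the PCI device ID as hex. */
    snprintf(buf, size, "Mesa DRI %s (0x%04x)", chip_family, pci_device_id);
}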
Comment 4 Stefan Dösinger 2009-05-31 15:24:45 UTC
That patch would work I guess.

However, we need the card names (i.e. "Radeon X1600") instead of the chip type. We could of course map chip->name ourselves, although I think this card-specific knowledge is better placed in the driver than in WineD3D (which, in a world with well-behaved Windows apps, shouldn't have to bother about graphics hardware at all). It's a minor issue for me though; I can live with either solution.

Comment 5 Corbin Simpson 2009-05-31 16:49:38 UTC
Okay, so I thought about it a bit more.

1) We cannot actually provide fglrx-style strings from within the 3D stack. We don't have the fine-grained data about which card the GPU sits on, and unlike Intel (which only has about a dozen supported chipsets) or nVidia/nouveau (where there's actually a VBIOS string with the card's marketing name), there's no trivial way to craft it. So we have to report the chip family instead.

2) Unlike fglrx (and, I assume, nvidia), we don't advertise support for GL extensions that we don't do in hardware. Classic Mesa still has some pitfalls (GL_SELECT is one such unfun case), but for the most part, if an extension is listed, it should be accelerated. So, for any driver that's !fglrx && !nvidia, you shouldn't have to use GL_RENDERER much; basing your detection on GL_VERSION and the extension list should be sufficient.
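
A minimal sketch of that approach, using the classic GL 1.x query style (the helper and the GLSL check below are illustrations, not a prescription):

/* Sketch of capability-based detection: look at GL_VERSION and the
 * extension list instead of parsing GL_RENDERER. */
#include <string.h>
#include <GL/gl.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    /* Note: strstr can match a prefix of a longer extension name;
     * a robust version would match whole space-separated tokens. */
    return ext && strstr(ext, name) != NULL;
}

static int supports_glsl(void)
{
    const char *version = (const char *)glGetString(GL_VERSION);
    return (version && version[0] >= '2')
        || has_extension("GL_ARB_shading_language_100");
}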
Comment 6 Alex Deucher 2009-05-31 18:55:29 UTC
(In reply to comment #4)
> That patch would work I guess.
> 
> However, we need the card names(ie, "radeon X1600") instead of the chip type.

Why do you need the chip names?  The hw capabilities are based on family (r3xx, r4xx, r5xx) rather than the marketing names (9500, X1600, etc.).
Comment 7 Stefan Dösinger 2009-06-01 01:42:00 UTC
> 1) We cannot actually provide fglrx-style strings from within the 3D stack.
Fair enough, I guess. Whether I test for "X1600" or "R5xx" to find some roughly matching GPU type shouldn't matter too much to me.

> 2) Unlike fglrx (and, I assume, nvidia,) we don't register support for GL
> extensions that we don't do in hardware.
I guess that's OK for the scope of this bug as well. On fglrx we have to catch texture_np2, which only works partially. It can cause some other trouble though; I'll file a separate bug report for that.

(In reply to comment #6)
> Why do you need the chip names?  The hw capabilities are based on family
> (r3xx,r4xx, r5xx) rather than the marketing names (9500, X1600, etc.).
To report them to the Windows app. Windows reports the marketing names, so that's what Windows apps expect, and that's what our users expect to see when their game tells them which card it has found.

But I think in terms of the driver this point is mostly moot, because (1) we currently do not return a proper string anyway: the renderer string we report is "Direct3D HAL", and only the PCI ID carries real information; and (2) we map GL info -> PCI ID, and would later map PCI ID -> string. Some games are confused by this reported "Direct3D HAL".

The bottom line: I guess the chip family will work for me.
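
For the record, the two-step mapping described above would look roughly like this on our side (family substrings, device IDs, and marketing names are placeholders for illustration):

/* Sketch of the two-step mapping: driver-reported chip family ->
 * PCI device ID -> marketing name. All table entries are
 * illustrative placeholders. */
struct family_map
{
    const char *family;  /* substring the driver reports, e.g. "R300" */
    unsigned device_id;  /* PCI ID reported to the D3D app (placeholder) */
    const char *name;    /* marketing name shown to the user */
};

static const struct family_map family_table[] =
{
    {"R300", 0x0000, "Radeon 9500"},
    {"R580", 0x0000, "Radeon X1900"},
};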
Comment 8 Nicolai Hähnle 2009-06-01 10:41:11 UTC
I have pushed the patch to radeon-rewrite. Considering that Mesa 7.5 should only receive bugfixes and radeon-rewrite will be merged soon, there seems to be agreement that this isn't going to be backported.
Comment 9 Stefan Dösinger 2009-06-01 11:38:40 UTC
Cool, thanks!

I will implement parsing those strings in Wine once I am back from WWDC.

