Bug 30172

Summary: GL_EXT_framebuffer_blit function required
Product: Mesa
Reporter: Alex Buell <alex.buell>
Component: Mesa core
Assignee: mesa-dev
Status: RESOLVED NOTOURBUG
QA Contact:
Severity: normal    
Priority: medium    
Version: git   
Hardware: All   
OS: All   
Whiteboard:
i915 platform: i915 features:

Description Alex Buell 2010-09-13 13:38:31 UTC
Couldn't find Drivers/Gallium/Nouveau in the Component list, so I filed this here.

It would be nice to get GL_EXT_framebuffer_blit implemented for the Nouveau driver, as some programs I have tested won't work without this function.
Comment 1 Luca Barbieri 2010-09-13 20:53:58 UTC
EXT_framebuffer_blit is supported with any Gallium driver, by the state tracker itself.

Perhaps you mean the nouveau_vieux DRI driver?

If not, what are exactly the issues you are experiencing?
Comment 2 Alex Buell 2010-09-14 00:43:25 UTC
I am getting this:

$ scons
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
g++ -o glsl_toon glsl_toon.o build/Shader.o build/ShaderProgram.o build/Timer.o build/VertexBuffer.o build/Mesh.o build/Light.o build/Texture.o build/FrameTimer.o build/FrameBuffer.o -lSDL -lSDL_image -lGL -lGLU
build/FrameBuffer.o: In function `FrameBuffer::BlitTo(FrameBuffer*, unsigned int, unsigned int)':
FrameBuffer.cpp:(.text+0x1373): undefined reference to `glBlitFramebufferEXT'
collect2: ld returned 1 exit status
scons: *** [glsl_toon] Error 1
scons: building terminated because of errors.

Grepping the includes shows the prototype is definitely declared there, so it's probably missing from the library.
Comment 3 Alex Buell 2010-09-14 00:47:03 UTC
As far as I am aware I'm using the nouveau_dri.so driver with X. nouveau_vieux_dri.so is built but not used. 

(--) PCI:*(0:1:0:0) 10de:0324:1028:015f nVidia Corporation NV34M [GeForce FX Go5200 64M] rev 161, Mem @ 0xfc000000/16777216, 0xd0000000/268435456, BIOS @ 0x????????/131072

(II) Loading /usr/lib/xorg/modules/extensions/libdri.so
(II) Module dri: vendor="X.Org Foundation"
	compiled for 1.7.7, module version = 1.0.0
	ABI class: X.Org Server Extension, version 2.0
(II) Loading extension XFree86-DRI
(II) LoadModule: "dri2"
(II) Loading /usr/lib/xorg/modules/extensions/libdri2.so
(II) Module dri2: vendor="X.Org Foundation"
	compiled for 1.7.7, module version = 1.1.0
	ABI class: X.Org Server Extension, version 2.0
(II) Loading extension DRI2
(II) LoadModule: "nouveau"
(II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so
(II) Module nouveau: vendor="X.Org Foundation"
	compiled for 1.7.7, module version = 0.0.16
	Module class: X.Org Video Driver
	ABI class: X.Org Video Driver, version 6.0
(II) LoadModule: "dri"
(II) Reloading /usr/lib/xorg/modules/extensions/libdri.so
(II) NOUVEAU(0): Loaded DRI module
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 9, (OK)
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 9, (OK)
drmOpenByBusid: Searching for BusID pci:0000:01:00.0
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 9, (OK)
drmOpenByBusid: drmOpenMinor returns 9
drmOpenByBusid: drmGetBusid reports pci:0000:01:00.0
(II) [drm] DRM interface version 1.3
(II) [drm] DRM open master succeeded.
(--) NOUVEAU(0): Chipset: "NVIDIA NV34"

(II) NOUVEAU(0): Opened GPU channel 1
(II) NOUVEAU(0): [DRI2] Setup complete
(II) NOUVEAU(0): GART: 128MiB available
(II) NOUVEAU(0): GART: Allocated 16MiB as a scratch buffer
(II) EXA(0): Driver allocated offscreen pixmaps
(II) EXA(0): Driver registered support for the following operations:
(II)         Solid
(II)         Copy
(II)         Composite (RENDER acceleration)
(II)         UploadToScreen
(II)         DownloadFromScreen
(==) NOUVEAU(0): Backing store disabled
(==) NOUVEAU(0): Silken mouse enabled
(II) NOUVEAU(0): [XvMC] Associated with NV30 texture adapter.
(II) NOUVEAU(0): [XvMC] Extension initialized.
(**) NOUVEAU(0): DPMS enabled
(II) NOUVEAU(0): RandR 1.2 enabled, ignore the following RandR disabled message.
(WW) NOUVEAU(0): Option "DPI" is not used
(--) RandR disabled
(II) Initializing built-in extension Generic Event Extension
(II) Initializing built-in extension SHAPE
(II) Initializing built-in extension MIT-SHM
(II) Initializing built-in extension XInputExtension
(II) Initializing built-in extension XTEST
(II) Initializing built-in extension BIG-REQUESTS
(II) Initializing built-in extension SYNC
(II) Initializing built-in extension XKEYBOARD
(II) Initializing built-in extension XC-MISC
(II) Initializing built-in extension XINERAMA
(II) Initializing built-in extension XFIXES
(II) Initializing built-in extension RENDER
(II) Initializing built-in extension RANDR
(II) Initializing built-in extension COMPOSITE
(II) Initializing built-in extension DAMAGE
(II) AIGLX: enabled GLX_MESA_copy_sub_buffer
(II) AIGLX: enabled GLX_SGI_make_current_read
(II) AIGLX: enabled GLX_SGI_swap_control and GLX_MESA_swap_control
(II) AIGLX: GLX_EXT_texture_from_pixmap backed by buffer objects
(II) AIGLX: Loaded and initialized /usr/lib/dri/nouveau_dri.so
(II) GLX: Initialized DRI2 GL provider for screen 0
(II) NOUVEAU(0): NVEnterVT is called.
(II) NOUVEAU(0): Setting screen physical size to 423 x 317
resize called 1600 1200
Comment 4 Luca Barbieri 2010-09-14 01:19:23 UTC
Indeed Mesa only exports glBlitFramebuffer and not glBlitFramebufferEXT.

It seems this was done intentionally by adding static_dispatch="false" to its definition.

I'm not sure what the rules are for choosing whether to export an entry point as a function, and thus whether this was an appropriate choice or not.

nVidia seems to export it, so probably Mesa should too.
Comment 5 Marek Olšák 2010-09-14 01:35:29 UTC
I believe glXGetProcAddress is the right way to obtain the function pointer to glBlitFramebufferEXT, not static linking.
Comment 6 Luca Barbieri 2010-09-14 01:40:21 UTC
Actually, both ATI and nVidia export it, so perhaps we should too, so that applications written for those drivers work on Mesa.
Comment 7 Michel Dänzer 2010-09-14 04:07:26 UTC
Resolving per comment #5. The libGL ABI is defined at

http://www.opengl.org/registry/ABI/
Comment 8 Alex Buell 2010-09-14 07:14:20 UTC
But ATI & NVidia export it.
Comment 9 Alex Buell 2010-09-14 09:44:41 UTC
I just looked at the docs. The docs says th
Comment 10 Alex Buell 2010-09-14 09:48:51 UTC
glXGetProcAddress, according to the docs, shouldn't be used any more. They recommend using glXGetProcAddressARB instead. Just FYI.
Comment 11 Michel Dänzer 2010-09-14 10:13:08 UTC
(In reply to comment #10)
> glXGetProcAddress, according to the docs, shouldn't be used any more. They
> recommend using glXGetProcAddressARB instead. Just FYI.

The Mesa libGL supports both.
Comment 12 Ian Romanick 2010-09-16 01:02:41 UTC
(In reply to comment #6)
> Actually, both ATI and nVidia export it, so perhaps we should too, so that
> applications written for those drivers work on Mesa.

Any application that assumes *ANY* extension function (except those in GL_ARB_multitexture) is available for static linking is broken.  Period.  They're called *EXTENSIONS* for a reason.
Comment 13 Alex Buell 2010-09-16 06:22:48 UTC
I can't find any example source that obtains a function pointer to the required function, which makes it harder to fix the broken application code :S

For example:
 glBlitFramebufferEXT(0, 0, _width, _height, 0, 0, _width, _height, mask, filter);
Comment 14 Ian Romanick 2010-09-16 06:25:44 UTC
(In reply to comment #13)
> I can't find any example sources that obtains a function pointer to the desired
> function required, it makes it harder to fix the broken application code :S
> 
> For example:
>  glBlitFramebufferEXT(0, 0, _width, _height, 0, 0, _width, _height, mask,
> filter);

That's because everyone sensible uses GLEW or some similar library that hides all the gory details and provides portability across operating systems.
Comment 15 Alex Buell 2010-09-16 06:58:30 UTC
But there are those who want _all_ the GORY details, namely myself!
Comment 16 Luca Barbieri 2010-09-16 10:31:57 UTC
Perhaps a sensible resolution is to export those symbols only if a binary-only application is found that requires them and works on another GL implementation.

Although I'm personally for preventive action, following the "liberal" part of "be conservative in what you do, be liberal in what you accept from others".

And yes, that does encourage applications to be broken, but the fault for that lies in AMD and nVidia.

At a more fundamental level, it lies in whoever reinvented the wheel by adding glXGetProcAddress instead of just using ELF weak symbols (or dlsym if you somehow insist on querying symbols by string).
Comment 17 Brian Paul 2010-09-16 12:08:29 UTC
(In reply to comment #16)
> Perhaps a sensible resolution is to only export those symbols if a binary-only
> application is found requiring them, which works on another GL implementation.

I agree that if there's an existing application that fails out of the box w/ Mesa but not NVIDIA or AMD for this reason, we should then export the function too.  There's no sense in punishing end users like that.


> Although I'm personally for preventive action, following the "liberal" part of
> "be conservative in what you do, be liberal in what you accept from others".
> 
> And yes, that does encourage applications to be broken, but the fault for that
> lies in AMD and nVidia.
> 
> At a more fundamental level, it lies in whoever reinvented the wheel by adding
> glXGetProcAddress instead of just using ELF weak symbols (or dlsym if you
> somehow insist on querying symbols by string).

I was involved in developing glXGetProcAddress.  Back then, I'm not sure that dlsym() was widely available on all the flavors of Unix having OpenGL.  There was also the possibility of using X/GLX on non-Unix systems.
Comment 18 Tom Fogal 2010-09-16 18:31:37 UTC
bugzilla-daemon@freedesktop.org writes:
> --- Comment #17 from Brian Paul <brianp@vmware.com> 2010-09-16 12:08:29 PDT ---
> (In reply to comment #16)
> > Perhaps a sensible resolution is to only export those symbols if
> > a binary-only application is found requiring them, which works on
> > another GL implementation.
>
> I agree that if there's an existing application that fails out of the
> box w/ Mesa but not NVIDIA or AMD for this reason, we should then
> export the function too.  There's no sense in punishing end users
> like that.

FWIW, as an app developer, I'd vote that OGL 1.2, GLX 1.3, and
ARB_multitexture be the only statically exported functions.  It sucks
that some apps break, but they'll be fixed eventually.

Further, this is the only behavior that I know is guaranteed, and I
think Windows behaves as above anyway.  It gives me warm fuzzy feelings
when my nightly test builds prove something good -- i.e. in this case
that I'm accessing functions via function pointer, as designed, and not
relying on undefined behavior.

> > Although I'm personally for preventive action, following the
> > "liberal" part of "be conservative in what you do, be liberal in
> > what you accept from others".
> >
> > And yes, that does encourage applications to be broken, but the
> > fault for that lies in AMD and nVidia.
> >
> > At a more fundamental level, it lies in whoever reinvented the
> > wheel by adding glXGetProcAddress instead of just using ELF weak
> > symbols (or dlsym if you somehow insist on querying symbols by
> > string).

ELF weak symbols do not allow one to decide behavior (OpenGL library)
at runtime -- mangling aside.

> I was involved in developing glXGetProcAddress.  Back then, I'm not
> sure that dlsym() was widely available on all the flavors of Unix
> having OpenGL.  There was also the possibility of using X/GLX on
> non-Unix systems.

We *still* have to support systems that do not have dlsym  =(
Comment 19 Luca Barbieri 2010-09-16 21:18:31 UTC
> ELF weak symbols do not allow one to decide behavior (OpenGL library)
> at runtime -- mangling aside.

AFAIK, glXGetProcAddress also doesn't, because it can be called without a context bound, and thus libGL has no more information than it had at compile time.

See http://dri.freedesktop.org/wiki/glXGetProcAddressNeverReturnsNULL

Note, BTW, that wglGetProcAddress returns context-dependent pointers, so it has some reason for existing (even though it's still an unreasonable burden for the programmer to have to use it).

> We *still* have to support systems that do not have dlsym  =(

I'm curious, how can such a system exist?
Surely a system with dynamic linking must have something like dlsym (perhaps with another name, like GetProcAddress in Windows).

And if dynamic linking is not supported, then you'll be limited to a single GL implementation and thus you don't need dlsym since you know exactly what is available (and it's better to fail at build time in this case).
Comment 20 Luca Barbieri 2010-09-16 21:46:49 UTC
FWIW, here is how to design an extension system which is actually usable.

Add a single function, which is the only function exported from libGL:
void* glGetFunctionTable(const char* vendor)

This function returns a pointer to the unsigned integer 0 if no extension from <vendor> is supported.
Otherwise, it returns a pointer to the vendor-specific (or ARB if "ARB", or core OpenGL if "GL") function table, and the unsigned integer it points to will be greater than 0.

This function table is context-dependent unless <vendor> starts with "^" (this would be used for GLX, for instance).
Also, the function pointers in the structure may change on any call to a GL function.

Now each vendor simply publishes a header with a C struct.

The struct starts with an unsigned integer denoting the size, and contains function pointers for all their extensions, plus booleans to tell whether the extension is supported as a whole.

New extensions are added at the end, and the reported size is increased accordingly.

Usage is as easy as possible, unlike the current GL method:

GLFunctionTableNV* nv = glGetFunctionTable("NV");

if ((nv->size > offsetof(GLFunctionTableNV, NV_fence_supported)) && nv->NV_fence_supported)
{
  nv->GenFences(...)
}

A macro could further reduce the check to
if(GL_SUPPORTED(nv, NV_fence))

Also, performance is optimal, unlike the current GL method.

Ideally, functions should take a pointer to the GL context, so that it doesn't need to be fetched from a thread-local variable.

A variant could be to have glGetFunctionTable only provide the context-independent tables, and having a GetContextFunctionTable function in the "GL" context-independent table.
Comment 21 Tom Fogal 2010-09-16 22:00:10 UTC
bugzilla-daemon@freedesktop.org writes:
> --- Comment #19 from Luca Barbieri <luca.barbieri@gmail.com> 2010-09-16 21:18
> :31 PDT ---
> > ELF weak symbols do not allow one to decide behavior (OpenGL
> > library) at runtime -- mangling aside.
>
> AFAIK, glXGetProcAddress also doesn't, because you can call it
> without a context bound, and thus libGL has no more information than
> the one it has at compilation time.

The issue with ELF weak symbols is that there can only be one of them.
Or rather, if one is normal and one is weak, you get the normal one
and the weak one is forgotten, but in at least my case you cannot know
until runtime which you need.

Though I forgot that this is still an issue w/ glXGetGPA, see below.

> > We *still* have to support systems that do not have dlsym =(
>
> I'm curious, how can such a system exist?  Surely a system with
> dynamic linking must have something like dlsym (perhaps with another
> name, like GetProcAddress in Windows).

These systems don't have dynamic linking.

The architecture is getting popular on very large scale supercomputers,
particularly those being sold by IBM.  There are rumors that there's
some technical reason why a 'real' operating system doesn't work on
so many nodes.  Personally I'd rather just drop support for such
systems...

Anyway what you get is a 'real' head node, and then on all the compute
nodes you basically just have a loader.

> And if dynamic linking is not supported, then you'll be limited to a
> single GL implementation and thus you don't need dlsym since you know
> exactly what is available

Technically, one could still use two because of Mesa's support for name
mangling. e.g.:

  typedef void*(*GPA)(const GLubyte*);
  static GPA __glewXGetProcAddress = NULL;
  if(mesa) {
    __glewXGetProcAddress = mglXGetProcAddress;
  } else {
    __glewXGetProcAddress = glXGetProcAddress;
  }

  __glewGetString = __glewXGetProcAddress("glGetString"); /* ish */

> (and it's better to fail at build time in this case).

yes, that's back to my original point w/ exporting the bare minimum: I
*want* to know that there's no possibility of using a function which
might not be exported -- at build time.  The alternative is exhaustive
testing on all platforms I support, which is technically possible --
the best kind of possible -- but also quite challenging.
Comment 22 Luca Barbieri 2010-09-16 22:46:17 UTC
> The issue with ELF weak symbols is that there can only be one of them.
> Or rather, if one is normal and one is weak, you get the normal one
> and the weak one is forgotten, but in at least my case you cannot know
> until runtime which you need.

Not sure what you mean.
What I'm proposing (mostly theoretically) is to add assembly/compiler directives to the headers that cause application references to be weak.
If a symbol reference is weak, then it gets set to NULL if it cannot be resolved to a symbol, rather than causing a fatal error.

Thus, you could then liberally reference functions directly, as long as you check that the extension is supported before actually calling them.

> yes, that's back to my original point w/ exporting the bare minimum: I
> *want* to know that there's no possibility of using a function which
> might not be exported -- at build time.

This is solvable by exporting the functions from libGL.so.1, but not from libGL.so, since "ld" reads the latter, while "ld.so" reads the former.

To do this, Mesa would need to link libGL twice with two different linker version scripts, and ship both, instead of making libGL.so a symlink to libGL.so.1.

It's also probably possible to strip the section bodies from such a libGL.so to significantly reduce the size.

A developer who instead prefers to have an increased chance of compiling random example code could just remove libGL.so and run ldconfig.
