Created attachment 80279 [details]
valgrind massif trace of memory usage of a very simple test program

So of the 50m or so of heap I have... mesa seems to use about 40m or so that I really can't justify/see a good reason for. Textures used are fairly minimal, so it's not all textures. It looks like it's mostly the GLSL compiler, contexts and vectors (vertex etc. buffers)... really, only about 800kb or so that I can see is textures.

http://www.enlightenment.org/ss/e-51adc362cf4043.11569639.png

Full valgrind massif trace attached. This REALLY bloats out the memory footprint of both GL client apps and compositors to a totally insane extent. So first:

1. Mesa doesn't seem to support releasing the GLSL compiler. The extension exists, and evas calls glReleaseShaderCompiler() if it is there, but it seemingly is not supported. If it were supported, we could nuke 25m of memory. If glGetProgramBinary() were also supported, at least the shader binary cache evas has would kick in and avoid bringing up the GLSL compiler at all - it would just dump some binaries from disk into GL. So this would be another way to improve things memory-wise and startup-time-wise.

2. Contexts seem to eat up about 16m of RAM each. EACH. Create 2 or 3 and it's a new 16m per context. This is probably 15-15.5m more than I'd expect/want. If a context needs more memory... then allocate when/if needed, and when no longer needed, release it.

Sorry if the mesa version is old. This isn't really a SPECIFIC bug, but more of a general "mesa is fat and needs to go on a serious diet" bug. :)
Created attachment 80351 [details]
newer mesa 9.1.2 memory dump

Still uses a vast amount of memory even with 9.1.2.
And on a newer mesa 9.1.2 - it also still uses huge amounts of memory. http://www.enlightenment.org/ss/e-51af4c975faac9.57775536.png
With the latest Mesa, OpenGL ES 2.0/3.0 won't create a swrast context, eliminating the large red block in your graph. I measured 18MB of savings in one program. OpenGL 3.1 (the non-legacy version) also skips this, but it's unfortunately still necessary for legacy desktop GL.
I'm scratching my head, wondering why it would be required to allocate this blob for desktop GL. Is it for software fallbacks of some kind not supported by mesa's hw accel infrastructure? Why would it need it? I'm really curious. If it needs memory for sw fallbacks, can't it allocate on demand (i.e. when an operation actually requires such memory)? 18m is a vast chunk of memory! If it were a few 10's of kb... sure, I can see why no one bothered, but this order of magnitude of memory usage should be allocated when needed and then freed again when not needed...? If I knew what triggers the need, then I'd know what to avoid in the API so the allocation never happens.
The swrast context is a kitchen sink for all of the fixed-function "stuff." This is both software fallbacks and fixed-function TNL. The problem with on-demand allocation is that I don't think we'd know that we need to allocate it until draw-time... which would cause a rendering hiccup. Almost every application that's using a classic GL context will hit one of these paths at some point. :( I'm not saying we can't do anything. I'm just saying it will be a lot of work, and it's really low priority for us.
Well, if the hiccup happens only for the first frame that needs it... then does it much matter?

Is there anything I can do to convince mesa not to eat up all this memory? My usage is entirely GLSL - no fixed-function usage at all. In fact the codebase is explicitly designed around using the glesv2 "subset" of desktop GL (yes, I know it's not a proper subset - there are a few exceptions in dark corners), as it just switches between glx/gl and glesv2 depending on build mode. I know you can say "then just use glesv2" - the problem is that 99% of the time this simply isn't even there, though it is an option to build mesa that way.

What I'd love, nay need, is some GL context incantation I can say that means "away with ye demons of evil memory allocation. I need ye not!" (and then some crazy hand waving, flashes of light and puffs of smoke). :)
(In reply to comment #6) > well if the hiccup happens only for the first frame that needs it... then > does it much matter? Perhaps not. We do get complaints about that sort of thing, but it's usually more the state-based shader recompile hiccups. We just don't have anyone to do the work. > is there anything i can do to convince mesa not to eat up all this memory? > my usage is entirely glsl - no fixed function usage at all. in fact the > codebase is explicitly designed around using the glesv2 "subset" of desktop > gl (yes i know its not a proper subset - there are a few exceptions in dark > corners) as it actually just switches between glx/gl and glesv2 depending on > build mode. i know you can say "then just use glesv2" - problem is that 99% > of the time this simply isn't even there, though it is an option to build > mesa that way. > > what i'd love, nay need, is some gl context incantation i can say that says > "away with ye demons of evil memory allocation. i need ye not!" (and then > some crazy hand waving, flashes of light and puffs of smoke). :) There is. Ask for a 3.1 context or a 3.0 "forward compatible" context. That will give you what you want on all the drivers that support at least OpenGL 3.0. That same mechanism will also work on closed-source drivers, so you won't have any Mesa specific code to worry about maintaining.
*** This bug has been marked as a duplicate of bug 56090 ***