Just tried the latest Mesa from git (commit 97217a40f97cdeae0304798b607f704deb0c3558) on an Asus Eee PC 1000H with Intel gen3 graphics. Indeed, I now see OpenGL 2.1 and shading language 1.20 advertised. Unfortunately, this completely breaks Ubuntu's Unity desktop.
I believe that the issue is that the window manager thinks that it can rely on the hardware for some operations (effects). However, these are so painfully slow that the machine appears to hang. For instance, the log out operation is meant to show a log-out panel. When the graphics hardware supports it, the panel fades in. However, this operation now takes about 1.5 minutes and the machine seems to hang. Trying to show the "search your computer" panel is so slow that I had to reboot the machine.
How about reverting this change until desktop environments learn that they should not play effects on gen3, even if it advertises these capabilities? Or at least, how about making the advertised OpenGL version configurable via xorg.conf? Like Unity, other desktop environments may be bitten by this issue, as may other software.
I did not realize that the advertised OpenGL version is /already/ configurable via the MESA_GL_VERSION_OVERRIDE environment variable.
Setting it to 1.4 in /etc/environment is sufficient to restore Unity desktop functionality on the Eee PC (and, I guess, on other machines with pre-gen4 hardware).
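For reference, a minimal sketch of the workaround described above (the variable name is from Mesa; the exact file placement is the Ubuntu convention mentioned earlier):

```shell
# Workaround sketch: cap the OpenGL version Mesa advertises at 1.4 so
# compositors fall back to their pre-GL2 codepaths on gen3 hardware.
# System-wide on Ubuntu: add the line below (without "export") to
# /etc/environment and log in again.
export MESA_GL_VERSION_OVERRIDE=1.4
echo "$MESA_GL_VERSION_OVERRIDE"
```

With the override in place, `glxinfo | grep "OpenGL version"` (from mesa-utils) should report 1.4 instead of 2.1.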
This is good news, because the environment setting is a sufficient workaround to make the latest Mesa work with gen3 hardware in today's desktop environments.
However, I wonder: why is the advertised OpenGL 2.1 functionality so unbearably slow on gen3? Is Mesa falling back to rendering on the CPU for it? It seems so, as CPU load goes extremely high when Unity tries to open the logout panel. Is that a sensible default, given that machines with gen3 graphics cannot be expected to have strong CPUs?
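One way to check whether the CPU-rendering suspicion above is right is to look at Mesa's renderer string; on a live system that would be `glxinfo | grep -E "OpenGL renderer|direct rendering"` (glxinfo is in mesa-utils). A sketch using a hypothetical sample line for illustration:

```shell
# Hypothetical glxinfo output line used only for illustration; on a real
# system, pipe the actual glxinfo output through the same grep instead.
sample_output='OpenGL renderer string: Software Rasterizer'
# A "Software Rasterizer" or "llvmpipe" renderer string means Mesa is
# rendering on the CPU rather than on the gen3 GPU.
echo "$sample_output" | grep -q -E "Software Rasterizer|llvmpipe" && echo "software rendering"
```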
Ubuntu has Mesa 9.2 now, and it is indeed utterly broken for gen3:
I got word from a Unity developer about what is happening with the OpenGL 2.x path:
"Yes, the non-working codepath invokes a much more heavyweight Gaussian blur convolution function. The working codepath also downscales the image before blurring.
An example of the shader code for the broken pass is as follows.
"varying vec4 v_tex_coord;
uniform sampler2D tex_object;
uniform vec2 tex_size;
#define NUM_SAMPLES %d
uniform float weights[NUM_SAMPLES];
uniform float offsets[NUM_SAMPLES];

void main()
{
    vec3 acc = texture2D(tex_object, v_tex_coord.st).rgb * weights[0];
    for (int i = 1; i < NUM_SAMPLES; i++)
    {
        acc += texture2D(tex_object, (v_tex_coord.st + (vec2(0.0, offsets[i]) / tex_size))).rgb * weights[i];
        acc += texture2D(tex_object, (v_tex_coord.st - (vec2(0.0, offsets[i]) / tex_size))).rgb * weights[i];
    }
    gl_FragColor = vec4(acc, 1.0);
}
The NUM_SAMPLES in this case is 10."
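To put the quoted shader's cost in perspective: with NUM_SAMPLES = 10 and no downscaling, each fragment performs one centre tap plus two taps per remaining sample, per pass. A back-of-the-envelope calculation (my own sketch, not from the developer; 1024x600 is the Eee PC 1000H panel resolution):

```python
# Texture fetches per fragment for the Gaussian blur pass quoted above:
# one centre tap, plus a +offset and a -offset tap per remaining sample.
def fetches_per_fragment(num_samples: int) -> int:
    return 1 + 2 * (num_samples - 1)

taps = fetches_per_fragment(10)
total = taps * 1024 * 600   # one full-resolution pass over a 1024x600 screen
print(taps)   # 19 fetches per fragment
print(total)  # 11673600 fetches per pass
```

Nineteen dependent texture reads per fragment is well beyond what gen3 fixed-function-era hardware handles comfortably, which is consistent with the fallback-to-CPU behaviour observed earlier in this report.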
Can you try the i915 gallium driver? Is it better in this regard?
Fixing the math didn't fix the issue; it's still slow.
We don't build i915g, so no idea.