Created attachment 120782 [details]
The result on ARM; the bad points are visible

I've built a llvmpipe-enabled Mesa on my Allwinner A33 (quad Cortex-A7) device. I tried to run glmark2 on it, but the image quality is very, very low. On my laptop (i5-3230m), when I force LIBGL_ALWAYS_SOFTWARE, this problem does not occur.
The attachment is a JPEG, shot with my mobile phone.
Here's some log:

icenowy [ ~ ] ! glmark2 -b build
libGL error: unable to load driver: mali_drm_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: mali_drm
** GLX does not support GLX_EXT_swap_control or GLX_MESA_swap_control!
** Failed to set swap interval. Results may be bounded above by refresh rate.
=======================================================
    glmark2 2014.03
=======================================================
    OpenGL Information
    GL_VENDOR:     VMware, Inc.
    GL_RENDERER:   Gallium 0.4 on llvmpipe (LLVM 3.7, 128 bits)
    GL_VERSION:    3.0 Mesa 11.1.0
=======================================================
** GLX does not support GLX_EXT_swap_control or GLX_MESA_swap_control!
** Failed to set swap interval. Results may be bounded above by refresh rate.
[build] <default>: FPS: 13 FrameTime: 76.923 ms
=======================================================
                                  glmark2 Score: 13
=======================================================
A fairer comparison would be LIBGL_ALWAYS_SOFTWARE=1 LP_NATIVE_VECTOR_WIDTH=128, since I assume llvmpipe would otherwise use 256-bit wide vectors on your Intel CPU. But it appears that the rendering is actually incorrect there. My personal guess is that LLVM on ARM has some issues, but this is based purely on the fact that you're seeing misrendering :) It may be worthwhile to build mesa-git and llvm-svn and see if the issue persists. (mesa-git should be able to build against LLVM head, while released Mesa probably won't.)
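Concretely, something like this on the laptop (a sketch; LP_NATIVE_VECTOR_WIDTH is an llvmpipe tuning variable that caps the vector width it uses, and -b build mirrors the run from the log above):

  # Force llvmpipe and cap it to 128-bit vectors to match the ARM run
  LIBGL_ALWAYS_SOFTWARE=1 LP_NATIVE_VECTOR_WIDTH=128 glmark2 -b build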
Of course I am not benchmarking. But the image really *does* misrender. Building an LLVM and Mesa combination will take me days, and may fail (as my device has only 512MB RAM).
This looks rather interesting, like a shuffle gone wrong (it always affects the same 3 pixels in each 4x4 pixel stamp). This chip does have NEON instructions, right? I think LLVM used to have quite some problems if it needed to lower all the vector code to scalars (not to mention the horrific performance). Theoretically llvmpipe should work pretty well on ARM (albeit with no ARM-specific optimizations, so slower than it could be), but there were spurious reports of it not working well earlier (not many people try, it seems). It could also well be an LLVM bug; I'd suggest a newer version if you're not already using the latest.
My CPU (Cortex-A7) *does* have NEON. All A7 cores have NEON. I used LLVM 3.7, and I'm now building llvm-svn (from https://github.com/llvm-mirror/llvm). Note: you can run glmark2 yourself and compare the image to the one that I uploaded.
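For reference, a rough sketch of the build steps (paths and flags are examples only, not the exact commands used here; Mesa 11.x used autotools, and the gallium swrast driver is what provides llvmpipe):

  # LLVM: build only the ARM target to save time and memory
  cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD=ARM \
        -DCMAKE_INSTALL_PREFIX=/usr/local ../llvm
  make && make install

  # Mesa: enable the gallium llvmpipe (swrast) driver against that LLVM
  ./autogen.sh --enable-gallium-llvm --with-gallium-drivers=swrast \
               --with-llvm-prefix=/usr/local
  make && make install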
In addition, my DDX driver is https://github.com/ssvb/xf86-video-fbturbo, which uses NEON to accelerate operations such as bitblt.
It seems that the answer to the question above is "no". I changed to the plain fbdev DDX, and the problem is still there. Is there any way to dump the binary code generated by LLVM?
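One possibility (a sketch, assuming Mesa was built with debugging enabled; the exact set of GALLIVM_DEBUG flags depends on the Mesa version):

  # Dump the disassembly of the JIT-compiled shaders
  GALLIVM_DEBUG=asm LIBGL_ALWAYS_SOFTWARE=1 glmark2 -b build

  # Or dump the LLVM IR before it is compiled
  GALLIVM_DEBUG=ir LIBGL_ALWAYS_SOFTWARE=1 glmark2 -b build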
I'm sorry, but my device is not capable of building an SVN version of LLVM. (The original LLVM 3.7 was built on a buildbot, as was Mesa 11.0.) Can you provide me a simple test suite to check whether it's the fault of LLVM?
I've finally built a git version of Mesa with an SVN version of LLVM. Now the image is fixed, but the performance is 1/4 of the original.
(In reply to Icenowy Zheng from comment #10)
> I've finally built a git version of Mesa with an SVN version of LLVM.
>
> Now the image is fixed, but the performance is 1/4 of the original.

Just a sanity check, but do you still see "llvmpipe" in the GL_RENDERER string printed out by glmark2? Ie. something like:

  GL_RENDERER:   Gallium 0.4 on llvmpipe (LLVM 3.7, 128 bits)

Just to double-check that you aren't falling back to swrast..
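A quick way to check outside of glmark2, assuming the glxinfo utility is installed:

  # Should report llvmpipe; "softpipe" or a classic swrast string
  # would mean a slower fallback path is in use
  LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "OpenGL renderer"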
Yes. And it's still faster than softpipe. It seems that loading a scene takes too much time: once the animation has started, it's smooth (but the resulting FPS number is low).
I don't know why it would be so much slower with newer LLVM, unless somehow with the old version it didn't really calculate all pixels due to the incorrect shuffle. But in any case, glad it's actually working properly with newer LLVM.