Bug 93570 - the image of llvmpipe has a low quality on arm (with too many points on it)
Summary: the image of llvmpipe has a low quality on arm (with too many points on it)
Status: RESOLVED FIXED
Alias: None
Product: Mesa
Classification: Unclassified
Component: Drivers/X11
Version: 11.0
Hardware: ARM Linux (All)
Priority: medium
Severity: normal
Assignee: mesa-dev
QA Contact: mesa-dev
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-01-03 17:30 UTC by Icenowy Zheng
Modified: 2016-01-08 01:26 UTC
CC List: 2 users



Attachments
The result on arm, the points are shown (3.53 MB, image/jpeg)
2016-01-03 17:30 UTC, Icenowy Zheng
Details
attachment-32059-0.html (1.72 KB, text/html)
2016-01-04 16:34 UTC, Icenowy Zheng
Details

Description Icenowy Zheng 2016-01-03 17:30:41 UTC
Created attachment 120782 [details]
The result on arm, the points are shown

I've built a llvmpipe-enabled mesa on my Allwinner A33 (Quad Cortex-A7) device.

I tried to run glmark2 on it, but the rendered image quality is very poor, with stray points all over it.

On my laptop (i5-3230M), when I force LIBGL_ALWAYS_SOFTWARE, this problem does not occur.
Comment 1 Icenowy Zheng 2016-01-03 17:32:08 UTC
The attachment is a JPEG, shot with my mobile phone.
Comment 2 Icenowy Zheng 2016-01-03 17:34:58 UTC
icenowy [ ~ ] ! glmark2 -b build
libGL error: unable to load driver: mali_drm_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: mali_drm
** GLX does not support GLX_EXT_swap_control or GLX_MESA_swap_control!
** Failed to set swap interval. Results may be bounded above by refresh rate.
=======================================================
    glmark2 2014.03
=======================================================
    OpenGL Information
    GL_VENDOR:     VMware, Inc.
    GL_RENDERER:   Gallium 0.4 on llvmpipe (LLVM 3.7, 128 bits)
    GL_VERSION:    3.0 Mesa 11.1.0
=======================================================
** GLX does not support GLX_EXT_swap_control or GLX_MESA_swap_control!
** Failed to set swap interval. Results may be bounded above by refresh rate.
[build] <default>: FPS: 13 FrameTime: 76.923 ms
=======================================================
                                  glmark2 Score: 13 
=======================================================


Here is the log from that run.
Comment 3 Ilia Mirkin 2016-01-03 18:14:46 UTC
A fairer comparison would be

LIBGL_ALWAYS_SOFTWARE=1 LP_NATIVE_VECTOR_WIDTH=128

Since I assume it will otherwise use 256-bit wide vectors on your Intel CPU. But it appears that the rendering is actually incorrect there. My personal guess is that LLVM on ARM has some issues, but this is based purely on the fact that you're seeing misrendering :)

It may be worthwhile to build mesa-git and llvm-svn and see if the issue persists. (mesa-git should be able to build against the llvm head, while released mesa probably won't.)
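The comparison Ilia suggests could look like this (a sketch; LIBGL_ALWAYS_SOFTWARE and LP_NATIVE_VECTOR_WIDTH are Mesa/llvmpipe environment variables, and the glmark2 invocation mirrors the one in comment 2):

```shell
# Sketch: force Mesa's software rasterizer and pin llvmpipe to
# 128-bit vectors so an AVX-capable x86 CPU matches NEON's width.
export LIBGL_ALWAYS_SOFTWARE=1     # bypass the hardware DRI driver
export LP_NATIVE_VECTOR_WIDTH=128  # llvmpipe SIMD width in bits
glmark2 -b build
```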
Comment 4 Icenowy Zheng 2016-01-03 18:35:19 UTC
Of course I am not benchmarking.
But the image *does* misrender.

It will take me days to build an LLVM and Mesa combination, and the build may fail (my device has only 512 MB of RAM).
Comment 5 Roland Scheidegger 2016-01-03 19:37:04 UTC
This looks rather interesting, like a shuffle gone wrong (it always affects the same 3 pixels in each 4x4 pixel stamp). This chip does have NEON instructions, right? I think LLVM used to have quite some problems if it needed to lower all the vector code to scalars (not to mention the horrific performance).
Theoretically llvmpipe should work pretty well on ARM (albeit with no ARM-specific optimizations, so slower than it could be), but there were spurious reports of it not working well earlier (not many people try, it seems).
It could also well be an LLVM bug; I'd suggest a newer version if you're not already using the latest.
Comment 6 Icenowy Zheng 2016-01-04 03:37:31 UTC
My CPU (Cortex-A7) *does* have NEON. All A7 cores have NEON.

I used LLVM 3.7, and I'm now building llvm-svn (using https://github.com/llvm-mirror/llvm).

Note: you can run glmark2 yourself and compare the image to the one that I uploaded.
Comment 7 Icenowy Zheng 2016-01-04 07:37:34 UTC
In addition, my DDX driver is https://github.com/ssvb/xf86-video-fbturbo, which uses NEON to accelerate operations such as bitblt.
Comment 8 Icenowy Zheng 2016-01-04 07:40:18 UTC
It seems that the answer to the question above is "no": I switched to the plain fbdev DDX, and the problem is still there.

Is there any way to dump the binary code generated by LLVM?
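One possible answer (a hedged sketch: gallivm in Mesa honors a GALLIVM_DEBUG environment variable, though the exact flag names vary across versions and some require a debug-enabled build):

```shell
# Sketch: ask llvmpipe/gallivm to dump the code it generates.
# "ir" dumps the LLVM IR, "asm" the generated machine code;
# the available flag names may differ in your Mesa version.
GALLIVM_DEBUG=ir,asm glmark2 -b build 2> gallivm-dump.txt
```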
Comment 9 Icenowy Zheng 2016-01-04 11:45:56 UTC
I'm sorry, but my device is not capable of building an SVN version of LLVM. (The original LLVM 3.7 was built on a buildbot, as was Mesa 11.0.)

Can you provide a simple test suite to check whether the fault lies in LLVM?
Comment 10 Icenowy Zheng 2016-01-04 15:52:44 UTC
I've finally built a Git version of Mesa with an SVN version of LLVM.

Now the image is fixed, but the performance is about 1/4 of the original.
Comment 11 Rob Clark 2016-01-04 16:22:48 UTC
(In reply to Icenowy Zheng from comment #10)
> I've finally built a Git version of Mesa with an SVN version of LLVM.
> 
> Now the image is fixed, but the performance is about 1/4 of the original.

Just a sanity check, but do you still see "llvmpipe" in the GL_RENDERER string printed out by glmark2? I.e. something like:

  GL_RENDERER:   Gallium 0.4 on llvmpipe (LLVM 3.7, 128 bits)

Just to double-check that you aren't falling back to swrast.
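The renderer string can also be checked without a full glmark2 run, e.g. with glxinfo from mesa-utils (a sketch):

```shell
# Print only the renderer line; it should mention "llvmpipe",
# not "softpipe" or "Software Rasterizer".
glxinfo | grep -i "opengl renderer"
```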
Comment 12 Icenowy Zheng 2016-01-04 16:34:32 UTC
Created attachment 120797 [details]
attachment-32059-0.html

Yes, and it's still faster than softpipe.

It seems that loading a scene takes too much time; once the animation starts it's smooth (but the resulting FPS number is low).

Comment 13 Roland Scheidegger 2016-01-08 01:26:43 UTC
I don't know why it would be so much slower with newer llvm, unless somehow with the old version it didn't really calculate all pixels due to the incorrect shuffle. But in any case, glad it's actually working properly with newer llvm.

