| Summary: | Big glamor regression in Xorg server 1.16.99.1 GIT: x11perf 1.5 Test: PutImage XY 500x500 Square | | |
|---|---|---|---|
| Product: | xorg | Reporter: | darkbasic <darkbasic> |
| Component: | Server/Acceleration/glamor | Assignee: | Xorg Project Team <xorg-team> |
| Status: | RESOLVED FIXED | QA Contact: | Xorg Project Team <xorg-team> |
| Severity: | normal | | |
| Priority: | medium | | |
| Version: | git | | |
| Hardware: | Other | | |
| OS: | All | | |
Description

darkbasic 2014-09-22 11:36:27 UTC
It is radeon specific: Keith Packard cannot reproduce it.

Do you know of any real-world applications using this functionality? Can you bisect?

Yes, I will try to bisect it.

I wasted hours bisecting the xorg server without success, and then hours bisecting Mesa, but this damn bug is in the kernel. I will try to bisect the kernel too; I really hate bisecting the kernel. Kernel 3.14.3 gives 61.7/sec in PutImage XY 500x500 square, while drm-next-3.18 gives only 0.6/sec. Unfortunately I will not be able to bisect the kernel until at least next week.

No need to bisect, I know what the problem is.

Awesome, thanks.

I just pushed the Mesa side of the fix to Git master, see below. The glamor side of the fix has been reviewed but not applied yet.

Module: Mesa
Branch: master
Commit: 7e55c3b352b6616fa2780f683dd6c8e1a3f61815
URL: http://cgit.freedesktop.org/mesa/mesa/commit/?id=7e55c3b352b6616fa2780f683dd6c8e1a3f61815
Author: Michel Dänzer <michel.daenzer@amd.com>
Date: Thu Sep 25 15:29:56 2014 +0900

    st/mesa: Use PIPE_USAGE_STAGING for GL_STATIC/DYNAMIC/STREAM_READ buffers

    Such buffers are only useful for reading from them with the CPU, so we
    need to make sure CPU reads are fast.

commit d3d845ca9e92f0a2ccde93f4242d7769cfe14164
Author: Michel Dänzer <michel.daenzer@amd.com>
Date: Thu Sep 25 15:27:22 2014 +0900

    glamor: Use GL_STREAM_READ also for read/write access to a PBO

    Otherwise the CPU may end up reading from non-cacheable memory, which is
    very slow.
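For context on the fix: the PutImage XY path makes glamor download pixmap data from the GPU through a pixel buffer object (PBO) and read it with the CPU (the test in the summary is the one run by `x11perf -putimagexy500`). If the PBO is created with a write-oriented usage hint, the driver may place it in uncached write-combined memory, and every CPU read crawls. Below is a minimal sketch of that readback pattern, assuming a current GL context; it is an illustration of the technique the commits address, not glamor's actual code, and the helper name is hypothetical.

```c
/*
 * Minimal sketch of a CPU readback through a PBO, assuming a current
 * GL context. Illustration only, not glamor's actual code; the helper
 * name is hypothetical. glamor itself uses libepoxy, which resolves
 * the GL entry points for us.
 */
#include <epoxy/gl.h>
#include <string.h>

/* Read back a w x h RGBA region of the framebuffer into dst. */
static void read_pixels_via_pbo(int w, int h, void *dst)
{
    GLuint pbo;
    size_t size = (size_t)w * (size_t)h * 4;

    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);

    /*
     * The usage hint is the heart of this bug: GL_STREAM_READ tells the
     * driver the CPU will read this buffer, which Mesa's st/mesa now
     * maps to PIPE_USAGE_STAGING (CPU-cached memory). A *_DRAW hint
     * here could leave the mapping in uncached memory, making the
     * memcpy below extremely slow.
     */
    glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);

    /* GPU writes the pixels into the PBO (last argument is a buffer offset). */
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);

    /* CPU reads them back through a mapping. */
    void *src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        memcpy(dst, src, size);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }

    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}
```

The second commit matters because glamor sometimes maps such a PBO for read/write access (presumably download, modify via a software fallback, upload again); passing GL_STREAM_READ even in that case keeps the mapping CPU-cacheable.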