Bug 75524

Summary: Shared Memory queuing up in xlib backend for slow X Server 1.9.5
Product: cairo    Reporter: kasberger
Component: xlib backend    Assignee: Chris Wilson <chris>
Status: RESOLVED MOVED    QA Contact: cairo-bugs mailing list <cairo-bugs>
Severity: normal    
Priority: medium    
Version: 1.12.16   
Hardware: x86 (IA32)   
OS: Linux (All)   
Whiteboard:
i915 platform: i915 features:

Description kasberger 2014-02-26 11:46:20 UTC
I ran gvncviewer (from the gtk-vnc library) in fullscreen mode.
Under heavy load, e.g. when the complete screen content changes on the VNC server side every 0.01 s, gvncviewer starts queueing up memory in SHM (see the pmap output below).
"Queueing up" means that more and more shared memory segments are allocated by cairo.

The problem lies in the function _cairo_xlib_shm_info_create.
It tries to find free space in the current SHM pool.
If that fails, it simply creates a new SHM segment.
This happens all the time, so on my slow computer it accumulates approx. 1 GB in 10 minutes.

I assume that when the X server is too slow to keep up with the workload, it keeps the shared memory segments busy until it is done. As long as the X server is slower than the rate at which the cairo backend produces work, the shared memory keeps growing until out of memory.

For verification I just added a sync(display) call before searching the SHM pool, and the problem is gone.

Here is a shortened example of the pmap output of the process after running for a while:

08048000      20K r-xp  /usr/bin/gvncviewer
0804d000       4K rw-p  /usr/bin/gvncviewer
0804e000     656K rw-p  [heap]
7c8e6000     196K rw-p    [ anon ]
7c917000   32768K rw-s  /SYSV00000000
7e917000     196K rw-p    [ anon ]
7e948000   32768K rw-s  /SYSV00000000
80948000     196K rw-p    [ anon ]
80979000   32768K rw-s  /SYSV00000000
82979000     196K rw-p    [ anon ]
829aa000   32768K rw-s  /SYSV00000000
849aa000     196K rw-p    [ anon ]
849db000   32768K rw-s  /SYSV00000000
869db000     196K rw-p    [ anon ]
86a0c000   32768K rw-s  /SYSV00000000
88a0c000     196K rw-p    [ anon ]
88a3d000   32768K rw-s  /SYSV00000000
8aa3d000     196K rw-p    [ anon ]
8aa6e000   32768K rw-s  /SYSV00000000
8ca6e000     196K rw-p    [ anon ]
8ca9f000   32768K rw-s  /SYSV00000000
8ea9f000     196K rw-p    [ anon ]
8ead0000   32768K rw-s  /SYSV00000000
90ad0000     196K rw-p    [ anon ]
90b01000   32768K rw-s  /SYSV00000000
92b01000     196K rw-p    [ anon ]
92b32000   32768K rw-s  /SYSV00000000
94b32000     196K rw-p    [ anon ]
94b63000   32768K rw-s  /SYSV00000000
96b63000     196K rw-p    [ anon ]
96b94000   32768K rw-s  /SYSV00000000
98b94000     196K rw-p    [ anon ]
98bc5000   32768K rw-s  /SYSV00000000
9abc5000     196K rw-p    [ anon ]
9abf6000   32768K rw-s  /SYSV00000000
9cbf6000     196K rw-p    [ anon ]
9cc27000   32768K rw-s  /SYSV00000000
9ec27000     196K rw-p    [ anon ]
9ec58000   32768K rw-s  /SYSV00000000
a0c58000     196K rw-p    [ anon ]
a0c89000   32768K rw-s  /SYSV00000000
a2c89000     196K rw-p    [ anon ]
a2cba000   32768K rw-s  /SYSV00000000
a4cba000     196K rw-p    [ anon ]
a4ceb000   32768K rw-s  /SYSV00000000
a6ceb000     196K rw-p    [ anon ]
a6d1c000   32768K rw-s  /SYSV00000000
a8d1c000     196K rw-p    [ anon ]
a8d4d000   32768K rw-s  /SYSV00000000
aad4d000     196K rw-p    [ anon ]
aad7e000   32768K rw-s  /SYSV00000000
acd7e000     196K rw-p    [ anon ]
acdaf000   32768K rw-s  /SYSV00000000
aedaf000     196K rw-p    [ anon ]
aede0000   32768K rw-s  /SYSV00000000
b0de0000     196K rw-p    [ anon ]
b0e11000   32768K rw-s  /SYSV00000000
b2e11000    5128K rw-p    [ anon ]
...
Comment 1 Chris Wilson 2014-02-26 11:51:25 UTC
Why is the client running unthrottled?
Comment 2 kasberger 2014-02-26 11:56:33 UTC
I don't know how I could throttle the VNC client. It receives a new screen from the VNC server and displays it; that is all it does.
Sorry, maybe I have missed the point of your question entirely.
Comment 3 kasberger 2014-02-26 12:44:38 UTC
Oh, forget it: it is using cairo_scale, which causes the heavy computation on an old Intel Atom with EMGD (PowerVR) and a poorly supported driver from Intel.
Comment 4 kasberger 2014-03-20 09:52:18 UTC
OK, I think if nobody feels responsible for this problem it is better to close this bug.

From the application there is no way to see that the cairo xlib backend is flooding the X server:
a) cairo claims it is not cairo's responsibility if the X server is too slow, and
b) the X server cannot change anything if data arrives faster than it can process it.

I have added a patch to my xlib backend that always syncs with the X server when the 32 MB shared memory page is full, but this is not a good solution, just a workaround.
Comment 5 Chris Wilson 2014-03-20 09:59:25 UTC
My point is that the client is pushing data faster than the Xserver can render it, so the client is building up a massive output latency. It tends to cause very annoying lag. However, how to close that feedback loop is not obvious and may require an extra interface in cairo (something like cairo_device_throttle).
Comment 6 kasberger 2014-03-25 15:20:05 UTC
(In reply to comment #5)
> My point is that the client is pushing data faster than the Xserver can
> render it, so the client is building up a massive output latency. It tends
> to cause very annoying lag. However, how to close that feedback loop is not
> obvious and may require an extra interface in cairo (something like
> cairo_device_throttle).

Yes, I agree. But if a client is to do active throttling, it needs two things:
a) the intelligence to know when throttling is useful (how do you measure the speed of an X server?), and
b) the intelligence to know how to throttle down.

What about a callback invoked each time the cairo xlib backend allocates or deallocates a shared memory arena? For the client this would be a good hint that something is wrong with the X server.
Or can you think of other measurable parameters indicating X server load?

Oh, I think I am too closely attached to my own problem.
What about cairo_device_throttle_strategy(..), e.g. with NONE (= old behavior), COWARD (= try to get in sync with the X server every time), and BEST (= sync only if really needed)?

Maybe the second solution would be much better. What do you think?
Comment 7 GitLab Migration User 2018-08-25 13:49:30 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/cairo/cairo/issues/212.

Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. How we collect and use information is described in our Privacy Policy.