Hello.

spice-gtk connects to the server, the guest OS is Win7, and the media player plays a video fullscreen; after a long-running test the spice-gtk client runs out of memory.

cmd: G_SLICE='always-malloc' spicy

Test env:
client: ARM Linux + spice-gtk (0.22 and 0.29, PulseAudio)
guest OS: Win7
server: CentOS 6.5 + spice 0.12.4

Test:
1. Win7 media player plays a video in a small window: spice-gtk memory usage is normal.
2. Stop the PulseAudio audio, then Win7 media player plays the video fullscreen: OK.
3. Media player plays the video fullscreen for a long-running test: the spice-gtk process runs out of memory (it was killed by the kernel).
4. spice-gtk with GStreamer audio: test result is the same as spice-gtk with PulseAudio.
*** Bug 93496 has been marked as a duplicate of this bug. ***
Can you run valgrind with memcheck on the client side to see what is leaking? If that does not work, running valgrind with massif for a good period could also be helpful.
Thank you for the reply. I have found that the display data is handled by display_handle_stream_data(); when the CPU is under high load the data is saved in st->msgq, and after a long run the queue gets very long. The system then runs out of memory.
(In reply to linp.lin from comment #3)
> Thank you for the reply. I have found that the display data is handled by
> display_handle_stream_data(); when the CPU is under high load the data is
> saved in st->msgq, and after a long run the queue gets very long. The
> system then runs out of memory.

Right! I can see that the memory will grow under high CPU usage (which can easily happen with streaming, I guess). The memory of st->msgq is released in the coroutine context... As I don't have an ARM board to test on, the debug information and the valgrind output will be useful anyway. Please attach the output of spicy with --spice-debug so I can see how everything is being handled in your environment.
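To illustrate the failure mode, here is a minimal self-contained GLib sketch (illustrative names only, not the actual spice-gtk source): an "arriving frame" timeout pushes into a GQueue standing in for st->msgq, while a slower "render" timeout pops and frees entries. Whenever the consuming side runs slower than frames arrive -- which is what a CPU-starved main loop / coroutine context amounts to -- the queue length climbs without bound.

/* queue-growth.c: model of an unbounded frame queue with a slow consumer.
 * Hypothetical demo, not spice-gtk code.
 * Build: gcc queue-growth.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>

static GQueue *msgq;   /* stands in for st->msgq */

/* Producer: a stream frame arrives roughly every 40 ms (~25 fps). */
static gboolean frame_arrived(gpointer data)
{
    g_queue_push_tail(msgq, g_malloc(64 * 1024));   /* pretend frame payload */
    g_print("queue len = %u\n", g_queue_get_length(msgq));
    return G_SOURCE_CONTINUE;
}

/* Consumer: drains one frame per tick, but ticks slower than the producer,
 * mimicking a render path starved of CPU time. */
static gboolean render_one(gpointer data)
{
    gpointer frame = g_queue_pop_head(msgq);
    if (frame != NULL)
        g_free(frame);
    return G_SOURCE_CONTINUE;
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    msgq = g_queue_new();
    g_timeout_add(40,  frame_arrived, NULL);   /* ~25 frames/s in      */
    g_timeout_add(100, render_one,    NULL);   /* only ~10 frames/s out */
    g_main_loop_run(loop);                     /* queue length grows forever */
    return 0;
}

In the real client the queued messages are released from that same main-loop/coroutine side, so anything that keeps it busy (software MJPEG decode of a fullscreen stream, in this report) has the same effect as the slow consumer above.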
1. valgrind can't run on my ARM board, because in my environment libc6-dbg is missing.
2. I added the following to display_handle_stream_data:
   CHANNEL_DEBUG(channel, "st->msgq len=%d", g_queue_get_length(st->msgq));
   a. Run spicy on ARMv7.
   b. The VDI media player plays a video.
   c. Use stress --cpu 10 --io 4 to increase the client CPU load.
   d. Change the spice-gtk window from a small window to fullscreen.
3. After a period of time, spice-gtk was killed by the Linux kernel:
   a. st->msgq grew, up to "st->msgq len=1160"
   b. "scheduling next stream render in 156384 ms"
   c. "playback set_delay 23835 ms"
4. Can you tell me how audio and video are synchronized? (see the sketch below)
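Roughly, SPICE keeps audio and video in step with a shared multimedia clock ("mm-time"): each stream frame carries a server mm-time stamp, the client tracks its own estimate of that clock, and a frame is rendered once its stamp is reached. The audio path adjusts the shared clock by the playback latency (the "playback set_delay" line above), which delays video so it matches what is actually audible. The sketch below is a simplified, self-contained model with hypothetical helper names -- the real logic lives in the display and playback channels -- but it shows why a large audio delay plus a backed-up decode path yields render delays like the 156384 ms in the log, and therefore a growing st->msgq.

/* mmtime-sync.c: simplified model of mm-time based A/V sync (hypothetical). */
#include <glib.h>

/* Offset between the local monotonic clock (in ms) and the server mm-time
 * clock the session tracks. */
static gint64 mmtime_offset;

static guint32 session_mmtime_now(void)
{
    return (guint32)(g_get_monotonic_time() / 1000 + mmtime_offset);
}

/* Audio side: playback latency changed, pull the shared clock back so video
 * is shown when the matching audio is heard. (Simplified re-basing.) */
static void playback_delay_changed(guint32 delay_ms)
{
    mmtime_offset -= delay_ms;
}

/* Video side: how long a frame with this server timestamp should wait before
 * rendering; 0 means "already late, render now". */
static guint32 render_delay_ms(guint32 frame_mmtime)
{
    guint32 now = session_mmtime_now();
    return frame_mmtime > now ? frame_mmtime - now : 0;
}

int main(void)
{
    guint32 frame_ts = session_mmtime_now() + 40;    /* frame due in ~40 ms */

    g_print("before set_delay: render in %u ms\n", render_delay_ms(frame_ts));
    playback_delay_changed(23835);                   /* delay value from the log */
    g_print("after  set_delay: render in %u ms\n", render_delay_ms(frame_ts));
    return 0;
}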
Created attachment 121152 [details] spice-debug log
1. display_handle_stream_data: limit the st->msgq length

+    /* st->msgq is full, clear the queue */
+    if (g_queue_get_length(st->msgq) >= MAX_HARD_MSGQ_LEN) {
+        CHANNEL_DEBUG(channel, "st->msgq=%d and clear st->msgq",
+                      g_queue_get_length(st->msgq));
+        g_queue_foreach(st->msgq, _msg_in_unref_func, NULL);
+        g_queue_clear(st->msgq);
+        if (st->timeout != 0) {
+            g_source_remove(st->timeout);
+            st->timeout = 0;
+        }
+    }

2. Use the hardware decoder (HiSilicon chip) to decode the JPEG data; this modifies the stream_mjpeg_data function and brings the CPU load down by 70-80%.
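As a design note on the patch above: flushing the whole queue once the cap is hit frees memory in one go but drops every pending frame at once; an alternative is to cap the queue by discarding only the oldest frames, which keeps the newest picture flowing. Below is a minimal self-contained sketch of that policy -- generic GLib code with a hypothetical frame_free() helper and an assumed cap value, not a drop-in spice-gtk change.

/* bounded-queue.c: drop-oldest policy for a capped frame queue (sketch). */
#include <glib.h>

#define MAX_HARD_MSGQ_LEN 100            /* assumed cap, pick to taste */

/* Stands in for whatever releases one queued message (the patch above uses
 * _msg_in_unref_func for this). */
static void frame_free(gpointer frame)
{
    g_free(frame);
}

/* Keep at most MAX_HARD_MSGQ_LEN entries: discard the oldest first, then
 * append the new frame. */
static void msgq_push_bounded(GQueue *msgq, gpointer frame)
{
    while (g_queue_get_length(msgq) >= MAX_HARD_MSGQ_LEN)
        frame_free(g_queue_pop_head(msgq));

    g_queue_push_tail(msgq, frame);
}

int main(void)
{
    GQueue *msgq = g_queue_new();

    /* Push far more frames than the cap; the length never exceeds it. */
    for (int i = 0; i < 500; i++)
        msgq_push_bounded(msgq, g_malloc(64));
    g_print("queue len = %u\n", g_queue_get_length(msgq));

    g_queue_free_full(msgq, frame_free);
    return 0;
}

Either way the cap only bounds memory; the underlying cause in this report was decode cost, which is why offloading the MJPEG decode to the hardware decoder (point 2 above) is what actually brought the CPU load down.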
(In reply to linp.lin from comment #7)
> 1. display_handle_stream_data: limit the st->msgq length
> [patch clearing st->msgq at MAX_HARD_MSGQ_LEN snipped]
> 2. Use the hardware decoder (HiSilicon chip) to decode the JPEG data; this
>    modifies the stream_mjpeg_data function and brings the CPU load down by
>    70-80%.

Great that it solved your issue. I think the patch would be reviewed and improved and land upstream. Could you either send the patch to the mailing list [1] or just attach the format-patch here [0]?

[0] git format-patch -1
[1] git send-email *.patch --to=spice-devel@lists.freedesktop.org

Thanks again!
> I think the patch would be reviewed and improved and land upstream.

I mean that this patch could go upstream after proper review and possible improvements :)