Bug 47390

Summary: module-null-sink keeps low latency when not needed
Product: PulseAudio
Component: modules
Status: RESOLVED MOVED
Severity: normal
Priority: medium
Version: unspecified
Hardware: Other
OS: All
Reporter: Tanu Kaskinen <tanuk>
Assignee: pulseaudio-bugs
QA Contact: pulseaudio-bugs
CC: lennart

Description Tanu Kaskinen 2012-03-15 20:40:50 UTC
Copied from http://lists.freedesktop.org/archives/pulseaudio-discuss/2012-January/012729.html

Sean McNamara writes:

"Hi all,

First, my environment:

* 2nd-gen Nehalem quad core dedicated server
* Debian Testing (x86_64)
* PulseAudio 1.1 (tarball from website)
* OpenVZ container

Relevant PulseAudio settings:
* speex-float-2
* NO physical soundcard; just module-null-sink
* Flat-volumes enabled
* Default latency: 100 ms
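
For reference, a configuration matching those settings would look roughly
like this (a minimal sketch; the sink name and exact module arguments are
my guesses, not taken from the report):

    # daemon.conf
    resample-method = speex-float-2
    flat-volumes = yes

    # default.pa
    load-module module-null-sink sink_name=null
    load-module module-suspend-on-idle
    set-default-sink null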

Use case: Basically I'm looking for an inexpensive (CPU-wise) software
mixer for two streams on the local box, with as low latency as I can
get without making the CPU usage too high; in other words, a
Pareto-optimal setup, or as close to one as I can get.

"Stream A" is Mumble client with playback and capture streams (native
PA protocol over shm). "Stream B" is a gst-launch pipeline with only a
capture stream, using pulsesrc. All capture and playback streams are
taken out against one module-null-sink and its monitor, the default
sink and source respectively.
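
Stream B would be started with something along these lines (the report
doesn't give the actual pipeline, so the device name and downstream
elements here are illustrative; "null.monitor" assumes the sink was
named "null"):

    gst-launch-0.10 pulsesrc device=null.monitor ! audioconvert ! \
        audioresample ! filesink location=capture.raw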

Defective behavior:

1. Start PulseAudio and observe CPU usage. PA daemon is using 0% CPU
because of module-suspend-on-idle.
2. Start Stream B and observe CPU usage. PA daemon and client are both
using 8% CPU according to top.
3. Start Stream A and observe CPU usage. The Stream A client is
configured to request a latency of 10 ms (roughly as sketched after
this list). PA daemon and each client jump to 25% CPU usage apiece.
4. Kill the Stream A process with a SIGTERM, then wait a few seconds
and observe CPU usage. PA daemon and Stream B are still using 25% CPU
apiece.
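
For context, the 10 ms request in step 3 amounts to something like the
following (a minimal sketch using the simple API rather than Mumble's
actual code; the app and stream names are made up). As far as I know,
the simple API passes PA_STREAM_ADJUST_LATENCY internally, so tlength
is treated as the target overall latency:

    /* Minimal sketch of a 10 ms latency request; not Mumble's actual code.
     * Build: gcc latency.c $(pkg-config --cflags --libs libpulse-simple) */
    #include <string.h>
    #include <pulse/simple.h>

    int main(void) {
        pa_sample_spec ss = { PA_SAMPLE_S16LE, 48000, 2 };
        pa_buffer_attr attr;
        int error;

        memset(&attr, 0xff, sizeof(attr)); /* (uint32_t) -1 == "server default" */
        attr.tlength = (uint32_t) pa_usec_to_bytes(10 * PA_USEC_PER_MSEC, &ss);

        pa_simple *s = pa_simple_new(NULL, "stream-a", PA_STREAM_PLAYBACK, NULL,
                                     "playback", &ss, NULL, &attr, &error);
        if (!s)
            return 1;
        /* ... feed audio with pa_simple_write() ... */
        pa_simple_free(s);
        return 0;
    }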

Expected results:
(1) When Stream A is killed, PA should realize that its
lowest-requested-latency client has disconnected, and tell the
remaining client(s) to go back to either the default latency or the
next highest requested latency in the chain (see the sketch after this
list).
(2) In step 3 of the defective-behavior steps, the CPU usage of Stream
B should not increase. It should be possible for one client to cause a
lot of CPU activity due to a low latency request, while a client that
doesn't need low latency keeps sending buffers in larger, less
frequent chunks for better efficiency.
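
In pseudocode, expected result (1) boils down to something like this
(hypothetical types, not PulseAudio source code):

    /* Hypothetical sketch of expected result (1); not PulseAudio source.
     * Whenever a client disconnects, the sink's effective latency should
     * be recomputed as the lowest latency still requested by anyone,
     * falling back to the default when no client asks for less. */
    typedef struct client {
        unsigned requested_latency_usec;
        struct client *next;
    } client;

    unsigned effective_latency_usec(const client *clients, unsigned default_usec) {
        unsigned lowest = default_usec;
        for (const client *c = clients; c; c = c->next)
            if (c->requested_latency_usec < lowest)
                lowest = c->requested_latency_usec;
        return lowest;
    }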

My understanding is that time-based scheduling is designed to handle
both (1) and (2) of the expected results. I haven't tested this
particular setup with a hardware module-alsa-sink on a local box, but
I have a PA daemon locally with an uptime of several weeks that is
only using 0.5% CPU while playing a Rhythmbox stream, and this daemon
has had clients of all kinds of different latencies (Adobe Flash,
Mumble, etc) connect to it over time.

So my conclusion is that either
(A) time-based scheduling isn't implemented for module-null-sink, or
(B) there is some bug causing this strange behavior.

In case (A), would it be possible, even in principle, to implement it?
In case (B), is this a bug that anyone can look into? I can provide as
much additional info as required.
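
One quick way to check hypothesis (A) would be to see whether the null
sink advertises dynamic latency at all (the exact flag list is an
assumption on my part):

    pactl list sinks | grep -E 'Name:|Flags:|Latency:'

If the Flags line lacks DYNAMIC_LATENCY, the sink can't renegotiate
latency after a client leaves, which would point at (A) rather than (B).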

Maybe there's some other third possibility, but I'm just not expecting
this kind of behavior from PA. I thought all the tsched work was meant
to juggle latency-sensitive streams simultaneously with high-latency
streams without impacting the latter's CPU usage?

Thanks,

Sean"
Comment 1 GitLab Migration User 2018-07-30 09:41:50 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/pulseaudio/pulseaudio/issues/100.
