| Summary: | Implement opus audio compression | | |
|---|---|---|---|
| Product: | PulseAudio | Reporter: | Jonas Heinrich <onny> |
| Component: | core | Assignee: | pulseaudio-bugs |
| Status: | RESOLVED MOVED | QA Contact: | pulseaudio-bugs |
| Severity: | enhancement | | |
| Priority: | medium | CC: | antonis+freedesktop.org, dev.rindeal+bugs.freedesktop.org, gerrit, ht990332, lennart, mabo, mail |
| Version: | unspecified | | |
| Hardware: | Other | | |
| OS: | All | | |
Attachments:
- patch that adds opus compression to protocol-native/module-tunnel-sink-new
- Same as opus patch, with extra patches to fix module-combine-sink with network sinks.
Description
Jonas Heinrich 2012-11-11 19:59:31 UTC
Thanks for the suggestion. This was actually already in the plans in the sense that "we want this" (but good that it's now in Bugzilla too). Unfortunately, I think nobody has plans to do this in the near future. Patches welcome!

Opus compression would be great :) Any news?

No news.

The Speex website suggests Opus. This would be a nice addition.

Created attachment 117144 [details] [review]
patch that adds opus compression to protocol-native/module-tunnel-sink-new

Here is a patch that adds opus compression to protocol-native/module-tunnel-sink-new. Steps to use it:

1. Apply the patch on both client and server (make sure libopus/libopus-dev is installed), then compile.
2. Add the client IP to the access list of the server: load-module module-native-protocol-tcp auth-ip-acl=CLIENTIP
3. Set up the new tunnel on the client: load-module module-tunnel-sink-new sink_name=tunnel server=tcp:SERVERIP:4713 sink="alsa_output.default" compression="opus"
4. Select the tunnel as your sink.
5. Enjoy audio compression.

I'm quite against the idea of having codec support in PulseAudio itself. In my opinion, the right way to do this is to first move our RTP support to use GStreamer under the hood, and then potentially use that to do encoding if needed.

Created attachment 121428 [details] [review]
Same as opus patch, with extra patches to fix module-combine-sink with network sinks.

The original patch works great. I found that setting compression-frame_size=960 after compression="opus" helped with audio glitches when streaming video, caused by the network not being able to handle it. I have also worked this patch (attached) into another patch that adds rtpoll support to module-tunnel-sink-new and module-tunnel-source-new, and mainloop support to rtpoll, so you can now set up multiple servers and combine them into one sink without crashing PulseAudio.
Credit for the original patches: Tanu Kaskinen, Gerrit Wyen, and of course me (Gavin_Darkglider), for manually applying and testing all of these patches against PulseAudio 8, but I would assume that with a few offset changes they would also apply to 7.1.

Hi, I tried to compile this patch on my Raspberry Pi (Raspbian Jessie). If I try to load the tunnel module, I get the following error message:

symbol pa_stream_write_compressed, version PULSE_0 not defined in file libpulse.so.0

What's going wrong there?

Hi,
I have tried the module and it is working fine.
Question: is it also possible to configure the input source?
I tried the obvious one like:
pacmd
>>> load-module module-tunnel-source-new source_name=my_tunnel server=tcp:127.0.0.1:7100 source="alsa_input.pci-0000_00_1f.3.analog-stereo" compression="opus"
but it is complaining (without the compression parameter it works fine) :-)
Is this supported in this patch, or is some more work pending to get it working? Or am I approaching this in the wrong way?
My objective is to get a remote SIP client working.
Great patch :)
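A note on the compression-frame_size=960 setting mentioned above: Opus operates internally at 48 kHz and supports frame durations of 2.5, 5, 10, 20, 40, and 60 ms (RFC 6716), so 960 samples corresponds to a standard 20 ms frame. A quick sketch of the arithmetic (plain Python, nothing PulseAudio-specific):

```python
# compression-frame_size is given in samples at the Opus internal
# rate of 48 kHz. 960 samples is one of the standard Opus frame
# durations (RFC 6716 allows 2.5, 5, 10, 20, 40, and 60 ms).

OPUS_RATE_HZ = 48000
frame_size = 960  # the value reported above to help with glitches

frame_ms = frame_size / OPUS_RATE_HZ * 1000
packets_per_second = OPUS_RATE_HZ / frame_size

print(f"frame duration: {frame_ms:.1f} ms")          # 20.0 ms
print(f"packet rate:    {packets_per_second:.0f}/s")  # 50/s
```

A larger frame size trades latency for fewer, larger packets on the wire, which is presumably why it helped on a network that couldn't keep up.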
(In reply to Arun Raghavan from comment #6)
> I'm quite against the idea of having codec support in PulseAudio itself.
>
> In my opinion, the right way to do this is to first move our RTP support to use GStreamer under the hood, and then potentially use that to do encoding if needed.

The RTP modules are not useful when talking about a tunnel setup or a direct client-server connection over TCP. Can you clarify: are you against any compressed audio implementation in the native protocol, and if yes, why exactly?

There's a new version of the opus patch, and I thought I'd start reviewing it:
https://patchwork.freedesktop.org/patch/169038/

(In reply to Tanu Kaskinen from comment #10)
> (In reply to Arun Raghavan from comment #6)
> > I'm quite against the idea of having codec support in PulseAudio itself.
> >
> > In my opinion, the right way to do this is to first move our RTP support to use GStreamer under the hood, and then potentially use that to do encoding if needed.
>
> The RTP modules are not useful when talking about a tunnel setup or a direct client-server connection over TCP. Can you clarify: are you against any compressed audio implementation in the native protocol, and if yes, why exactly?
>
> There's a new version of the opus patch, and I thought I'd start reviewing it:
> https://patchwork.freedesktop.org/patch/169038/

I don't think _any_ part of PulseAudio should be talking to specific codecs. The SBC bit for BlueZ is a bit of an aberration (mostly because it's the only mandatory codec in that spec, and that is permanent). The reason for this is that today we add Opus support (if this was a few years ago, it might have been Vorbis), then we'll also want FLAC support, and then maybe MP3/AAC. And on embedded platforms, we might want to use h/w acceleration for these, and so on.
Basically, this works as a nice hack to get Opus support (which is great), but in terms of long-term maintainability it either means freezing the protocol on one codec, or a bunch of code talking to a bunch of different codecs. Which is why I think the right thing for us to do, for all things in PulseAudio that need codec support, is to use an underlying library, and GStreamer imo is a good fit for what we want to do. At some point, it would probably be nice to add GStreamer API to give us something closer to what we want -- API to provide a block of audio and get back a compressed frame -- but this is achievable today anyway.

FWIW, if the compression were just limited to being within the tunnel modules, I would have said it might be okay to add this in, since it can always be replaced with something more generic in the future. The patch does add this via public API, though, so that part makes it a no-go from my perspective.

(In reply to Arun Raghavan from comment #12)
> FWIW, if the compression were just limited to being within the tunnel modules, I would have said it might be okay to add this in, since it can always be replaced with something more generic in the future. The patch does add this via public API, though, so that part makes it a no-go from my perspective.

I don't like the public API changes either, but if we assume that those can be eliminated, are you against adding opus support to the native protocol? As far as I can see, it's not possible to limit this to just the tunnel modules, because they use the native protocol, and the other end doesn't have any special handling for tunnels; they are just normal client connections. Even if we were to use GStreamer instead of libopus, I think the native protocol would have to specifically support opus. We can't offload the codec negotiation to GStreamer in the native protocol (in the RTP modules we might be able to do that).
So, are you against any compression support in the native protocol or not?

(In reply to Tanu Kaskinen from comment #13)
[...]
> Even if we were to use GStreamer instead of libopus, I think the native protocol would have to specifically support opus. We can't offload the codec negotiation to GStreamer in the native protocol (in the RTP modules we might be able to do that).

In the RTP case, I imagine the negotiation would be part of configuration for the module (therefore in PA), and that is okay. It's particularly working with the compressed bitstream (encoding/decoding/parsing) that I think does not belong in PA.

> So, are you against any compression support in the native protocol or not?

I am not in favour of having encoding/decoding being part of our protocol. This added complexity in the native protocol is not worth the gains for the (imo) relatively uncommon use-case of tunnel modules.

I'm not against the native protocol supporting compressed audio, i.e. clients providing compressed audio for devices that support compressed playback. In fact, this is something I would actively like to have, but there are tricky bits to deal with: latency reporting, rewinds, etc. That said, if we had this, then the tunnel modules themselves could do the encode/decode.

I am curious about your views on this -- do you think this is something we should add to the native protocol, or are you batting for this since the work has been done, or ...?

(In reply to Arun Raghavan from comment #14)
> (In reply to Tanu Kaskinen from comment #13)
> > So, are you against any compression support in the native protocol or not?
>
> I am not in favour of having encoding/decoding being part of our protocol. This added complexity in the native protocol is not worth the gains for the (imo) relatively uncommon use-case of tunnel modules.
Ok, so if it was up to you, tunnels would never ever transparently compress the audio that gets sent over the network, because that causes an uncomfortable amount of complexity in the native protocol.

> I'm not against the native protocol supporting compressed audio, i.e. clients providing compressed audio for devices that support compressed playback. In fact, this is something I would actively like to have, but there are tricky bits to deal with: latency reporting, rewinds, etc.

Isn't this already supported? Or do you mean avoiding the IEC61937 wrapping?

> That said, if we had this, then the tunnel modules themselves could do the encode/decode.

I don't follow.

> I am curious about your views on this -- do you think this is something we should add to the native protocol, or are you batting for this since the work has been done, or ...?

In my opinion tunnels should not be forever doomed to waste bandwidth. The patch that was submitted should be reviewed, and I wouldn't like to give a response of "will not accept the feature, don't try again". I haven't looked deeply into the patch, so I don't know how close it is to my liking, but in principle transparent encoding/decoding in the TCP transport doesn't seem very complicated. It shouldn't affect e.g. rewinding, if all buffers are PCM and just the in-transit data is compressed.

(In reply to Tanu Kaskinen from comment #15)
> (In reply to Arun Raghavan from comment #14)
> > (In reply to Tanu Kaskinen from comment #13)
> > > So, are you against any compression support in the native protocol or not?
> >
> > I am not in favour of having encoding/decoding being part of our protocol. This added complexity in the native protocol is not worth the gains for the (imo) relatively uncommon use-case of tunnel modules.
> Ok, so if it was up to you, tunnels would never ever transparently compress the audio that gets sent over the network, because that causes an uncomfortable amount of complexity in the native protocol.

Actually, I did later add a way forwards -- support for compressed audio in the protocol (with compression left to clients, which tunnel could potentially do). It's not just uncomfortable complexity, but also a commitment to a single codec, a single implementation of that codec, or later exploding our internals to become a mini multimedia framework if we want to support more.

> > I'm not against the native protocol supporting compressed audio, i.e. clients providing compressed audio for devices that support compressed playback. In fact, this is something I would actively like to have, but there are tricky bits to deal with: latency reporting, rewinds, etc.
>
> Isn't this already supported? Or do you mean avoiding the IEC61937 wrapping?

I mean without IEC61937 payloading, yes. Think AAC/MP3 in Bluetooth, or an ALSA compressed device that does decode + render.

> > That said, if we had this, then the tunnel modules themselves could do the encode/decode.
>
> I don't follow.
>
> > I am curious about your views on this -- do you think this is something we should add to the native protocol, or are you batting for this since the work has been done, or ...?
>
> In my opinion tunnels should not be forever doomed to waste bandwidth. The patch that was submitted should be reviewed, and I wouldn't like to give a response of "will not accept the feature, don't try again". I haven't looked deeply into the patch, so I don't know how close it is to my liking, but in principle transparent encoding/decoding in the TCP transport doesn't seem very complicated. It shouldn't affect e.g. rewinding, if all buffers are PCM and just the in-transit data is compressed.

Except of course, it affects all the transports of the native protocol, not just TCP.
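For context on the bandwidth at stake in this discussion, a rough comparison of raw PCM versus an Opus stream (the 96 kbit/s Opus figure is an illustrative assumption, not anything specified by the patch):

```python
# Back-of-the-envelope comparison: uncompressed PCM bandwidth of a
# typical stream versus a typical Opus music bitrate. The 96 kbit/s
# Opus figure is an assumed encoder setting; actual rates vary.

def pcm_kbps(rate_hz: int, bits_per_sample: int, channels: int) -> float:
    """Uncompressed PCM bitrate in kbit/s."""
    return rate_hz * bits_per_sample * channels / 1000

pcm = pcm_kbps(44100, 16, 2)   # s16le stereo at 44.1 kHz
opus = 96                      # kbit/s, assumed setting

print(f"PCM:  {pcm:.1f} kbit/s")                    # 1411.2 kbit/s
print(f"Opus: {opus} kbit/s (~{pcm / opus:.0f}x less)")  # ~15x less
```

This is the order-of-magnitude saving a compressed tunnel would offer on constrained networks.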
(In reply to Arun Raghavan from comment #16)
> Actually, I did later add a way forwards -- support for compressed audio in the protocol (with compression left to clients, which tunnel could potentially do).

You seemed to talk only about hardware decoding, which is a different use case than saving bandwidth with regular hardware that expects PCM only. Or did I understand you wrong?

> It's not just uncomfortable complexity, but also a commitment to a single codec, a single implementation of that codec, or later exploding our internals to become a mini multimedia framework if we want to support more.

Committing to one codec seems a way better alternative than sticking to PCM only. If we had implemented this feature earlier, we might have chosen Vorbis, and that choice might be slightly annoying now that we have a better codec available, but being stuck to PCM and Vorbis would still be much better than being stuck to PCM. Not that I think we absolutely have to stick to just one codec. If someone wants a different codec later, we can discuss at that time whether it makes sense to add support for that codec or not.

Why do you say that we'd be committing to a single implementation of a codec? AFAIK, there's a specification, and conforming implementations are supposed to be interoperable.

> > In my opinion tunnels should not be forever doomed to waste bandwidth. [...] It shouldn't affect e.g. rewinding, if all buffers are PCM and just the in-transit data is compressed.
>
> Except of course, it affects all the transports of the native protocol, not just TCP.

Affects how?
Surely we'd never enable compression over unix sockets or shm.

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/pulseaudio/pulseaudio/issues/483.