Summary: Call should be able to cope with a conference layout where we're the signalling/media focus
Product: Telepathy
Component: tp-spec
Reporter: Sjoerd Simons <sjoerd>
Assignee: Telepathy bugs list <telepathy-bugs>
QA Contact: Telepathy bugs list <telepathy-bugs>
Status: RESOLVED MOVED
Severity: normal
Priority: medium
CC: david.laban, olivier.crete
Version: unspecified
Hardware: Other
OS: All
Whiteboard: Call-later
Description
Sjoerd Simons
2010-06-24 05:06:51 UTC
For this, you should have only one Stream, one Endpoint and one CodecOffer per Content (since you're really negotiating with the server). We probably want RemoteSSRCs to be a map of contact -> SSRC on the Content instead of putting it on the Endpoint; I believe this would match RFC 4575 better.

Actually, if the remote side does audio mixing, all we need is the handle -> SSRC map on the Content so we can match the CSRCs, and we can even do fancier things like: http://tools.ietf.org/html/draft-ietf-avtext-mixer-to-client-audio-level-01

Maybe we want to have MediaDescriptions with only a Contact/SSRC for the CSRCs, and an MD with Contact=0 for the mixer?

oh ffs. Why do my comments keep disappearing? When I added myself to the CC list, my comment was supposed to be:

"I think that ocrete's comment addresses the 'someone else is acting as a mixer' case, but not the 'we are acting as a mixer' case. I'm thinking that we could use MediaDescriptions to get the CM to request that the streaming implementation mix two streams together.

Enum: MD.I.LocalMediaMixing.Type:
  * Non-mixed (1:1 or Muji-style)
  * LocallyMixed (useful for audio)
  * LocallyAggregated (useful for video)

au: MD.I.LocalMediaMixing.Peers:
  * list of contacts that need to be mixed to/from this party, or 0 for "all"

Under this model, we would have one stream per participant, and mixing between parties that requested it (in most cases everyone)."

Does this make sense?

It's more complicated than that: we can negotiate different codecs with different remote parties if we're a mixer (since we must decode/encode too). It might be easier to express it as multiple Contents, but I'm really not certain how to do it nicely. Or maybe just have multiple Call channels and have the UI do the mixing?

http://cgit.collabora.com/git/user/alsuren/telepathy-spec.git/commit/?h=local-descriptions-28718&id=ed6d69a34f3e546305632ed1cbca2135e7bff7a0 reverts a change by ocrete and makes some clarifications.
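To make the two ideas above concrete, here is a minimal, hypothetical Python sketch. It models the proposed MD.I.LocalMediaMixing.Type enum values and shows how a per-Content contact -> SSRC map could be inverted to resolve the CSRC list of a mixed RTP packet back to contact handles. The enum values, the `contacts_for_csrcs` helper, and the example handles/SSRCs are all illustrative assumptions, not part of the Telepathy spec.

```python
from enum import IntEnum


class LocalMediaMixingType(IntEnum):
    """Hypothetical values for the proposed MD.I.LocalMediaMixing.Type enum."""
    NON_MIXED = 0           # 1:1 or Muji-style, no local mixing
    LOCALLY_MIXED = 1       # mix streams locally (useful for audio)
    LOCALLY_AGGREGATED = 2  # aggregate streams locally (useful for video)


# Per-Content contact handle -> SSRC map, as proposed for RemoteSSRCs.
# Handles and SSRC values here are made up for illustration.
remote_ssrcs = {
    1001: 0xDEADBEEF,
    1002: 0xCAFEBABE,
}


def contacts_for_csrcs(csrcs, ssrc_map):
    """Map the CSRC list of a mixed RTP packet back to contact handles.

    Returns None for CSRCs we have no mapping for (e.g. a participant
    the server has not told us about yet).
    """
    by_ssrc = {ssrc: handle for handle, ssrc in ssrc_map.items()}
    return [by_ssrc.get(csrc) for csrc in csrcs]


print(contacts_for_csrcs([0xCAFEBABE, 0xDEADBEEF], remote_ssrcs))
```

With a map like this on the Content, the streaming implementation can attribute each contribution in a server-mixed stream to a contact, which is also what the mixer-to-client audio-level draft linked above relies on.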
This should make it easier to support this later. (I think all we need at this point is some client caps or something, plus something that implements it.)

Note that http://cgit.collabora.com/git/user/alsuren/telepathy-spec.git/commit/?h=local-descriptions-28718 has been rebased on top of alsuren/call, so that link no longer points to the correct commit id (a one-character change was required to make it compile again).

I guess this is ok, but I'd like to see some kind of implementation of at least the client-side bit.

Merged to alsuren/call, but keeping the bug open until we have an implementation.

(In reply to comment #8)
> Merged to alsuren/call, but keeping the bug open until we have an implementation.

Is this part of Call1? If it is, I guess we can close this bug.

The full support for this isn't in Call yet, but it can be added later on top of the existing spec.

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/telepathy/telepathy-spec/issues/73.