Our current StreamedMedia interface is insufficient for Muji (multi-user Jingle in MUCs), and also has the sub-objects-without-object-paths anti-pattern. We should replace it with a better API; Sjoerd is already working on one.
http://people.freedesktop.org/~sjoerd/telepathy-spec-new_media_spec/spec/
Just one small comment, as I only took a quick look at the API:

- Ringing() seems more like a signal name than a method name. I suggest changing it to SetRinging(), or SetState(LocalStateEnum) in case we can add more local states later.
Simon and I had a small spec meeting about the spec last week; the following are the notes from it.

* Hangup (ss) or (uss) => Close implies unexpected channel closure.
  - might be deprecated by TP 1.0

The Hangup method on the Call channel should be able to take an error. We need to make up our minds whether this should be (uss) (error enum, D-Bus error string, debug message) or just (ss) (D-Bus error string, debug message). The former has the advantage that we use the enum for categories that rarely get extended, so that applications can fall back to using the enum value if they don't recognise the (more detailed) D-Bus error string. (A hedged sketch of this, and of the proposed CallState triple, follows after these notes.)

* AddContents:
  * flesh out the rationale for content names.
  * Is E_INVAL only for content types? E_INVAL should maybe be NotCapable instead.

What error should be reported when a content is added with a media type that the CM doesn't support? Also, what error should be reported if a media type is added which isn't possible in this call (can't add a second video stream, can't add a content when the content set isn't mutable, etc.)?

* InitialTransport: s => needs to be given a type; we might have one in the old API already.

* Make it very clear that we mandate that either InitialAudio or InitialVideo is mandatory.

* Need a way to expose RTP profiles (AVP/AVPF).

* Capability tokens need to be nicely namespaced; also add a capability token for the shared memory transport (as implemented by Farsight).

* HardwareStreaming needs to be specced as an immutable property.

* Add a rationale for hardware streaming (no need to start stream-engine, open a webcam, etc. if it's streamed by hardware).

* Rumour has it that some stuff is partially hardware-streamed (e.g. GSM for audio, SIP for video); it would be good if we could verify that, although the wording is already such that this is allowed.

* Un-namespaced a{sv} keys should be in the same style as GObject properties.

* Ponder poking the possible handler before approvers are running (so we can send candidates while the call isn't approved yet).

With ICE you want to start exchanging candidates as soon as the call comes in (in other words, while the phone is ringing, which is when the call is at the approver stage). This means that ideally the handler would already have the channel. If another handler is chosen, it could restart the negotiation by doing an ICE restart (although hopefully this will be uncommon...). So what we might need is an AddRequest-like thing from mission-control, to warn handlers that they might get a channel.

* Document and ponder when to actually start the outgoing call.

For VoIP calls this is not a problem: you can usually only start calling once you've given all the contents a set of codecs. But for calls with hardware streaming this might be a bit more tricky; it might be undesirable to start dialing before the handler has popped up. Maybe the handler should also Accept() outgoing calls (and they start off in the pending call state)?

* Work out the division of responsibility between approver and handler.

In other words, should the Handler or the Approver call Accept? IMHO the handler should, so you can make sure that you have a call UI once you accept the call.

* Mention that CallState is exhaustive?
* CallState: add 0 for unknown.

The enum in the CallState should be complete, as in, it should encompass the complete state machine, with extra information being part of the a{sv}. Also, we should change the call state to (uua{sv}), aka (State, Flags, Info),
where minimal VoIP UIs should be able to get the basic information from the State + Flags, and more complete handlers can get extra information from the Info dict.

* For 1-1 calls, have an error state for the self-handle if they missed the call.

* Explicitly mention that the target handle is the person you initially called.

This is important for conference calls, where the person that initially called you or invited you might not be in the channel anymore, so handlers shouldn't rely on this.

* Invent a channel state for the health of the channel.

To make it easier to understand the overall state of the channel, we should have a ChannelState property with an actor, so you only have to look at this property to decide, for example, that the call ended because the other side rejected it, without needing to dig into the CallState property.

* Have an InitialContent boolean instead of Disposition.

The Disposition enum has <none>, <early media>, <initial>, but on protocols like SIP you can't actually know which stream is early media, so just <none>, <initial> might be better. In that case it can be replaced by a simple boolean.

* StreamAdded might become plural.

* Stream: Document what the PendingSend sending state means for the self-handle.

The PendingSend state for the self-handle indicates that the other side requested our side to start sending media, which can be done by calling SetSending. When a call is accepted, all _initial_ contents with streams in the PendingSend state for the self-handle are automatically set to sending. E.g. on an incoming call it means you need to Accept to start the actual call; on an outgoing call it might mean you need to accept before actually starting the call.

* Stream: Rename Senders to Members.

On the Stream interface the Senders property should be renamed to Members, for a bit more clarity.

* Need to ponder calls from anonymous numbers.
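To make the (uss) Hangup proposal and the (State, Flags, Info) triple concrete, here is a minimal dbus-python sketch of how a client might hang up with a reason and read the proposed CallState. The bus name, object path and draft interface name are placeholders; the member names and signatures are only the ones proposed in these notes, not a finalised API.

    import dbus

    bus = dbus.SessionBus()
    # Placeholder bus name and object path; a real client gets these from
    # the channel dispatcher or from NewChannels.
    chan = bus.get_object('org.freedesktop.Telepathy.Connection.example',
                          '/org/freedesktop/Telepathy/Connection/example/CallChannel')

    CALL_IFACE = 'org.freedesktop.Telepathy.Channel.Type.Call.DRAFT'  # assumed draft name

    # The proposed (uss) Hangup: (reason enum, D-Bus error name, debug message).
    # 0 is used here as a stand-in "unknown/none" reason value.
    call = dbus.Interface(chan, CALL_IFACE)
    call.Hangup(dbus.UInt32(0),
                'org.freedesktop.Telepathy.Error.Cancelled',
                'User hung up from the UI')

    # The proposed CallState as (uua{sv}): minimal UIs look only at State and
    # Flags, richer handlers dig into the Info dict.
    props = dbus.Interface(chan, dbus.PROPERTIES_IFACE)
    state, flags, info = props.Get(CALL_IFACE, 'CallState')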
Comments by Andre which clashed with mine (thanks, Bugzilla):

- Ringing() seems more like a signal name than a method name. I suggest changing it to SetRinging(), or SetState(LocalStateEnum) in case we can add more local states later.
oFono supports Deflect for voice calls. This may be something the Call interface wants to support as well. From the docs:

    void Deflect(string number)

        Deflects the incoming or waiting call to number given in the
        argument. This method is only valid if the call is in "incoming" or
        "waiting" state and the Call Deflection supplementary service is
        subscribed to. This functionality is generally implemented by using
        the +CHLD=4 * NUMBER command. This method should not be confused
        with the Transfer() method.
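For context, a minimal dbus-python sketch of what deflecting a call looks like against oFono on the system bus. The modem path and the target number are placeholders; the VoiceCallManager/VoiceCall interface names and GetCalls() are from oFono's documented telephony API as I recall it, not from anything in this spec.

    import dbus

    bus = dbus.SystemBus()

    # Placeholder modem path; real paths come from org.ofono.Manager.GetModems().
    modem_path = '/phonesim'
    manager = dbus.Interface(bus.get_object('org.ofono', modem_path),
                             'org.ofono.VoiceCallManager')

    # GetCalls() returns (call object path, properties) pairs; pick the incoming one.
    for path, properties in manager.GetCalls():
        if properties.get('State') == 'incoming':
            call = dbus.Interface(bus.get_object('org.ofono', path),
                                  'org.ofono.VoiceCall')
            call.Deflect('+15551234567')  # send the incoming call elsewhere
            break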
(In reply to comment #5)
> oFono supports Deflect for voice calls.

That's essentially a flavour of Transfer and/or Forwarding; I think it's necessary to make it supportable later, but not to support it now. Cloned as Bug #25295.
Assorted comments about this spec:

I've hijacked the spec branch as smcv/call to make editorial changes, clarifications, etc. that I'm reasonably sure about. See that branch:

* http://git.collabora.co.uk/?p=user/smcv/telepathy-spec-smcv.git;a=shortlog;h=refs/heads/call
* http://people.freedesktop.org/~smcv/telepathy-spec-call/spec/

Stream_Transport_Type takes values Raw_UDP, ICE, GTALK_P2P, MSN and WLM2009. Why not Raw_UDP, ICE_UDP, GTalk_P2P, WLM_8_5 and WLM_2009, which would be the obvious mapping from the MediaSignalling transports? Is there some other source we're trying to remain consistent with? (The handler-capability-tokens in Call should be kept in sync with these.)

Is Call_Flag_Ringing meant to work on outgoing 1-1 calls, or only on incoming calls?

Does Call_Member_Flag_Ringing make sense in conference calls? Is it really useful to be able to have a conference call in which Will is a member and Sjoerd has the Ringing flag?

If Call_Flag_Held means locally held (I believe it does), could we rename it accordingly?

I believe I've addressed all the points brought up in the spec meeting, and applied "spec-linting" to the Call channel type. I haven't looked at the subsidiary objects in detail yet - it's taking longer than I expected!
(In reply to comment #7)
> Assorted comments about this spec:
>
> I've hijacked the spec branch as smcv/call to make editorial changes,

By hijacking you mean we decided that you should do some linting? :)

> clarifications, etc. that I'm reasonably sure about. See that branch:
>
> * http://git.collabora.co.uk/?p=user/smcv/telepathy-spec-smcv.git;a=shortlog;h=refs/heads/call
> * http://people.freedesktop.org/~smcv/telepathy-spec-call/spec/
>
> Stream_Transport_Type takes values Raw_UDP, ICE, GTALK_P2P, MSN and WLM2009.
> Why not Raw_UDP, ICE_UDP, GTalk_P2P, WLM_8_5 and WLM_2009, which would be the
> obvious mapping from the MediaSignalling transports? Is there some other source
> we're trying to remain consistent with?

Farsight2, obviously. Need to check with them why they use MSN as a stream type instead of calling it WLM_85; probably historical raisins.

> Is Call_Flag_Ringing meant to work on outgoing 1-1 calls, or only on incoming
> calls?

Only incoming; not sure what the meaning would be on outgoing calls. In the outgoing case the target ID has a flag saying it's ringing.

> Does Call_Member_Flag_Ringing make sense in conference calls? Is it really
> useful to be able to have a conference call in which Will is a member and
> Sjoerd has the Ringing flag?

You invited me, my phone is ringing but I didn't pick up yet?

> If Call_Flag_Held means locally held (I believe it does), could we rename it
> accordingly?

Sure, why not. The more confusion we can take away about which side is holding, the better.

> I believe I've addressed all the points brought up in the spec meeting, and
> applied "spec-linting" to the Call channel type. I haven't looked at the
> subsidiary objects in detail yet - it's taking longer than I expected!

It's almost like it's a big channel type :)
(In reply to comment #7)
> Stream_Transport_Type takes values Raw_UDP, ICE, GTALK_P2P, MSN and WLM2009.
> Why not Raw_UDP, ICE_UDP, GTalk_P2P, WLM_8_5 and WLM_2009, which would be the
> obvious mapping from the MediaSignalling transports? Is there some other source
> we're trying to remain consistent with? (The handler-capability-tokens in Call
> should be kept in sync with these.)

I've changed these in my branch, except that I left ICE as-is rather than renaming to ICE_UDP in case we want to use it for ICE-TCP. (Do we?)

(In reply to comment #8)
> (In reply to comment #7)
> > Is Call_Flag_Ringing meant to work on outgoing 1-1 calls, or only on incoming
> > calls?
> Only incoming, not sure what the meaning would be on outgoing calls. In the
> outgoing case the targetid has a flag saying its ringing

I've renamed it to Call_Flag_Locally_Ringing and removed the FIXME.

> > Does Call_Member_Flag_Ringing make sense in conference calls? Is it really
> > useful to be able to have a conference call in which Will is a member and
> > Sjoerd has the Ringing flag?
>
> You invited me, my phone is ringing but i didn't pick up yet ?

Rationale added.

> > If Call_Flag_Held means locally held (I believe it does), could we rename it
> > accordingly?
>
> Sure, why not. the more confusion we can take away about which side is holding
> the better

Done. telepathy-qt4 already had a similar change, in its mapping from generated code to high-level API.

More questions/gaps, having looked at Content and Stream in detail:

* What significance/rationale does the Content.Name have? I assume that this is the mostly-opaque content name from Jingle? Do clients really need to be able to specify this? Do they have to be able to cope with being given a Name that wasn't what they asked for?

* Content.Disposition seems to be a bit of a mixed bag. Should it really be an enum, or should it be a flag-set?

* Please explain what's going on with "early media"?

* Is Call.Accept canonically called Accept (as in Call) or Answer (as referenced in Content)? I've assumed the former.

* Should Sending_State have a state for "I've asked the remote contact to shut up, but they haven't", for symmetry with Pending_Send?

* Am I right in my clarifications of Stream and sending states? I got quite confused...

Still to spec-lint: Content.I.Media, Stream.I.Media and Endpoint.
Looking at Content.I.Media and CodecOffer (see my call-media branch for editorial changes and clarifications):

> FIXME: How should the streaming implementation know when it is its turn
> to set the codecs.

Well? :-)

Sjoerd says we also need to think about what happens if the codecs are changed via SetCodecs() but a remote contact doesn't like the change.

Call_Content_Codec_Offer also has FIXMEs for:

* add Accepted and Rejected signals?
* add error codes and strings to Reject()
(In reply to comment #9)
> (In reply to comment #7)
> > Stream_Transport_Type takes values Raw_UDP, ICE, GTALK_P2P, MSN and WLM2009.
> > Why not Raw_UDP, ICE_UDP, GTalk_P2P, WLM_8_5 and WLM_2009, which would be the
> > obvious mapping from the MediaSignalling transports? Is there some other source
> > we're trying to remain consistent with? (The handler-capability-tokens in Call
> > should be kept in sync with these.)
>
> I've changed these in my branch, except that I left ICE as-is rather than
> renaming to ICE_UDP in case we want to use it for ICE-TCP. (Do we?)

Probably better to just call it ICE_UDP for clarity. If ICE-TCP ever happens we probably won't use it on VoIP calls.

> More questions/gaps, having looked at Content and Stream in detail:
>
> * What significance/rationale does the Content.Name have? I assume that this is
> the mostly-opaque content name from Jingle?

Yes.

> Do clients really need to be able to specify this?

My nefarious plan is to start putting better names in it from the UI, because I want a nice demo UI that can send both a camera and, say, slides. In the case of having multiple streams from the same content, these names suddenly become a lot more useful for the UI.

> Do they have to be able to cope with being given a Name that
> wasn't what they asked for?

Yes, the Name is only advisory. One shouldn't rely on it in any way.

> * Content.Disposition seems to be a bit of a mixed bag. Should it really be an
> enum, or should it be a flag-set?

Enum is correct. Olivier had the correct remark that the protocol doesn't always tell you things have early media, so maybe it would be better to just have an Initial boolean property.

> * Please explain what's going on with "early media"?

We just don't know... Jingle has a way of telling you that a stream is going to use early media; SIP doesn't in any way, and just sends you media whether you want it or not. Which makes me think we shouldn't need to distinguish this on the non-media interface after all.

> * Is Call.Accept canonically called Accept (as in Call) or Answer (as
> referenced in Content)? I've assumed the former.

Accept. It used to be Answer at some point, I think, and they got out of sync. Especially now that both sides need to call Accept(), Answer doesn't make sense anymore :)

> * Should Sending_State have a state for "I've asked the remote contact to shut
> up, but they haven't", for symmetry with Pending_Send?

Maybe. Maybe have <tp:enumvalue suffix="Pending_Shutup" value="4"> or somesuch? :)

> * Am I right in my clarifications of Stream and sending states? I got quite
> confused...

Seems correct.

> Still to spec-lint: Content.I.Media, Stream.I.Media and Endpoint.

I've merged your branch as-is. The one comment I have is that we should probably make it so that there is always at least one content, similar in spirit to http://xmpp.org/extensions/xep-0166.html#def-action-content-remove: if you're about to remove the last content, you should just end the call instead. The open question here is whether trying to remove the last content will result in an error or in the call being automagically hung up.
Stream.I.Media (also see the updated smcv/call branch for some editorial fixes):

* ServerInfoRetrieved and RetrievedServerInfo are ambiguous member names. Without looking at the spec, try to tell me which one is the signal and which one is the boolean property :-) Can we disambiguate these better?

* It is claimed that STUNServers cannot change once the stream has been created. This seems likely to be a lie, given that we now have change notification of a sort? It should have proper change notification, though, if it can change (perhaps just an "added" signal).

* Likewise, what is RelayInfo's change notification?

* Does it ever make sense to remove a local candidate? If it does, we'll need a LocalCandidatesRemoved signal.

* What is LocalCredentials and what is its rationale?

* LocalCredentials will need to be a named <tp:struct> (or a pair of string properties), otherwise telepathy-qt4 will be unable to bind it.

* How many times can LocalCredentialsSet happen? 0-1? 0-infinity?

* Does SetCredentials() change LocalCredentials? How many times can it be called?

* What is a candidate anyway? What is a component anyway? (Perhaps this interface is only meant for use by people who speak fluent RTP, but I'm only dimly aware of what a candidate is... see the illustrative sketch after this comment.)

* Am I right in thinking that Stream.I.Media deals with local candidates (ways in which the local user tells remote users that we can perhaps be contacted) while remote candidates (ways in which remote users tell us they can perhaps be contacted) are all dealt with by Endpoint? The (missing) introductory docstring should say this sort of thing.

* If I infer correctly that LocalCredentials, LocalCandidates come from the streaming implementation and nowhere else, do they actually need to be readable at all, or can they be "write-only" (i.e. not exist as properties at all, only as setter methods)?

* In Candidate_Info: we conventionally use "g-object-case" for un-namespaced bags of strings, and reserve CamelCase for D-Bus properties. Or is there some external thing we're being consistent with?

* The descriptions in Candidate_Info aren't sufficient for me to understand what they're for.
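Since "what is a candidate anyway?" keeps coming up: in ICE terms (RFC 5245) a candidate is one transport address on which a party can potentially receive media (a host address, a server-reflexive address discovered via STUN, or a relayed address allocated on a TURN server), a component is one numbered socket within a stream (conventionally 1 for RTP and 2 for RTCP), and the credentials are the ICE username fragment and password. The sketch below is purely illustrative data in that vocabulary; the field names are ad hoc, not the spec's Candidate/Candidate_Info struct members, which is exactly what the comment above says still needs documenting.

    # Illustrative only: what a streaming implementation knows about its local
    # candidates and credentials, expressed as plain data. Field names here are
    # ad hoc, not the draft's struct members.
    local_credentials = {
        'username': 'hGlE',        # ICE username fragment (ufrag)
        'password': 'Qz9a0cXvN2',  # ICE password
    }

    local_candidates = [
        # component 1 = RTP, component 2 = RTCP
        {'component': 1, 'ip': '192.168.1.10',  'port': 40572, 'type': 'host'},
        {'component': 2, 'ip': '192.168.1.10',  'port': 40573, 'type': 'host'},
        # discovered through a STUN server (server-reflexive)
        {'component': 1, 'ip': '203.0.113.7',   'port': 31000, 'type': 'srflx'},
        # allocated on a TURN relay
        {'component': 1, 'ip': '198.51.100.20', 'port': 52114, 'type': 'relay'},
    ]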
Stream.Endpoint:

* nothing is documented or cross-referenced yet
* SelectedCandidate / CandidateSelected are another "which is which?" property/signal pair
* it's not clear why Transport is needed, since it seems to duplicate Stream.I.Media.Transport

From IRC discussion:

17:13 < sjoerd> SIP has a fallback path from ice to raw-udp
17:13 < sjoerd> You send your offer with one raw-udp candidate and a set of ice ones
17:13 < sjoerd> if the other side doesn't support raw-udp it'll accept with one raw-udp candidate
17:14 < sjoerd> and you'll actually use raw-udp
17:14 < sjoerd> I'm vaguely pondering of having the STream transport stay ice in that case, but have the endpoint say raw-udp
17:14 < sjoerd> not entirely sure about it yet
17:15 < sjoerd> We're going to require that every streaming implemetation can do the fallback to raw-udp and always give you one candidate to use as the raw-udp candidte as well

That last point should be written into the spec, if it's what we mean.
Current draft merged to master, will be in 0.19.0.
For the record, a multi-party audio/video conference service that we could consider supporting in some hypothetical future. It may be worth taking a look at it to be sure that the future spec will be able to cope with it.

http://www.adiumxtras.com/index.php?a=xtras&xtra_id=4959
http://www.mebeam.com/
Some comments while working on the Call example CM:

* InitialTransport should be specified to be "" where not applicable, and in particular on CMs with hardware streaming
* The immutable properties on Channel, Stream and Content should all be annotated as such
More thoughts: Sjoerd and I agreed that the RequestableChannelClasses should be:

* fixed = { Call, CONTACT, InitialAudio=TRUE }, allowed = { InitialVideo=TRUE }
* fixed = { Call, CONTACT, InitialVideo=TRUE }, allowed = { InitialAudio=TRUE }

i.e. clients aren't allowed to make outgoing calls that have neither initial audio nor initial video (a hedged sketch of these classes appears below). This doesn't match StreamedMedia, and doesn't match telepathy-qt4's current handling of StreamedMedia capabilities - it'll be important to check at review that it gets this right.

The initial streams should have the Initial disposition, even though that's not interesting on locally-initiated streams.
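As a rough illustration of what those two requestable channel classes and a matching channel request would look like on the wire, here is a Python sketch using fully-qualified property names. The Call channel type and the InitialAudio/InitialVideo key names are written against the draft namespace, so treat the exact strings as assumptions rather than the finalised spec.

    # Assumed draft interface name; the final namespace may differ.
    CALL = 'org.freedesktop.Telepathy.Channel.Type.Call.DRAFT'
    CHANNEL = 'org.freedesktop.Telepathy.Channel'
    HANDLE_TYPE_CONTACT = 1

    # The two agreed RequestableChannelClasses: (fixed properties, allowed keys).
    requestable_channel_classes = [
        ({CHANNEL + '.ChannelType': CALL,
          CHANNEL + '.TargetHandleType': HANDLE_TYPE_CONTACT,
          CALL + '.InitialAudio': True},
         [CALL + '.InitialVideo']),
        ({CHANNEL + '.ChannelType': CALL,
          CHANNEL + '.TargetHandleType': HANDLE_TYPE_CONTACT,
          CALL + '.InitialVideo': True},
         [CALL + '.InitialAudio']),
    ]

    # A request for an audio+video call matches the first class; a request with
    # neither InitialAudio nor InitialVideo matches nothing and must be rejected.
    request = {
        CHANNEL + '.ChannelType': CALL,
        CHANNEL + '.TargetHandleType': HANDLE_TYPE_CONTACT,
        CHANNEL + '.TargetID': 'sjoerd@example.com',
        CALL + '.InitialAudio': True,
        CALL + '.InitialVideo': True,
    }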
14:46 < smcv> sjoerd: is Call meant to have a way to remove contents? is Content meant to have a way to remove streams?
14:47 < smcv> sjoerd: at the moment it seems neither does
14:47 < smcv> sjoerd: I suppose Stream could interpret SetSending(FALSE), RequestReceiving(everyone, FALSE) as "remove yourself", but...

For now, I'll apply #if 0 to the relevant bits.
RequestReceiving takes a handle argument, so I think it should be documented to raise InvalidHandle (if the handle is completely invalid) and InvalidArgument (if the handle is valid, but is not involved in this stream).
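A minimal sketch of how a connection manager written with dbus-python might map those two cases onto the existing Telepathy error names. The handle-validation helpers and the stream attributes are hypothetical placeholders; only the InvalidHandle and InvalidArgument error names are the standard org.freedesktop.Telepathy.Error members being proposed here.

    import dbus

    class InvalidHandle(dbus.DBusException):
        _dbus_error_name = 'org.freedesktop.Telepathy.Error.InvalidHandle'

    class InvalidArgument(dbus.DBusException):
        _dbus_error_name = 'org.freedesktop.Telepathy.Error.InvalidArgument'

    def request_receiving(stream, contact_handle, receive):
        """Hypothetical CM-side handler for Stream.RequestReceiving."""
        if not stream.connection.handle_is_valid(contact_handle):  # placeholder check
            raise InvalidHandle('%u is not a valid contact handle' % contact_handle)
        if contact_handle not in stream.members:                   # placeholder attribute
            raise InvalidArgument('contact %u is not involved in this stream'
                                  % contact_handle)
        stream.set_remote_sending_requested(contact_handle, receive)  # placeholder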
Should the peer(s) be removed from CallMembers on call termination? For now, my example code does remove them; not doing so would mean deleting about 10 LoC.
If we're treating the content names as significant, we should have a way to name the initial contents... at the moment my example CM (Bug #25416) treats them like nameless contents, and uses "audio" and "video". However, those aren't localized or anything. If a content of the same name already exists it'll use "foo (1)", "foo (2)" and so on.
Interactions with other interfaces:

Chan.I.Hold is equally applicable to this API, and Chan.I.Hold.GetHoldState() (etc.) are a more informative version of (Chan.T.Call.CallFlags & Locally_Held). Should we just drop Locally_Held?

Chan.I.CallState is obsoleted by Chan.T.Call.CallMembers.

Chan.I.DTMF has methods that are in terms of integer stream IDs. We could either reimplement it as a channel interface in terms of streams' object paths, or reimplement it as a stream (or even content) interface? (A sketch of the contrast follows below.)

MediaSignalling, Media.StreamHandler, Media.SessionHandler are obsoleted (that's a large part of the point).
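To make the DTMF question concrete, a dbus-python sketch contrasting the existing stream-ID-based interface with a possible content-based reimplementation. The Channel.Interface.DTMF StartTone/StopTone members are from the current spec as I recall their signatures; the content-level variant (interface name, object path, signature) is purely hypothetical and only illustrates the choice being discussed.

    import dbus

    bus = dbus.SessionBus()
    # Placeholder bus name and paths.
    chan = bus.get_object('org.freedesktop.Telepathy.Connection.example',
                          '/org/freedesktop/Telepathy/Connection/example/MediaChannel')

    # Today: DTMF is addressed by StreamedMedia's integer stream IDs.
    dtmf = dbus.Interface(chan, 'org.freedesktop.Telepathy.Channel.Interface.DTMF')
    dtmf.StartTone(dbus.UInt32(1), dbus.Byte(5))   # stream ID 1, event 5 = digit "5"
    dtmf.StopTone(dbus.UInt32(1))

    # Hypothetical Call-era variant: put DTMF on the Content object itself, so
    # no stream identifier is needed at all (one of the options above).
    content = bus.get_object('org.freedesktop.Telepathy.Connection.example',
                             '/org/freedesktop/Telepathy/Connection/example/CallChannel/audio')
    content_dtmf = dbus.Interface(
        content, 'org.freedesktop.Telepathy.Call.Content.Interface.DTMF.DRAFT')  # hypothetical
    content_dtmf.StartTone(dbus.Byte(5))  # hypothetical signature, no stream ID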
(In reply to comment #3)
> * Stream: Rename Senders to Members
>
> On the Stream interface the Senders property should be renamed to Members for a
> bit more clarity

I support this. In the documentation there is the following bit:

    Media sent on this stream should be assumed to be received, directly or
    indirectly, by every other contact in the Senders mapping.

P.S. And while you're at it, rename Farsight transmitters to transceivers :-P
Is there any connection between Call.MutableContents and the ability to successfully call Stream.RequestReceiving? This is probably a different case, so should a property be added to Stream indicating that the directionality of remote participants cannot be requested to change? The use case: SIP, where session mutability (re-INVITE requests) could have issues with certain servers.
I just noticed this bug exists. Some comments on the candidate API comments:

(In reply to comment #12)
> * It is claimed that STUNServers cannot change once the stream has been
> created. This seems likely to be a lie, given that we now have change
> notification of a sort? It should have proper change notification, though, if
> it can change (perhaps just an "added" signal).
> * Likewise, what is RelayInfo's change notification?

There should probably be separate signals for STUN servers and RelayInfo changing, and the signal should probably contain the relevant information instead of requiring another round trip.

> * Does it ever make sense to remove a local candidate? If it does, we'll need a
> LocalCandidatesRemoved signal

No, it never makes sense to remove one candidate. The only case where you want to remove candidates is when doing an ICE restart, and that means removing all candidates. We probably want a special API for that (to be added later, I guess). Doing an ICE restart would tell tp-fs to give all the candidates again.

> * How many times can LocalCredentialsSet happen? 0-1? 0-infinity?

Once, but they will be re-set if an ICE restart is requested.

By the way, how does the client set the local credentials? It should be a method, not a signal.

> * Does SetCredentials() change LocalCredentials? How many times can it be
> called?
>
> * What is a candidate anyway? What is a component anyway? (Perhaps this
> interface is only meant for use by people who speak fluent RTP, but I'm only
> dimly aware of what a candidate is...)

Yeah, that should be documented, with references to the ICE almost-RFC.

> * If I infer correctly that LocalCredentials, LocalCandidates come from the
> streaming implementation and nowhere else, do they actually need to be readable
> at all, or can they be "write-only" (i.e. not exist as properties at all, only
> as setter methods)?

I tend to agree here that they can't be re-used anyway, so there is no reason to make them readable.
While implementing Call support in tp-qt4 I came up with some questions/concerns, as follows:

- Call.Stream.RequestReceiving should accept more than one contact; I want all contacts to start/stop sending media to me, for example.
- IMHO Call.Stream.Senders should be called Call.Stream.Members; they are members of the stream and may not be sending anything.
- Add Call.RemoveContent.
- IMHO Call.Ringing should be called Call.SetRinging, as it's an action, not a signal.

Random ideas/doubts:

- Looking at the Content API, I believe streams will be added/removed automatically when someone joins/leaves the channel. Example:
  - A calls B and adds a content of type Audio and a content of type Video; a stream will be created automatically for each content and StreamAdded will be emitted accordingly.
  - Now A or B invites C to join the channel; I believe a new stream will be created for each content to let user C join the conversation. If that is not the case, how can we add streams for C? And what happens if C does not support audio calls, only video? Will the Audio content just be ignored, or will it fail because C cannot join the conversation?
- Right now we are using the DTMF interface to support DTMF, but it requires a stream ID. I believe the best thing to do here is to add an ID to Call.Stream, or even Call.Content in case we want to send DTMF signals to all streams in a content. Adding a unique ID per stream/content would also make it easier for high-level API implementations, as SM already uses one and we can use it as a key for maps/hashes when a content/stream is added/removed/changed. Using the Stream object path is a solution, but SM uses integer IDs, so I just removed the usage of SM.Stream.ID as a key for now.
- There is no equivalent of Stream.State (none, disconnected, connected) in the Call API, so I just dropped this from the tp-qt4 high-level API. Maybe we need to have it and always return Connected for Call channels.
(In reply to comment #26)
> - There is no equivalent for Stream.State (none, disconnected, connected) in
> the call API, so I just dropped this in tp-qt4 high-level API.

!!!

Please don't break the ABI/API of (what is now) a shared library until our next "flag day" in a few months, at which point the SONAME should change (libtelepathy-qt4.so.1). AIUI the rationale for no longer having State was that it wasn't practically useful, but that's no excuse for API breakage; pretend the stream is CONNECTED, and deprecate the accessor if you want.
(In reply to comment #26)
> - There is no equivalent for Stream.State (none, disconnected, connected) in
> the call API, so I just dropped this in tp-qt4 high-level API. Maybe we need to
> have it and return Connected always for Call channels.

There is actually a stream state on the Endpoints interface.
For ICE -> raw-udp fallback, we probably want to have an API like "ForcedRemoteCandidates" or something, which overrides the regular remote candidates. That API should be the only one called in the raw-udp case (i.e. in raw-udp mode, the ICE candidates are ignored). In the Farstream API break, I plan to have raw-udp implement only force_remote_candidates() and not add_remote_candidates(); the second one will be only for ICE and ICE-like protocols. Yes, every ICE implementation needs to support raw-udp fallback, otherwise it's not compliant (i.e. it's not ICE).
Also, don't confuse this with SelectedCandidate; that is for telling the other side how the ICE negotiation has concluded (it prevents a race in the ICE negotiation), so it's unrelated to raw-udp fallback.
StreamedMedia's use of the Group interface means it has difficulty representing a call to yourself (which might seem stupid, but in XMPP, you can call another of your own resources - e.g. you're in Empathy calling your own N900). If we care about this usage, we should make sure Call can support it without getting confused. If we don't, we should explicitly declare that it's out of scope.
(In reply to comment #22)
> Interactions with other interfaces:
>
> Chan.I.DTMF has methods that are in terms of integer stream IDs. We could
> either reimplement it as a channel interface in terms of streams' object paths,
> or reimplement it as a stream (or even content) interface?

Is there a use case for having DTMF operate on a single stream within a channel (or content) interface, without affecting the other streams? In other words, is there the potential to have multiple audio streams (that are sending) within a channel, and wanting DTMF tones to only happen on one of them? I think figuring that out is necessary for determining whether DTMF should happen at the stream, content, or channel interface. I'm not convinced that passing around stream objects/IDs is desirable.

http://git.collabora.co.uk/?p=user/dilinger/telepathy-spec;a=summary has changes to DTMF to make it operate across all streams for a given channel, if that's the route we choose.
(In reply to comment #32)
> (In reply to comment #22)
> > Interactions with other interfaces:
> >
> > Chan.I.DTMF has methods that are in terms of integer stream IDs. We could
> > either reimplement it as a channel interface in terms of streams' object paths,
> > or reimplement it as a stream (or even content) interface?

It should either be an interface on the Channel (with no stream identifier) or on the Content; the same "data" is sent on all streams of one Content. I'd personally be in favour of putting it on the Content. In any regular call there will be only one audio Content, so no loss there, and in the case of a fancy application, the application can decide what it wants.
CodecOffer (and the NewCodecOffer signal) should offer a way for the streaming implementation to know which codecs changed and which didn't. It's not clear to me if it contains the codecs from everyone or just from those who joined/changed.
In my opinion, the CallMembers can be channel-specific handles if the Call is a Conference; we should explicitly say so, either here or in Conference, like so:

property CallMembers

    [...]

    If the Call implements Group and the Group members are channel-specific
    handles, the CallMembers SHOULD also be channel-specific handles.

    | In telephony, [... talk about switchboard lines and anonymous users].
    | In Muji, [... talk about XEP-0045].
I believe there needs to be a method for Muting a Call channel. The rtcom bindings have a Channel.Interface.Mute (for StreamedMedia). The methods it exposes are 'b GetMuteState()' and 'RequestMute(b Mute)', as well as a MuteStateChanged signal. I believe this should be a CallMemberFlag within Call. We can then have a Channel.Interface.Mute, similar to how the Hold interface is handled (I don't see much point in having a MuteStateChanged, though, since Call will raise a CallStateChanged signal when Mute changes. It will do it when Hold changes, for that matter, making Hold's HoldStateChanged signal redundant).
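For reference, a dbus-python sketch of what the rtcom-style Mute interface described above looks like from a client. Only GetMuteState, RequestMute and MuteStateChanged are taken from that description; the fully-qualified interface name, bus name and channel path are placeholders I've assumed for illustration.

    import dbus
    from dbus.mainloop.glib import DBusGMainLoop

    DBusGMainLoop(set_as_default=True)  # needed for signal delivery
    bus = dbus.SessionBus()

    # Placeholder bus name and channel path.
    chan = bus.get_object('org.freedesktop.Telepathy.Connection.example',
                          '/org/freedesktop/Telepathy/Connection/example/MediaChannel')

    # Assumed interface name; the rtcom bindings may namespace it differently.
    mute = dbus.Interface(chan, 'org.freedesktop.Telepathy.Channel.Interface.Mute')

    def on_mute_state_changed(muted):
        print('mute state is now', bool(muted))

    # Callback fires once a GLib main loop is running.
    mute.connect_to_signal('MuteStateChanged', on_mute_state_changed)

    if not mute.GetMuteState():   # b GetMuteState()
        mute.RequestMute(True)    # RequestMute(b Mute)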
(In reply to comment #36) > I believe there needs to be a method for Muting a Call channel. The rtcom > bindings have a Channel.Interface.Mute (for StreamedMedia). The methods it > exposes are 'b GetMuteState()' and 'RequestMute(b Mute)', as well as a > MuteStateChanged signal. > > I believe this should be a CallMemberFlag within Call. We can then have a > Channel.Interface.Mute, similar to how the Hold interface is handled (I don't > see much point in having a MuteStateChanged, though, since Call will raise a > CallStateChanged signal when Mute changes. It will do it when Hold changes, > for that matter, making Hold's HoldStateChanged signal redundant). > I've opened #26807 for this.
Another comment: you need to add a way for the streaming implementation to tell the CM that starting to send has failed (e.g. "I can't open the camera device because it's in use"). That should result in the change to Senders failing and the UI being informed of that, not in the video stream ending like in the old StreamedMedia.
Re org.freedesktop.Telepathy.Call.Content.Interface.Media, comment "FIXME: How should the streaming implementation know when it is its turn to set the codecs": there should be a boolean property similar to the current StreamHandler.CreatedLocally, that will tell the streaming engine to defer setting local codecs until a remote codec offer is received. This could be used to avoid unnecessary codec enumeration for a received call, and issues with remapping payload types. I don't think the Creator property on Content is good for this, as it would force the streaming implementation to be aware of handles and the channel object.
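A minimal sketch of the control flow such a boolean would enable in a streaming implementation. The defer_local_codecs flag and the helper functions are hypothetical placeholders standing in for whatever the spec ends up calling them; only the idea (skip local codec enumeration until a remote offer arrives) comes from the comment above, plus the SetCodecs method name from the draft.

    def wait_for_remote_codec_offer(content):
        """Placeholder: block/return once the CM signals a codec offer."""

    def enumerate_local_codecs():
        """Placeholder: ask the media stack which codecs it supports."""
        return []

    def negotiate_local_codecs(content, defer_local_codecs):
        """Hypothetical streaming-implementation logic for one Call content.

        defer_local_codecs stands in for the boolean property proposed above
        (analogous to StreamHandler.CreatedLocally): True means "wait for the
        remote codec offer before setting local codecs".
        """
        if defer_local_codecs:
            # Avoids pointless codec enumeration and payload-type remapping on
            # incoming calls: the remote offer arrives first and we answer it.
            wait_for_remote_codec_offer(content)
        else:
            content.SetCodecs(enumerate_local_codecs())  # SetCodecs is from the draft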
Some more thoughts:

* Add a signal to Stream.I.Media for the CM to request an ICE restart, which means the CM will forget all local candidates/credentials and the streaming implementation is expected to create new ones (but streaming will continue to the currently selected candidate).

* Add a SetLocalCredentials to Stream.I.Media (instead of always putting them in the stream); a hedged sketch follows below.

* ICE is now RFC 5245 and TURN is RFC 5766.

* Think about dropping WLM_8_5; the servers have been shut down.

* For Content.I.Media's codecs: a way to specify which AVPF extra reports are supported, probably as a{ss} like the extra codec parameters (but they are different and should not be mixed).

* For Content.I.Media: add a way to signal supported RTCP Extended Report blocks (aka RTCP XR, from RFC 3611), as a set of a{ss} LocalRtcpXr and RemoteRtcpXr properties on the Content, to be set with SetLocalRtcpXr(a{ss}) and notified by RemoteRtcpXr changed signals. That said, I haven't looked at every detail of RTCP XR, so I'm not 100% sure how this should look.
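To show how the SetLocalCredentials and SetLocalRtcpXr proposals above might look from the streaming implementation's side, here is a dbus-python sketch. The member names come from the proposals in this comment; the argument layout of SetLocalCredentials (ICE username fragment plus password), the interface names, the object paths and the example a{ss} contents are my assumptions.

    import dbus

    bus = dbus.SessionBus()
    # Placeholder bus name and stream/content paths.
    stream = bus.get_object('org.freedesktop.Telepathy.Connection.example',
                            '/org/freedesktop/Telepathy/Connection/example/CallChannel/Content1/Stream1')
    content = bus.get_object('org.freedesktop.Telepathy.Connection.example',
                             '/org/freedesktop/Telepathy/Connection/example/CallChannel/Content1')

    STREAM_MEDIA = 'org.freedesktop.Telepathy.Call.Stream.Interface.Media.DRAFT'    # assumed
    CONTENT_MEDIA = 'org.freedesktop.Telepathy.Call.Content.Interface.Media.DRAFT'  # assumed

    # Proposed: push the locally generated ICE credentials to the CM explicitly
    # (assumed to be ufrag + password) instead of embedding them in the stream.
    dbus.Interface(stream, STREAM_MEDIA).SetLocalCredentials('hGlE', 'Qz9a0cXvN2')

    # Proposed: advertise supported RTCP XR blocks as a string->string map;
    # the keys and values here are illustrative only.
    dbus.Interface(content, CONTENT_MEDIA).SetLocalRtcpXr(
        dbus.Dictionary({'rcvr-rtt': 'all', 'stat-summary': 'loss,dup,jitt'},
                        signature='ss'))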
*** Bug 20809 has been marked as a duplicate of this bug. ***
(In reply to comment #40)
> Some more thoughs:
>
> * Add an signal to Stream.I.Media for the CM to request an ICE restart, which
> means the CM will forget all local candidates/credentials and the streaming
> implementation is expected to create new ones (but streaming will continue to
> the currently selected candidate).

#28690

> * Add a SetLocalCredentials to Stream.I.Media (instead of always putting them
> in the stream)

#28689

> * ICE is now RFC 5245 and TURN RFC 5766
> * Think about dropping WLM_8_5, the servers have been shutdown

#28688

> * To Content.I.Media's codecs:
> * A way to specify which AVPF extra reports are supported. Probably as a{ss}
> like extra codec parameters (but they are different and should not be mixed).

#28687

> * To Content.I.Media:
> * Add a way to signal supported RTCP Extended Report blocks (aka RTCP XR from
> RFC 3611). As set of a{ss} LocalRtcpXr and RemoteRtcpXr on the Content. To be
> set with SetLocalRtcpXr(a{ss}) and notified by RemoteRtcpXr changed signals.
> That said, I haven't look at every details of RTCP XR, so I'm not 100% sure how
> this should look.

#28686
(In reply to comment #39)
> Re org.freedesktop.Telepathy.Call.Content.Interface.Media, comment "FIXME: How
> should the streaming implementation know when it is its turn to set the
> codecs": there should be a boolean property similar to the current
> StreamHandler.CreatedLocally, that will tell the streaming engine to defer
> setting local codecs until a remote codec offer is received.
> This could be used to avoid unnecessary codec enumeration for a received call,
> and issues with remapping payload types.
> I don't think the Creator property on Content is good for this, as it would
> force the streaming implementation to be aware of handles and the channel
> object.

The Creator property doesn't work very well, it seems (especially since in Muji there isn't really a creator in the first place); I'll probably remove that one. CreatedLocally is possible, but again, in the Muji case it's a bit more complex. Filed #28692 to track this issue.
(In reply to comment #38) > Another comment, you need to add a way for the streaming implementation to tell > the CM that starting to send has failed (ie, I can't open the camera device > because its in use). And that should result in the change in Senders to fail > and the UI being informed of that. Not in ending the video stream like the old > StreamedMedia. #28693
(In reply to comment #35) > In my opinion, the CallMembers can be channel-specific handles if the Call is a > Conference; we should explicitly say so, either here or in Conference, like so: > > property CallMembers > > [...] > > If the Call implements Group and the Group members are channel-specific > handles, the CallMembers SHOULD also be channel-specific handles. > > | In telephony, [... talk about switchboard lines and anonymous users]. In > Muji, [... talk about XEP-0045]. #28694
(In reply to comment #34) > CodecOffer (and the NewCodecOffer signal) should offer a way for the streaming > implementation to know which codecs changed and which didn't. It's not clear to > me if it contains the codecs from everyone or just from those who > joined/changed. #28695
(In reply to comment #33) > (In reply to comment #32) > > (In reply to comment #22) > > > Interactions with other interfaces: > > > > > > > > > Chan.I.DTMF has methods that are in terms of integer stream IDs. We could > > > either reimplement it as a channel interface in terms of streams' object paths, > > > or reimplement it as a stream (or even content) interface? > > It should either be an interface on the Channel (with no stream identifier) or > on the Content. The same "data" is sent on all streams of one Content. I'd > personally be favorable to putting in on the Content. In any regular call there > will be only one audio Content, so no loss there. And in the case of a fancy > application, then the application can decide what it wants. #28696
(In reply to comment #31) > StreamedMedia's use of the Group interface means it has difficulty representing > a call to yourself (which might seem stupid, but in XMPP, you can call another > of your own resources - e.g. you're in Empathy calling your own N900). > > If we care about this usage, we should make sure Call can support it without > getting confused. If we don't, we should explicitly declare that it's out of > scope. #28697
(In reply to comment #30) > Also, don't confuse with SelectedCandidate, this is to tell other side how the > ICE negotiation has concluded (it prevents a race in the ICE negotiation). So > its unrelated to rawudp fallback. #28698
(In reply to comment #26)
> While implementing Call support in tp-qt4 I came up with some
> questions/concerns as follows:
>
> - Call.Stream.RequestReceiving should accept more than one contact, I want all
> contacts to start/stop sending media to me for example

#28699

> - IMHO Call.Stream.Senders should be called Call.Stream.Members, they are
> members of the stream and may not be sending anything.

#28700

> - Add Call.RemoveContent

#28701

> - IMHO Call.Ringing should be called Call.SetRinging as it's a action not a
> signal

#28702

> Random ideas/doubts:
> - Looking at the Content API I believe streams will be added/removed
> automatically when someone joins/leaves the channel.
> Example:
> - A calls B and adds a content of type Audio and a content of type Video, a
> stream will be created automatically for each content and StreamAdded will be
> emitted accordingly
> - Now A or B invites C to join the channel, I believe a new stream will be
> created for each content to let user C join the conversation. If that is not
> the case, how can we add streams for C? And what happens if C does not support
> Audio calls? only video? Will the Audio content be just ignored or will it fail
> as C cannot join the conversation.

See #28703

> - Right now we are using the DTMF iface to support DTMF, but it does require a
> stream ID. I believe the best thing to do here is to add a ID for Call.Stream
> or even Call.Content in case we want to send DTMF signals to all streams in a
> content.

#28696 (which I think is already solved by the DTMF interface)

> Adding an unique ID per stream/content would make it even easier for
> high-level API implementations, as SM already uses it and we can use it as a
> key for maps/hash when a content/stream is added/removed/changes. Using Stream
> object path is a solution but SM uses integer ids, so I just removed the usage
> of SM.Stream.ID as key for now.

Yeah, we're not going to do integer IDs for streams.

> - There is no equivalent for Stream.State (none, disconnected, connected) in
> the call API, so I just dropped this in tp-qt4 high-level API. Maybe we need to
> have it and return Connected always for Call channels.

That's on the endpoints (as there can be multiple ones with ICE forking).
(In reply to comment #25)
> I just noticed this bug exists, some comments on the candidates API comments:
>
> (In reply to comment #12)
> > * It is claimed that STUNServers cannot change once the stream has been
> > created. This seems likely to be a lie, given that we now have change
> > notification of a sort? It should have proper change notification, though, if
> > it can change (perhaps just an "added" signal).
> > * Likewise, what is RelayInfo's change notification?
>
> There probably should be separate signals for STUN servers and RelayInfo
> changing.. And the signal should probably contain the relevant information
> instead of having to do another round trip.

#28704

> > * Does it ever make sense to remove a local candidate? If it does, we'll need a
> > LocalCandidatesRemoved signal
>
> No, it never makes sense to remove one candidate. The only case where you want
> to remove candidates is when doing a ICE restart and that means removing all
> candidates. And we probably want a special API for that (to be added later I
> guess). Doing a ICE restart would tell tp-fs to give all the candidates again.

ICE restarts are #28690.

> > * How many times can LocalCredentialsSet happen? 0-1? 0-infinity?
>
> Once.. but they will be re-set if a ICE restart is requested.
>
> Btw, how does the client set the local credentials ? It should be a method, not
> a signal.
>
> > * Does SetCredentials() change LocalCredentials? How many times can it be
> > called?
> >
> > * What is a candidate anyway? What is a component anyway? (Perhaps this
> > interface is only meant for use by people who speak fluent RTP, but I'm only
> > dimly aware of what a candidate is...)
>
> Yea, that should be documented.. With references to the ICE almost-RFC.

#28705

> > * If I infer correctly that LocalCredentials, LocalCandidates come from the
> > streaming implementation and nowhere else, do they actually need to be readable
> > at all, or can they be "write-only" (i.e. not exist as properties at all, only
> > as setter methods)?
>
> I tend to agree here that they can't be re-used anyway, so there is no reason
> to make them readable.

There also isn't a reason to make them not readable, though; I could go either way.
(In reply to comment #24)
> Any connection of Call.MutableContents and the ability to successfully call
> Stream.RequestReceiving?
> This is probably a different case, so should a property be added to Stream
> indicating that directionality of remote participants cannot be requested to
> change?

That's #28706.

> The use case: SIP, where session mutability (re-INVITE requests) could have
> issues with certain servers.

In that case sofiasip should set MutableContents to FALSE; RequestReceiving is about indicating to the other side that you would like to receive some of their media, which isn't possible in all protocols.
(In reply to comment #22) > Interactions with other interfaces: > > Chan.I.Hold is equally applicable to this API, and Chan.I.Hold.GetHoldState() > (etc.) are a more informative version of (Chan.T.Call.CallFlags & > Locally_Held). Should we just drop Locally_Held? #28707
(In reply to comment #20) > Should the peer(s) be removed from CallMembers on call termination? For now, my > example code does remove them; not doing so would mean deleting about 10 LoC. #28709
(In reply to comment #19) > RequestReceiving takes a handle argument, so I think it should be documented to > raise InvalidHandle (if the handle is completely invalid) and InvalidArgument > (if the handle is valid, but is not involved in this stream). #28710
(In reply to comment #18)
> 14:46 < smcv> sjoerd: is Call meant to have a way to remove contents? is
> Content meant to have a way to remove streams?
> 14:47 < smcv> sjoerd: at the moment it seems neither does
> 14:47 < smcv> sjoerd: I suppose Stream could interpret SetSending(FALSE),
> RequestReceiving(everyone, FALSE) as "remove yourself", but...
>
> For now, I'll apply #if 0 to the relevant bits.

Removing contents is #28701. I wouldn't know what removing a stream would mean on the wire, so I don't think it's applicable.
(In reply to comment #15)
> For the record, a multi party audio/video conference service that we could
> consider to support in some hypothetical future.
> Maybe worth to take a look on it to be sure that the future spec will be able
> to cope with it.
>
> http://www.adiumxtras.com/index.php?a=xtras&xtra_id=4959
> http://www.mebeam.com/

I'm not sure what setup they use, but apart from the Muji style of conferencing, the basic other style is to have one signalling/media focus. Filed as #28718.
(In reply to comment #13) > That last point should be written into the spec, if it's what we mean. #28719
(In reply to comment #12) > Stream.I.Media (also see updated smcv/call branch for some editorial fixes): > > * ServerInfoRetrieved and RetrievedServerInfo are ambiguous member names. > Without looking at the spec, try to tell me which one is the signal and which > one is the boolean property :-) Can we disambiguate these better? > > * It is claimed that STUNServers cannot change once the stream has been > created. This seems likely to be a lie, given that we now have change > notification of a sort? It should have proper change notification, though, if > it can change (perhaps just an "added" signal). > > * Likewise, what is RelayInfo's change notification? #28704 > * Does it ever make sense to remove a local candidate? If it does, we'll need a > LocalCandidatesRemoved signal > > * What is LocalCredentials and what is its rationale? > > * LocalCredentials will need to be a named <tp:struct> (or a pair of string > properties), otherwise telepathy-qt4 will be unable to bind it > > * How many times can LocalCredentialsSet happen? 0-1? 0-infinity? > > * Does SetCredentials() change LocalCredentials? How many times can it be > called? > > * What is a candidate anyway? What is a component anyway? (Perhaps this > interface is only meant for use by people who speak fluent RTP, but I'm only > dimly aware of what a candidate is...) > > * Am I right in thinking that Stream.I.Media deals with local candidates (ways > in which the local user tells remote users that we can perhaps be contacted) > while remote candidates (ways in which remote users tell us they can perhaps be > contacted) are all dealt with by Endpoint? The (missing) introductory docstring > should say this sort of thing. > > * If I infer correctly that LocalCredentials, LocalCandidates come from the > streaming implementation and nowhere else, do they actually need to be readable > at all, or can they be "write-only" (i.e. not exist as properties at all, only > as setter methods)? > > * In Candidate_Info: we conventionally use "g-object-case" for un-namespaced > bags of strings, and reserve CamelCase for D-Bus properties. Or is there some > external thing we're being consistent with? > > * The descriptions in Candidate_Info aren't sufficient for me to understand > what they're for. #28705
(In reply to comment #11) > (In reply to comment #9) > > (In reply to comment #7) > > > Stream_Transport_Type takes values Raw_UDP, ICE, GTALK_P2P, MSN and WLM2009. > > > Why not Raw_UDP, ICE_UDP, GTalk_P2P, WLM_8_5 and WLM_2009, which would be the > > > obvious mapping from the MediaSignalling transports? Is there some other source > > > we're trying to remain consistent with? (The handler-capability-tokens in Call > > > should be kept in sync with these.) > > > > I've changed these in my branch, except that I left ICE as-is rather than > > renaming to ICE_UDP in case we want to use it for ICE-TCP. (Do we?) > > Probably better to just call it ICE_UDP for clarity. If ICE-TCP ever happens we > probably won't use it on voip calls. #28688 > > More questions/gaps, having looked at Content and Stream in detail: > > > > * What significance/rationale does the Content.Name have? I assume that this is > > the mostly-opaque content name from Jingle? > Yes > > Do clients really need to be able to specify this? > > My nefarious plan is to start putting better names in it from the UI, because i > want a nice demo UI that can both send a camera and say slides. In case of > having multiple streams from the same content this names suddenly become a lot > more useful for the UI. > > > Do they have to be able to cope with being given a Name that > > wasn't what they asked for? > > Yes, the Name is only advisory. One shouldn't rely on it in any way. see also #28708 > > > * Content.Disposition seems to be a bit of a mixed bag. Should it really be an > > enum, or should it be a flag-set? > Enum is correct, olivier had the correct remark that the protocol doesn't > always tell you things have early media. So maybe it might be better to just > have an Initial boolean property > > > * Please explain what's going on with "early media"? > > We just don't know... Jingle has a way of telling you that a stream is going to > use early media, SIP doesn't in any way and just sends you media whether you > want it or not. Which makes me think we shouldn't need to distinguish this on > the non-media interface after all #28720 > > * Is Call.Accept canonically called Accept (as in Call) or Answer (as > > referenced in Content)? I've assumed the former. > > Accept, it used to be Answer at some point i think and they got out of sync. > Especially now both sides need to call Accept() Answer doesn't make sense > anymore :) > > * Should Sending_State have a state for "I've asked the remote contact to shut > > up, but they haven't", for symmetry with Pending_Send? > > Maybe. Maybe have <tp:enumvalue suffix="Pending_Shutup" value="4"> or somesuch > ? :) #28721 > > * Am I right in my clarifications of Stream and sending states? I got quite > > confused... > Seems correct. > > > Still to spec-lint: Content.I.Media, Stream.I.Media and Endpoint. > > I've merged your branch as-is. The one comment i have is that we should > probably make it so that there is always at least one content. Similar in > spirit to: > http://xmpp.org/extensions/xep-0166.html#def-action-content-remove > > As in if you're about the remove the last content, you should just end the call > instead. Open question here is if trying to remove the last content will result > in an error or the call being automagically hung up..
(In reply to comment #10)
> Looking at Content.I.Media and CodecOffer (see my call-media branch for
> editorial changes and clarifications):
>
> > FIXME: How should the streaming implementation know when it is its turn
> > to set the codecs.
>
> Well? :-)

#28692 :)

> Sjoerd says we also need to think about what happens if a change to codecs via
> SetCodecs() but a remote contact doesn't like it.

#28723

> Call_Content_Codec_Offer also has FIXMEs for:
>
> * add Accepted and Rejected signals?
> * add error codes and strings to Reject()

#28722
(In reply to comment #3)
> Myself and Simon had a small spec meet last week about the spec, the following
> are the notes from them.
>
> * Hangup (ss) or (uss) => Close implies unexpected channel closure.
> - might be deprecated by TP 1.0
>
> The hangup method on the Call channel should be able to take an error. We need
> to make up our minds if this should be (uss) (Error enum, Dbus error string,
> Debug message) or just (ss) (Dbus error string, Debug message)..
>
> The former has the advantage that we use the enum for categories that rarely
> gets extended, such that applications can fallback to using the enum value if
> they don't recognize the (more detailed) dbus error string.

Gone with the former; done.

> * AddContents:
> * flesh out the rationale for content name.
> * E_INVAL is only for content types ? E_INVAL maybe be NotCapable instead.
>
> What error should be reported when a content is added with a media type that
> the CM doesn't support. Also what error should be reported if a media type is
> added which isn't possible in this call (can't add a second video stream, can't
> add a content when the content set isn't mutable etc)

#28724

> * InitialTransport: s => Needs to be given a type, we might have one in the old
> api already

#28725

> * Make it very clear that we mandate that either InitialAudio or InitialVideo
> is mandatory.

#28717

> * Need a way to expose rtp profiles (AVP/AVPF)

#28687 and probably #28686

> * Capability tokens need to be nicely namespaced. and also add a capability
> token for shared memory transport (as implemented by farsight)

Add a capability token for the shared memory transport.

> * HardwareStreaming needs to be specced as an immutable property

#28728

> * Add rationale hardware streaming (no need to start S-E, open a webcam etc
> etc
> if it's streamed by hardware)

Done.

> * Rumor has it that some stuff are partially hardware streamed (e.g. GSM for
> audio, SIP for video),
> would be good if we could verify it, although the wording already such that
> this is allowed.

#28729

> * unnamespaced asv keys should be in the same style as GObject properties

Assuming spec linting solved this.

> * Ponder poking the possible handler before approvers are ongoing (so we can
> send candidates while the call isn't approvered yet).
>
> With ice you want to start exchanging candidates as soon as the call comes in
> (iotw, when the phone is ringing, which is when the call is at the approver
> stage). This means that ideally the handler would already have the channel. If
> another handler is decided it could restart the negotiation by doing an ice
> restart (although hopefully this will be uncommon...). So what we might need is
> an AddRequest like thing from mission-control, to warn handlers that they might
> get a channel.
>
> * Document and ponder when to actually start the outgoing calling.
>
> For voip calls this is not a problem, you can usually only start calling once
> you've given all the contents a set of codecs. But for calls with hardware
> streaming this might be a bit more tricky, as in,
> it might be undesirable to start dailing before the handler has popped up.
> Maybe the handler should also Accept() outgoing calls (and starts of in the
> pending call state)?
> * Work out division of responsibility between approver and handler.
>
> iotw, should the Handler or the Approver call Accept. Imho the handler should,
> so you can make sure that you have a Call UI once you accept the call.

Solved by always requiring the handler to call Accept before starting the call.
> * Mention that CallState is exhaustive ?
> * CallState add 0 for unknown.

Done.

> The enum in the CallState should be complete, as in, it should encompass the
> complete state machine. Where extra information is part of the asv. Also, we
> should change the call state to (uua{sv}) aka (State, Flags, Info). Where
> minimal voip UI's should be able to get the basic information from the State +
> Flags and more complete handlers can get extra information from the info dict.

Done.

> * For 1-1 calls have a error state for the self-handle if they missed the call

#28731

> * Explicitely mention that the target handle is the person you initially called
>
> This is important for conference calls, where the person that initially called
> you or invited you might not be in the channel anymore, so handlers shouldn't
> rely on this

#28732

> * Invent a channelstate for the health of the channel
>
> To make it easier the understand the overall state of the channel we should
> have ChannelState property with an actor. So you only have to look at this
> property to decide for example that the call ended because the other side
> rejected it, without needing to dig in the CallState property

#28733

> * Have an InitialContent boolean instead of Disposition
>
> The Disposition enum has <none>, <early media>, <initial> but on protocols like
> SIP you can't actually know what stream is early media, so just <none>,
> <initial> might be better. In which case it can be replaced by a simple boolen.

#28720

> * StreamAdded might become plural

#28736

> * Stream: Document what the PendingSend sending state means for the self
> handle.
> The PendingSend state for the self handle indicates that the other side
> requested our side to start sending media, which can be done by calling
> SetSending. When a call is accepted, all _initial_ contents with streams in the
> PendingSend state for the self-handle are automatically set to sending.
>
> e.g. on an incoming call it means you need to Accept to start the actual call,
> on an outgoing call it might mean, you need to accept before actually starting
> the call..

#28735

> * Stream: Rename Senders to Members
>
> On the Stream interface the Senders property should be renamed to Members for a
> bit more clarity

#28700

> * Need to ponder on calls from anonymous number.

#28730
(In reply to comment #3)
> What error should be reported when a content is added with a media type that
> the CM doesn't support. Also what error should be reported if a media type is
> added which isn't possible in this call (can't add a second video stream, can't
> add a content when the content set isn't mutable etc)

There should be a property listing the currently addable content types, so that the UI can hide upgrade options that don't work.
Bug #37291 to clarify the semantics of Call.Stream.LocalSendingState.
For the record https://bugzilla.gnome.org/show_bug.cgi?id=659683 is the Empathy bug about making empathy-call the default. This is obviously blocked by this bug. Any chance to see it fixed in 6 months? It would be ace to have this for GNOME 3.4.
Don't hold your breath, it still needs to be implemented in every connection manager...
What is the plan with the new versioning scheme that Call has? Is Call1 a stable version that will be replaced by Call2 when needed? Is the versioning just for drafts and will be removed in the final version? I'm just wondering how this should be handled in tp-qt4.
My understanding is that Call1 is the name of the spec which will be replaced by Call2, and that the plan is that all new interfaces will be versioned from now on.
(In reply to comment #68)
> My understanding is that Call1 is the name of the spec which will be replaced
> by Call2. And that the place is that all new interfaces will be versioned from
> now on.

The current spec release up on tp.fd.o lists the Call1 things as unstable, with the usual warning:

    WARNING: This interface is experimental and is likely to cause havoc to
    your API/ABI if bindings are generated. Do not include this interface in
    libraries that care about compatibility.

If the intent is that the currently specified Call1 is the final Call1, and not a Call0.999-maybe-going-to-turn-into-the-stable-Call1-one-day, that should be declared in the spec and the causes-havoc annotations removed. At that point, of course, Call1 can't be changed anymore, and will need to be duplicated as Call2 for further incompatible development.
I think we want to implement it in a CM and a UI first I guess.
(In reply to comment #70) > I think we want to implement it in a CM and a UI first I guess. I see, so it's still really just a Call1 "pre-release", not the actual stable first version then.
Fixed in 0.25.2 (!)