I wrote about this on the telepathy mailing list, but the discussion didn't go anywhere: http://lists.freedesktop.org/archives/telepathy/2007-August/001078.html

Please read that mail for an explanation of the problem. My proposed solution is to have a RequestAvatars() API that requests several avatars at once and returns them as the method's return value (not via a signal). If there are D-Bus limitations that prevent this (a maximum number of pending calls), then the problem is on the D-Bus side -- the API shouldn't be hacked around it.
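To make the proposal concrete, here is a rough dbus-python sketch of how such a call could look from the client side; the batched, data-in-the-reply signature is purely hypothetical (it is the proposal, not the current spec), and the two-minute timeout is an arbitrary illustration:

import dbus

AVATARS_IFACE = "org.freedesktop.Telepathy.Connection.Interface.Avatars"

def fetch_avatars(bus, conn_bus_name, conn_object_path, handles):
    """Call the *proposed* batched RequestAvatars and get the avatar data
    back in the reply itself, e.g. as (handle, token, data, MIME type)
    structs, instead of waiting for AvatarRetrieved signals."""
    conn_object = bus.get_object(conn_bus_name, conn_object_path)
    avatars = dbus.Interface(conn_object, AVATARS_IFACE)
    # A generous timeout, since fetching avatars involves network round
    # trips that can exceed the default D-Bus method call timeout.
    return avatars.RequestAvatars(handles, timeout=120)

# Usage (bus name, object path and handles are illustrative):
# bus = dbus.SessionBus()
# avatars = fetch_avatars(bus,
#     "org.freedesktop.Telepathy.Connection.gabble.jabber.me_40example_2ecom",
#     "/org/freedesktop/Telepathy/Connection/gabble/jabber/me_40example_2ecom",
#     [42, 43])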
I guess this is not a bug; actually, if you make RequestAvatars block again, it will turn into a regression. You cannot handle NoReply, and you probably want other clients to have access to the avatar you just requested, because retrieving it is an expensive operation.
(In reply to comment #1)
> I guess this is not a bug; actually, if you make RequestAvatars block
> again, it will turn into a regression. You cannot handle NoReply

Isn't it possible to return an empty array, or 0-length avatars?

> and you probably want other clients to have access to the avatar you just
> requested, because retrieving it is an expensive operation.

Avatars can be cached by the CM. Any API will be better than the current one, because right now, when an avatar changes and you have three processes interested in it, three AvatarRetrieved signals are emitted, each one waking up all three processes.
> Isn't it possible to return an empty array, or 0-length avatars?

I'm not sure what you are aiming at with this, but keep in mind that there is no timeout on a DBusMessage, and even if there were a way to know how long the client will wait for the reply, returning a NULL avatar would not solve the problem; an error would perhaps be more suitable. Either way you have to rely on the CM cache, if there is one, and then you probably end up duplicating the cache on the application side too. I don't find this any better than what we already have -- do you really find calling a method repeatedly until you get the proper answer better than waiting for signals?

> Avatars can be cached by the CM. Any API will be better than the current
> one, because right now, when an avatar changes and you have three
> processes interested in it, three AvatarRetrieved signals are emitted,
> each one waking up all three processes.

Avatars don't change constantly, and besides that you should be more worried about bindings that add a match rule for every signal; that is a real problem and can make applications wake up far too many times.
The major problem we solve by using signals is that receiving a large avatar can take longer than the D-Bus method call timeout; once we've done the expensive part of the call (getting the avatar across the network), we might as well hand it over to the client (the cheap part of the call) even if it took so long that a method call would have timed out.

Also, doing many method calls in parallel is problematic, because until very recent versions of D-Bus (which are not yet used on embedded platforms like the N800), there was a small maximum number of parallel pending calls allowed by the bus daemon (I think it was 32).

> Avatars can be cached by the CM.

Connection managers should not cache avatars. The Telepathy specification is meant to encode what happens in the underlying IM protocol, and connection managers are just meant to bridge between the underlying protocol and the client. The intention was always that clients are responsible for caching avatars:

- clients can do this in a uniform way across all connection managers
- cached avatars can be usable when no CM is running (e.g. when you're offline)
- connection managers should not enforce a particular storage policy for avatars (for instance, an appropriate cache size for a desktop PC would be excessively large for an N800, and the desired location is likely to be different)
- connection managers should not write to the filesystem (so they can be jailed with e.g. SELinux in future)
- connection managers should not be configurable except via the D-Bus API at runtime (for simplification, and e.g. SELinux jailing)
(In reply to comment #5)
> The major problem we solve by using signals is that receiving a large
> avatar can take longer than the D-Bus method call timeout; once we've done
> the expensive part of the call (getting the avatar across the network), we
> might as well hand it over to the client (the cheap part of the call) even
> if it took so long that a method call would have timed out.

Well, the timeout can be raised up to several hours, that should be enough :-)

> Also, doing many method calls in parallel is problematic, because until
> very recent versions of D-Bus (which are not yet used on embedded
> platforms like the N800), there was a small maximum number of parallel
> pending calls allowed by the bus daemon (I think it was 32).

Yes, it is 32 on the N800. But I don't think the spec should take that into too much consideration; my suggestion is: don't deprecate the RequestAvatar method, and add a RequestAvatars-like API that returns the avatars in its return value.

And it wouldn't be too bad to add the avatar data to the Avatar(s)Updated signal itself, since most listeners of this signal will also want to retrieve the avatar once they see that the token has changed. That way, RequestAvatars method calls would probably become rarer.
> And it wouldn't be too bad to add the avatar data to the Avatar(s)Updated
> signal itself, since most listeners of this signal will also want to
> retrieve the avatar once they see that the token has changed. That way,
> RequestAvatars method calls would probably become rarer.

(Here I say "Empathy" but it can be any tp client.)

RequestAvatars is already extremely rare: Empathy caches avatars on disk, so once Empathy gets the token it gets the avatar from the cache most of the time and doesn't call RequestAvatars. If Empathy doesn't have the avatar in its cache, it's most likely that all processes listening to AvatarRetrieved would need the avatar data anyway.

MC is a bit special here, since it only needs the self avatar, so if it listens to AvatarRetrieved it will get all avatars. This can be optimised by connecting to that signal only when MC receives AvatarUpdated(self handle) with a different token than the one currently stored, then calling RequestAvatars(self handle) and disconnecting from AvatarRetrieved once MC gets the new self avatar data.

And the CM should wait a bit when receiving RequestAvatars(), because it's likely to get the same request from many clients at once, so it could emit AvatarRetrieved only once with all of the requested avatars.
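To illustrate the client-side flow described above, here is a minimal dbus-python sketch; the token-keyed dict stands in for Empathy's on-disk cache and the function names are made up, while the Avatars interface, method and signal names are the ones from the spec:

import dbus

AVATARS_IFACE = "org.freedesktop.Telepathy.Connection.Interface.Avatars"

# Stand-in for a real client cache (Empathy keeps this on disk, keyed by token).
avatar_cache = {}   # token -> (avatar bytes, MIME type)

def watch_avatars(bus, conn_bus_name, conn_object_path):
    conn_object = bus.get_object(conn_bus_name, conn_object_path)
    avatars = dbus.Interface(conn_object, AVATARS_IFACE)

    def avatar_updated(handle, token):
        # Only hit the network when this token is not already cached.
        if token not in avatar_cache:
            avatars.RequestAvatars([handle])

    def avatar_retrieved(handle, token, avatar, mime_type):
        avatar_cache[token] = (bytes(avatar), str(mime_type))
        print("got %d bytes (%s) for handle %d" % (len(avatar), mime_type, handle))

    # Requires a running main loop, e.g. dbus.mainloop.glib + GLib.MainLoop.
    avatars.connect_to_signal("AvatarUpdated", avatar_updated)
    avatars.connect_to_signal("AvatarRetrieved", avatar_retrieved)
    return avatars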
(In reply to comment #7)
> And the CM should wait a bit when receiving RequestAvatars(), because it's
> likely to get the same request from many clients at once, so it could emit
> AvatarRetrieved only once with all of the requested avatars.

You know it's broken when 1) clients are expected to implement caching on top of what's already done by CMs, and 2) a CM should jump through hoops to implement it efficiently.

I propose deprecating RequestAvatars for a new method that returns the available avatars immediately; let's name it LookupAvatars for purposes of the discussion. If an avatar is immediately available, it's returned right away in an array of avatar tuples. Otherwise, avatar retrieval is initiated and AvatarRetrieved is emitted when the avatar is available.

This way, we can avoid the race between AvatarRetrieved and RequestAvatars for the same handle invoked by concurrent clients, which may result in redundant emissions of AvatarRetrieved.
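To pin down the shape of the proposal, a minimal dbus-python service-side sketch could look like this; LookupAvatars itself is hypothetical (it is not in the spec), and _ram_cache and _start_network_fetch() are placeholder helpers:

import dbus.service

AVATARS_IFACE = "org.freedesktop.Telepathy.Connection.Interface.Avatars"

class AvatarLookupMixin(object):
    """Sketch of the proposed LookupAvatars; a real CM would mix this into
    its exported Connection object (a dbus.service.Object)."""

    def __init__(self):
        self._ram_cache = {}   # handle -> (token, avatar bytes, MIME type)

    @dbus.service.method(AVATARS_IFACE,
                         in_signature="au", out_signature="a(usays)")
    def LookupAvatars(self, contacts):
        available = []
        missing = []
        for handle in contacts:
            entry = self._ram_cache.get(handle)
            if entry is not None:
                token, avatar, mime_type = entry
                available.append((handle, token, avatar, mime_type))
            else:
                missing.append(handle)
        if missing:
            # Start the (slow) network retrieval; each result is announced
            # later by a single AvatarRetrieved emission.
            self._start_network_fetch(missing)
        return available

    def _start_network_fetch(self, handles):
        # Placeholder: a real CM would ask the server for these avatars here.
        raise NotImplementedError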
> You know it's broken when 1) clients are expected to implement caching on
> top of what's already done by CMs

By policy, CMs are not expected to have access to persistent storage, so of course they can't cache very well. As I said above, long-term caching policy and mechanism is completely out of scope for CMs.

Perhaps MC, rather than the front-end, should be the process maintaining the avatar cache, since it already makes other policy decisions.
(In reply to comment #9)
> By policy, CMs are not expected to have access to persistent storage, so
> of course they can't cache very well. As I said above, long-term caching
> policy and mechanism is completely out of scope for CMs.

I see. But then a CM should avoid any avatar caching and hit the wire every time RequestAvatars is called, and the avatars would be stored in whatever presence data storage the client has. Is that the recommended approach?
(In reply to comment #9)
> By policy, CMs are not expected to have access to persistent storage, so
> of course they can't cache very well. As I said above, long-term caching
> policy and mechanism is completely out of scope for CMs.

Agreed for long-term caching, but to solve this avatar issue 5 seconds should be enough; if using temporary files is not acceptable (I see no reason why it shouldn't be; we are not talking about persistent storage), you can still cache avatars in RAM (maybe the last N avatars retrieved) and free them after a few seconds.
Mardy: we appear to be in violent agreement, and what you're talking about in the last comment is fine. Gabble already holds avatars in RAM, and ptlo recently fixed it to be able to reply to multiple requests with one AvatarRetrieved signal.

For the record: in the current design, other CMs SHOULD cache (at least a few) avatars during the lifetime of the Connection (to avoid repeated round trips to the server), MUST NOT cache avatars beyond the lifetime of a Connection (CMs shouldn't be stateful beyond a Connection's lifetime - that's an invariant of our API), and SHOULD NOT rely on being able to save avatars to the filesystem (they should cache them in RAM).

If you still believe the API is unworkable, bring it up with Rob, but please implement it anyway. I don't think re-badgering the Avatars API takes priority over everything else API-related that we're trying to work on (notably, MC).

[Capitalized words used in accordance with the RFC whose number I can never remember.]
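For illustration only, a per-Connection in-RAM cache consistent with those rules might look like this; the class, size limit and eviction policy are made up, not taken from Gabble:

from collections import OrderedDict

class ConnectionAvatarCache(object):
    """Avatar cache scoped to a single Connection: held only in RAM, never
    written to disk, and simply dropped when the Connection disconnects."""

    def __init__(self, max_entries=16):    # arbitrary illustrative bound
        self._entries = OrderedDict()       # token -> (avatar bytes, MIME type)
        self._max_entries = max_entries

    def store(self, token, avatar, mime_type):
        self._entries[token] = (avatar, mime_type)
        self._entries.move_to_end(token)
        while len(self._entries) > self._max_entries:
            self._entries.popitem(last=False)   # evict the oldest entry

    def lookup(self, token):
        entry = self._entries.get(token)
        if entry is not None:
            self._entries.move_to_end(token)    # keep recently used entries
        return entry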
(In reply to comment #12)
> Mardy: we appear to be in violent agreement, and what you're talking about
> in the last comment is fine. Gabble already holds avatars in RAM, and ptlo
> recently fixed it to be able to reply to multiple requests with one
> AvatarRetrieved signal.

How does this work? Does it mean that the emission of the AvatarRetrieved signal is delayed by some time (how much?) to see if other RequestAvatars calls are made?

We are not in violent agreement :-) My last comment was about implementing caching in order to second Mikhail's idea of a LookupAvatars method. Anyway, even with ptlo's optimisation, we are incurring some performance penalty, i.e. avatars are not advertised to the UI as soon as they could be. Maybe it's a very small delay, but I'd say that in any case it's a symptom of a broken API.

> For the record: in the current design, other CMs SHOULD cache (at least a
> few) avatars during the lifetime of the Connection (to avoid repeated
> round trips to the server), MUST NOT cache avatars beyond the lifetime of
> a Connection (CMs shouldn't be stateful beyond a Connection's lifetime -
> that's an invariant of our API), and SHOULD NOT rely on being able to save
> avatars to the filesystem (they should cache them in RAM).

Fine.

> If you still believe the API is unworkable, bring it up with Rob, but
> please implement it anyway. I don't think re-badgering the Avatars API
> takes priority over everything else API-related that we're trying to work
> on (notably, MC).

It's not unworkable, it's just creating performance problems: either we get too many signals, or the avatar retrieval is delayed by caching. The more I think of it, the less I can find a reason not to have the avatar data in the AvatarUpdated signal: with the current API, of the clients receiving it there will be at least one which calls RequestAvatars, with the result that all the clients which received the AvatarUpdated will also receive the AvatarRetrieved, unless they disconnect from the D-Bus proxy; so, why not bother them only once? This would also simplify the API a lot. Could we violently agree on this? ;-)

About implementing the current API... If other CMs are not implementing the deprecated API, I guess I'll have to implement the new one in MC; but please don't consider this a reason not to rework it, as IMHO there are lots of ways to improve it (and my suggestions might not be the best ones).
(In reply to comment #13)
> The more I think of it, the less I can find a reason not to have the
> avatar data in the AvatarUpdated signal

Ah, I see. The crucial fact you're missing is the reason the Avatars interface was ever this complicated: many network protocols tell us the avatar has changed, with some sort of summary, checksum or timestamp (in Telepathy we encapsulate all these as a "token"), *without* actually sending us the whole avatar. The whole design of the Avatars interface is based on trying to minimize the number of times we download the entire avatar from the server.

If the clients had a correct avatar cache ("correct" in the sense that it does what we propose it should do), we'd get something like this the first time:

XMPP server: <presence from="bob@example.com"><avatar-sha1 sha1="deadbeefdeadbeefdeadbeef">
Gabble: AvatarUpdated(bob_handle, "deadbeefdeadbeefdeadbeef")
Client: *looks in cache for "gabble/jabber/bob@example.com/deadbeefdeadbeefdeadbeef", can't find it*
Client: RequestAvatars([bob_handle])
Gabble: <iq type="get"><vCard/></iq>
Server eventually replies: <iq type="result"><vCard>[insert 11K of Base64 here]</vCard></iq>
Gabble: AvatarRetrieved(bob_handle, [insert 8K of JPEG here], 'image/jpeg')
Client: *puts in cache and displays to user*

but in a later session, after you've disconnected and reconnected:

XMPP server: <presence from="bob@example.com"><avatar-sha1 sha1="deadbeefdeadbeefdeadbeef">
Gabble: AvatarUpdated(bob_handle, "deadbeefdeadbeefdeadbeef")
Client: *looks in cache for "gabble/jabber/bob@example.com/deadbeefdeadbeefdeadbeef"*
Client: *displays to user*

This does save sending 8K of binary over D-Bus (not that we really care), but that's not the goal - the goal is that it saves downloading 11K of Base64 from the server (sometimes highly significant).

Depending on the protocol, you might even be able to recycle cache entries - in XMPP, if you oscillate between two avatars, the token for each one will always be the same because it's just the SHA-1 of the data - but a simplistic version, just storing the most recent avatar per account and its token, would be fine (and in fact, this simpler approach is optimal for some other protocols, where the token is just the timestamp of the last avatar change).

The more I think about this, the more I think we should have an avatar-caching service that (unlike Connections) conceptually persists between sessions (in practice, probably part of MC). Perhaps we could just take the code from Empathy and bolt on a D-Bus interface, or perhaps we should think about it properly as part of account management. It should store the avatars in ~/.cache (freedesktop directory specification for the win) on PCs, but it should perhaps offload the cache onto a replaceable SD card rather than irreplaceable internal flash (if possible) on Nokia tablets.
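A sketch of that "simplistic version" -- remembering only the most recent avatar and its token per contact -- assuming an XDG-style cache directory and a path layout similar to the cache key in the trace above; the layout and function names are illustrative, nothing here is mandated by the spec:

import os

def cache_root():
    # freedesktop basedir spec: honour XDG_CACHE_HOME, fall back to ~/.cache.
    return os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))

def contact_dir(cm, protocol, contact):
    # e.g. ~/.cache/avatars/gabble/jabber/bob@example.com
    return os.path.join(cache_root(), "avatars", cm, protocol, contact)

def cached_avatar(cm, protocol, contact, token):
    """Return the cached avatar bytes if the stored token still matches,
    otherwise None (the avatar changed or was never cached)."""
    d = contact_dir(cm, protocol, contact)
    try:
        with open(os.path.join(d, "token")) as f:
            if f.read().strip() != token:
                return None
        with open(os.path.join(d, "avatar"), "rb") as f:
            return f.read()
    except OSError:
        return None

def store_avatar(cm, protocol, contact, token, avatar):
    d = contact_dir(cm, protocol, contact)
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, "avatar"), "wb") as f:
        f.write(avatar)
    with open(os.path.join(d, "token"), "w") as f:
        f.write(token)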
> either we get too many signals, or the avatar retrieval is delayed by
> caching.

As implemented in the latest Gabble, neither is usually true, because we can generally assume that server round-trips are orders of magnitude slower than D-Bus. So events will typically go like this:

Server to Gabble: <presence><I-have-a-new-vCard-its-SHA1-is/></presence>
Gabble: AvatarUpdated(SHA1SHA1)
Client 1: RequestAvatars
Gabble to server: <iq type="get"><vCard/></iq>
Gabble to client 1: OK (i.e. D-Bus reply to RequestAvatars)
Client 2: RequestAvatars
Client 3: RequestAvatars
Gabble to client 2: OK
Gabble to client 3: OK
Server to Gabble: <iq type="result"><vCard>[base64 base64]</vCard></iq>
Gabble: AvatarRetrieved(binary binary)

If the vCard is already cached in RAM by Gabble, then it replies to RequestAvatars with an AvatarRetrieved signal instantly (causing unnecessary signals, yes), but this only occurs in the uncommon case where clients ask about vCards we already know about - and if a client does that, it probably means the client isn't implementing caching correctly, so we can't expect it to behave well ;-)
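A sketch of that request-coalescing behaviour on the CM side (not Gabble's actual code; the class, bookkeeping and callback names are made up):

class AvatarRequestCoalescer(object):
    """Collapse concurrent RequestAvatars calls for the same contact into a
    single server fetch and a single AvatarRetrieved emission."""

    def __init__(self, fetch_from_server, emit_avatar_retrieved):
        self._fetch = fetch_from_server      # handle -> starts an async fetch
        self._emit = emit_avatar_retrieved   # (handle, token, data, MIME type)
        self._in_flight = set()

    def request(self, handles):
        # Called for each client's RequestAvatars; returns immediately.
        for handle in handles:
            if handle not in self._in_flight:
                self._in_flight.add(handle)
                self._fetch(handle)
            # else: a fetch is already pending; the one AvatarRetrieved that
            # results from it will serve this client too.

    def fetched(self, handle, token, avatar, mime_type):
        # Called when the server reply (e.g. the vCard) arrives.
        self._in_flight.discard(handle)
        self._emit(handle, token, avatar, mime_type)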
(In reply to comment #14)
> This does save sending 8K of binary over D-Bus (not that we really care),
> but that's not the goal - the goal is that it saves downloading 11K of
> Base64 from the server (sometimes highly significant).

Considering this NOTABUG.