Bug 24377

Summary: [0.9] Add well-known bundles and our own caps to cache, and serve disco responses from it
Product: Telepathy
Reporter: Will Thompson <will>
Component: gabble
Assignee: Telepathy bugs list <telepathy-bugs>
Status: RESOLVED FIXED
QA Contact: Telepathy bugs list <telepathy-bugs>
Severity: normal
Priority: medium
Keywords: patch
Version: unspecified   
Hardware: Other   
OS: All   
URL: http://git.collabora.co.uk/?p=user/wjt/telepathy-gabble-wjt.git;a=shortlog;h=refs/heads/improve-caps-cache-0.9
Whiteboard:
Bug Depends on: 24344    
Bug Blocks:    

Description Will Thompson 2009-10-07 08:48:57 UTC
+++ This bug was initially created as a clone of Bug #24344 +++

We shouldn't need to disco the Google Talk clients' many, many caps bundles: either we already know what they mean, or we don't care about them. We should also cache what the caps hashes we publish mean, to stop us discoing anyone else (possibly even ourselves!) who advertises the same hash. Finally, we should remember our past caps hashes and respond to disco requests for them, both to be a better-behaved client and to work around an iChat bug.

A branch fixing this was merged to 0.8 (commit id b64fc7f8); it needs porting to 0.9.
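
A minimal sketch of the cache described above, assuming a plain map from the caps node string ("node#ver") to the feature list it stands for; the names and node URI are illustrative, not Gabble's actual code:

/* Sketch of a caps cache: well-known bundles and every hash we ourselves
 * have advertised are inserted up front, so a lookup hit means no disco
 * IQ needs to be sent at all.
 * Build: gcc caps-cache-sketch.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>

/* Maps "node#ver" strings to NULL-terminated feature lists. */
static GHashTable *caps_cache = NULL;

static void
caps_cache_init (void)
{
  caps_cache = g_hash_table_new_full (g_str_hash, g_str_equal,
      g_free, (GDestroyNotify) g_strfreev);
}

/* Record what a node means (a well-known bundle, or our own current or
 * past hash). */
static void
caps_cache_insert (const gchar *node, gchar **features)
{
  g_hash_table_insert (caps_cache, g_strdup (node), g_strdupv (features));
}

/* Consulted before discoing a contact: a hit means we already know what
 * the advertised node means. */
static gchar **
caps_cache_lookup (const gchar *node)
{
  return g_hash_table_lookup (caps_cache, node);
}

int
main (void)
{
  gchar *features[] = { "http://jabber.org/protocol/jingle", NULL };

  caps_cache_init ();
  caps_cache_insert ("http://example.com/caps#SOMEHASH", features);
  g_assert (caps_cache_lookup ("http://example.com/caps#SOMEHASH") != NULL);
  g_assert (caps_cache_lookup ("http://example.com/caps#UNKNOWN") == NULL);
  return 0;
}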
Comment 1 Alban Crequy 2009-10-07 16:25:40 UTC
(In reply to comment #0)
> Finally, we should remember our past caps hashes, and respond to disco requests for them, both to be a
> better-behaved client and to work around an iChat bug.

Just curious, what is this iChat bug? One could consider it better behaviour to reply with <item-not-found/> instead of old caps:
http://mail.jabber.org/pipermail/standards/2008-May/018712.html
http://mail.jabber.org/pipermail/standards/2008-May/018713.html
Comment 2 Will Thompson 2009-10-08 01:16:31 UTC
(In reply to comment #1)
> Just curious, what is this iChat bug?

If you return an error in response to a caps disco request, it asks you again.

> One could consider it better behaviour to
> reply with <item-not-found/> instead of old caps:
> http://mail.jabber.org/pipermail/standards/2008-May/018712.html
> http://mail.jabber.org/pipermail/standards/2008-May/018713.html

I don't think that's *better* behaviour, just theoretically acceptable behaviour. Given that it's not exactly hard to remember our past hashes (we want to cache them anyway, so we don't disco other identical Gabbles!), I think responding to older hashes is better behaviour.
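
To illustrate what "responding to older hashes" amounts to: a rough sketch (not Gabble's stanza-building code) of turning a cache hit for a previously advertised node into a disco#info <query/> payload instead of an error; real code would also need to escape the attribute values:

/* Rough sketch: build a disco#info reply for a node found in the cache,
 * instead of answering <item-not-found/>. Not Gabble's actual code; real
 * code must escape attribute values (e.g. with g_markup_escape_text()). */
#include <glib.h>

static gchar *
build_disco_reply (const gchar *node, gchar **features)
{
  GString *xml = g_string_new (NULL);
  guint i;

  g_string_append_printf (xml,
      "<query xmlns='http://jabber.org/protocol/disco#info' node='%s'>",
      node);

  for (i = 0; features != NULL && features[i] != NULL; i++)
    g_string_append_printf (xml, "<feature var='%s'/>", features[i]);

  g_string_append (xml, "</query>");
  return g_string_free (xml, FALSE);
}

int
main (void)
{
  gchar *features[] = { "http://jabber.org/protocol/caps", NULL };
  gchar *reply = build_disco_reply ("http://example.com/caps#OLDHASH",
      features);

  g_print ("%s\n", reply);
  g_free (reply);
  return 0;
}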

(Hrm. I wonder what Gabble does if it makes a caps disco request, then the hash changes, then the first one returns...)
Comment 3 Will Thompson 2009-10-08 01:19:33 UTC
(In reply to comment #2)
> (Hrm. I wonder what Gabble does if it makes a caps disco request, then the hash
> changes, then the first one returns...)

Ah, it does the right thing, because of the caps serial number.
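
For readers unfamiliar with that mechanism, a simplified, hypothetical sketch of the serial-number check (not Gabble's actual data structures): the serial is bumped each time a contact's advertised hash changes, so a reply to a disco request sent before the change can be recognised as stale and dropped:

/* Simplified, hypothetical sketch of the caps serial number check. */
#include <glib.h>

typedef struct {
  guint caps_serial;          /* bumped whenever the contact's hash changes */
} Contact;

typedef struct {
  Contact *contact;
  guint serial_at_request;    /* snapshot taken when the disco IQ was sent */
} PendingDisco;

/* Called when the disco#info result finally arrives. */
static gboolean
disco_reply_is_stale (PendingDisco *pending)
{
  /* If the hash changed while the request was in flight, the reply
   * describes capabilities the contact no longer advertises. */
  return pending->serial_at_request != pending->contact->caps_serial;
}

int
main (void)
{
  Contact c = { 1 };
  PendingDisco p = { &c, 1 };

  c.caps_serial = 2;   /* presence with a new hash arrived meanwhile */
  g_assert (disco_reply_is_stale (&p));
  return 0;
}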
Comment 4 Will Thompson 2009-10-08 02:06:55 UTC
Porting the branch to 0.9 wasn't too hard: see my 'improve-caps-cache-0.9' branch.
Comment 5 Simon McVittie 2009-10-12 04:35:40 UTC
+  /* FIXME: we should satisfy any waiters for this node now, but I think that
+   * can wait till 0.9.
+   */

Please file a bug and reference it in this comment. Otherwise, ++
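
For context, the "waiters" in that FIXME are pending capability lookups blocked on a node becoming known; a rough, generic sketch (purely illustrative, not Gabble's internals) of satisfying them once the node's features are added to the cache:

/* Purely illustrative: complete every pending lookup waiting on a node
 * as soon as its features become known. */
#include <glib.h>

typedef void (*CapsCallback) (const gchar *node, gchar **features,
    gpointer user_data);

typedef struct {
  CapsCallback callback;
  gpointer user_data;
} Waiter;

static void
satisfy_waiters (GQueue *waiters, const gchar *node, gchar **features)
{
  Waiter *w;

  while ((w = g_queue_pop_head (waiters)) != NULL)
    {
      w->callback (node, features, w->user_data);
      g_free (w);
    }
}

static void
print_waiter_cb (const gchar *node, gchar **features, gpointer user_data)
{
  g_print ("%s -> %s (for %s)\n", node, features[0],
      (const gchar *) user_data);
}

int
main (void)
{
  GQueue *waiters = g_queue_new ();
  Waiter *w = g_new0 (Waiter, 1);
  gchar *features[] = { "http://jabber.org/protocol/caps", NULL };

  w->callback = print_waiter_cb;
  w->user_data = (gpointer) "contact@example.com";
  g_queue_push_tail (waiters, w);

  satisfy_waiters (waiters, "http://example.com/caps#SOMEHASH", features);
  g_queue_free (waiters);
  return 0;
}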
Comment 6 Will Thompson 2009-10-19 07:00:04 UTC
(In reply to comment #5)
> +  /* FIXME: we should satisfy any waiters for this node now, but I think that
> +   * can wait till 0.9.
> +   */
> 
> Please file a bug and reference it in this comment.

Done (bug #24619) and merged:

commit e435bafcde2fb77a43698ae49c316821eb41f571
Author: Will Thompson <will.thompson@collabora.co.uk>
Date:   Mon Oct 19 14:56:50 2009 +0100

    Note bug number for satisfying waiters with our own hash

commit 5edd11b52b0d2341ba648cbd87e33cb597bb8164
Merge: 9904163 e435baf
Author: Will Thompson <will.thompson@collabora.co.uk>
Date:   Mon Oct 19 14:57:50 2009 +0100

    Merge branch 'improve-caps-cache-0.9'
    
    Fixes fd.o bug #24377.
    
    Reviewed-by: Simon McVittie <simon.mcvittie@collabora.co.uk>
