Adding a bug so this doesn't get lost. For the background see:
In short, the server ignores a client even when it has time to respond to it. This causes delays of up to 30ms for the ignored client, which is stuck in a round-trip call such as XQueryPointer().
Archives got rebuilt/renumbered so the old link is invalid.
New link seems to be:
Also relevant: http://lists.x.org/archives/xorg-devel/2013-October/038135.html
This is mostly fixed now, by:
Author: Keith Packard <firstname.lastname@example.org>
Date: Wed Jan 22 11:01:59 2014 -0800
dix: Praise clients which haven't run for a while, rather than idle clients
Author: Adam Jackson <email@example.com>
Date: Tue Nov 5 10:20:04 2013 -0500
smartsched: Tweak the default scheduler intervals
We will now select() afresh every time we switch clients, and rely on the scheduler scores to pick the most-deserving client each time through.

It's _possible_ that we should instead try to drain every ready fd every time we call select(), but I'm not totally convinced, and it would require a lot of algorithmic changes to our main loop to maintain both fairness and responsiveness. If we were a webserver it might make sense to maximize throughput like that, but probably we should consider select() cheap enough that we can just rely on the scheduler scores.