Bug 2920 - xresources leakage
Summary: xresources leakage
Status: RESOLVED NOTOURBUG
Alias: None
Product: xorg
Classification: Unclassified
Component: Server/General
Version: 6.8.2
Hardware: x86 (IA32) Linux (All)
Priority: high
Severity: major
Assignee: Xorg Project Team
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2005-04-07 04:03 UTC by John Nilsson
Modified: 2005-10-03 01:38 UTC
CC List: 3 users

See Also:
i915 platform:
i915 features:


Attachments
xrestop -b (48.00 KB, text/plain)
2005-04-08 14:49 UTC, John Nilsson
no flags Details
lsof -U (32.12 KB, text/plain)
2005-04-08 14:49 UTC, John Nilsson
no flags Details

Description John Nilsson 2005-04-07 04:03:53 UTC
Leaving the computer on for a few days fills up the xresources with garbage and
then no more clients can connect.

see: http://bugs.gentoo.org/show_bug.cgi?id=72589 for diagnostics
Comment 1 Adam Jackson 2005-04-07 11:48:29 UTC
easy way to reproduce this:

# launch 255 clients in the background to exhaust the connection table
for i in `seq 1 255` ; do
    xlogo &
done

this is pretty unacceptable.  we can bump this to 512 in the short term.  the
right fix is making the connection table dynamic, which will require surgery to
other parts of the code like resource ID generation.
Comment 2 Søren Sandmann Pedersen 2005-04-07 12:54:48 UTC
The protocol requires at least 18 contiguous bits in the resource-id-mask, and
it requires that the top three bits are not used. That leaves us with only 12
bits to distinguish clients, so 4096 clients seems like a hard limit to me
Comment 3 Søren Sandmann Pedersen 2005-04-07 13:02:10 UTC
well, 2048 actually, since 32 - 3 - 18 = 11.
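A rough sketch of that arithmetic, based on the resource-id-base/resource-id-mask fields in the X11 connection setup (the constant names here are illustrative, not the server's actual macros):

#include <stdio.h>

/* Illustrative figures only -- not the server's real macros. */
#define XID_BITS        32   /* an XID is carried as a 32-bit value             */
#define RESERVED_BITS    3   /* the protocol keeps the top three bits unused    */
#define RESOURCE_BITS   18   /* at least 18 contiguous bits in resource-id-mask */

int main(void)
{
    int client_bits = XID_BITS - RESERVED_BITS - RESOURCE_BITS;  /* 32 - 3 - 18 = 11 */
    printf("bits left to distinguish clients: %d\n", client_bits);
    printf("hard upper bound on clients:      %d\n", 1 << client_bits);  /* 2048 */
    return 0;
}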
Comment 4 Adam Jackson 2005-04-07 13:18:46 UTC
only if you insist on encoding the client's connection number in the resource
ID; technically we can just give the client the whole 29-bit space and maintain
a mapping from (res id, client) -> (resource) on the server side.
Comment 5 Søren Sandmann Pedersen 2005-04-07 13:33:19 UTC
XIDs have to be unique across clients. QueryTree is one thing that would break,
but there are many other examples.
Comment 6 Alan Coopersmith 2005-04-07 14:03:18 UTC
In Xsun, we didn't remove the limit, but we did make it more dynamic by adding a
runtime flag to choose between a limit of 128 clients with lots of available
resource IDs per client, and 1024 clients with fewer available resource IDs per
client; in Solaris 9 and later the default is 1024 clients. Only a very small
number of very resource-hungry clients hit the smaller per-client limit.
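For comparison, a sketch of how such a two-mode split might divide the 29 usable XID bits, assuming the same 29-bit space as above; the exact split Xsun used may differ:

#include <stdio.h>

/* Split the 29 usable XID bits (32 minus the 3 reserved top bits) between a
 * client index and per-client resource IDs.  Illustrative only, not Xsun's
 * actual constants. */
void show_split(int max_clients, int client_bits)
{
    int resource_bits = 29 - client_bits;
    printf("%4d clients -> %2d resource bits each -> %ld IDs per client\n",
           max_clients, resource_bits, 1L << resource_bits);
}

int main(void)
{
    show_split(128,  7);   /* 22 bits: ~4 million resource IDs per client */
    show_split(1024, 10);  /* 19 bits: ~512K resource IDs per client      */
    return 0;
}

Both splits still leave at least the 18 contiguous resource-ID bits the protocol requires.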
Comment 7 John Nilsson 2005-04-08 04:53:15 UTC
In my case the problem isn't the 255 limit. On a fresh login I use ~18, and ~30
seems to be enough to fill my needs.

The problem is that just leaving the computer on, not touching it, produces some
kind of leakage. Changing the limit to 512 would only buy me about 3 days of
uptime.
Comment 8 Adam Jackson 2005-04-08 06:24:29 UTC
(In reply to comment #7)
> The problem is that just leaving the computer on, not touching it, produces some
> kind of leakage. Changing the limit to 512 would only buy me about 3 days of
> uptime.

this is almost certainly due to a leaking application rather than a leaking
server.  just to check, next time this happens run 'lsof -U' as root and attach
the output here.
Comment 9 Alan Coopersmith 2005-04-08 08:32:07 UTC
The other thing to keep in mind is that, while it reports as a limit on the number
of clients, it is actually, due to the current implementation, a limit on the file
descriptor numbers. So you can hit it with only one other client open if the X
server somehow has all file descriptors up to 255 in use for something else
(perhaps shared memory segments, an fd leak in one of the system libraries such as
the name resolver, etc.).

I think the cygwin guys recently solved this for their code by changing the client
id from being just the fd to using a hash table to map it to the fd.
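A minimal sketch of the idea Alan describes, using a plain lookup table rather than a hash table and hypothetical names throughout; this is not the Cygwin/X code:

#include <stdio.h>

#define MAX_CLIENTS 256

/* Map a dense client index to its socket fd instead of using the fd itself
 * as the client index, so a process holding high-numbered fds no longer
 * burns client slots. */
int client_fd[MAX_CLIENTS];              /* client index -> fd, -1 = free */

void init_clients(void)
{
    for (int i = 0; i < MAX_CLIENTS; i++)
        client_fd[i] = -1;
}

/* Allocate the lowest free client index for a newly accepted connection. */
int new_client(int fd)
{
    for (int i = 1; i < MAX_CLIENTS; i++) {  /* index 0 reserved for the server */
        if (client_fd[i] == -1) {
            client_fd[i] = fd;
            return i;                        /* this index, not the fd, would feed XID generation */
        }
    }
    return -1;                               /* table full: refuse the connection */
}

void close_client(int index)
{
    client_fd[index] = -1;                   /* slot becomes reusable immediately */
}

int main(void)
{
    init_clients();
    int a = new_client(3);     /* first client socket, e.g. on fd 3        */
    int b = new_client(200);   /* a high-numbered fd no longer wastes slots */
    printf("client indices: %d %d\n", a, b);  /* prints 1 and 2 */
    close_client(a);
    return 0;
}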
Comment 10 John Nilsson 2005-04-08 14:49:31 UTC
Created attachment 2358 [details]
xrestop -b
Comment 11 John Nilsson 2005-04-08 14:49:59 UTC
Created attachment 2359 [details]
lsof -U
Comment 12 John Nilsson 2005-05-15 05:22:57 UTC
Someone hinted that this might be due to some faulty TrueType fonts. Is that
possible?
Comment 13 John Skopis 2005-09-08 22:21:35 UTC
I have no scientific evidence of this, but I am convinced that xscreensaver,
possibly combined with electricsheep, is the cause of this bug. After disabling
xscreensaver (killall -9 xscreensaver) upon starting X, I no longer experience
these "zombie clients".

My box has only been up for three days. However, that is about how long it took
to reach the 255 client limit previously. I will update this bug report again in
a few days if I can confirm that xscreensaver is indeed causing the problem.

Please let me know if there is any other information that may be helpful.
Comment 14 John Nilsson 2005-09-09 01:01:25 UTC
At the time that I reported this bug, I believe I was also using electricsheep.
So it is plausible.
Comment 15 John Skopis 2005-09-09 16:01:28 UTC
Well, I think we may have a test case here. I turned xscreensaver back on and
let my box sit for ~8 hours. Now there are ~80 zombie clients. Before, there
were few or no zombie clients. The screensaver I am using is electricsheep
2.6.2 for xscreensaver 4.20.

Can anyone else confirm this? 

I am going to try switching the screensaver from electricsheep to something less
intense to see whether it is X or electricsheep that is causing the problem.
Comment 16 John Skopis 2005-09-11 08:55:48 UTC
I have changed my screensaver to one that is installed by default, and I am no
longer leaking sessions. I am about 99% sure that this is a problem with
electricsheep and has nothing to do with X.

If anyone else can confirm this, maybe this bug should be resolved?
Comment 17 Seo Sanghyeon 2005-09-13 01:22:51 UTC
Debian bug #325689 confirms that electricsheep is buggy.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=325689
Comment 18 Adam Jackson 2005-10-03 18:38:25 UTC
electricsheep bug, not ours.

