compton is a compositing manager that uses the Damage extension to detect when the screen needs to be repainted. It creates a Damage object on each window with the report level NonEmpty; every time a damage event is received, it clears the damage with DamageSubtract and repaints the screen. Sometimes the X server seems to buffer the damage event, which prevents any further events from being generated and causes the screen to freeze until some other event flushes the buffer. Using the RawRectangles report level seems to avoid this problem, but it causes excessive damage reports to be sent, sometimes resulting in lag, because apparently WriteToClient can be expensive with some GPU drivers.
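For reference, the pattern described above looks roughly like this (a minimal sketch assuming libXdamage, not compton's actual code; repaint_screen() is a hypothetical stand-in for the repaint path):

```c
#include <X11/Xlib.h>
#include <X11/extensions/Xdamage.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    int damage_event, damage_error;
    if (!dpy || !XDamageQueryExtension(dpy, &damage_event, &damage_error))
        return 1;

    /* Stand-in for one of the windows compton tracks. With NonEmpty,
     * the server sends one DamageNotify when the damage region goes
     * from empty to non-empty, and stays silent until it is cleared. */
    Window win = DefaultRootWindow(dpy);
    XDamageCreate(dpy, win, XDamageReportNonEmpty);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == damage_event + XDamageNotify) {
            XDamageNotifyEvent *dev = (XDamageNotifyEvent *)&ev;
            /* Clear the damage region so the next change generates a
             * new event, then repaint. */
            XDamageSubtract(dpy, dev->damage, None, None);
            /* repaint_screen(); -- hypothetical repaint hook */
        }
    }
}
```

If no DamageNotify ever arrives after the subtract, nothing wakes the loop up again, which is exactly the freeze described above.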
After some investigation, the problem might be in the client library. Packet capture shows the damage event is indeed sent by the server, but never processed by the client. So I added assert(XEventsQueued(ps->dpy, QueuedAlready) == 0); before the call to select() in compton. Freezes still happen, but the assert never fails, which seems to indicate a bug in Xlib.
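The check looked roughly like this (a sketch of the idea rather than compton's exact code; ps->dpy is compton's Display handle, shortened to dpy here):

```c
#include <assert.h>
#include <sys/select.h>
#include <X11/Xlib.h>

static void wait_for_x_events(Display *dpy) {
    int fd = ConnectionNumber(dpy);
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    /* If Xlib had already pulled the missing event into its own queue,
     * blocking in select() here would explain the freeze and this
     * assert would fire. It never does, so the event must be stuck
     * somewhere below Xlib's queue. */
    assert(XEventsQueued(dpy, QueuedAlready) == 0);

    select(fd + 1, &rfds, NULL, NULL, NULL);
}
```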
In the packet capture, I found that the X server assigned the same sequence number to a reply and an event. I don't think this is supposed to happen.
(In reply to Yuxuan Shui from comment #2)
> In the packet capture, I found that the X server assigned the same sequence
> number to a reply and an event. I don't think this is supposed to happen.

Sorry, this is irrelevant. It looks like, in xcb_in_read, an event is read, but then xcb_poll_for_event returns NULL in Xlib's poll_for_event.
(In reply to Yuxuan Shui from comment #3)
> It looks like, in xcb_in_read, an event is read, but then
> xcb_poll_for_event returns NULL in Xlib's poll_for_event.

This is not true. The event is received in xcb_poll_for_reply64 (which calls xcb_in_read). At that point in Xlib's poll_for_response, poll_for_event has already been called and won't be tried again, resulting in the event being left in xcb's queue.
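The sequence of events is easier to see in a condensed paraphrase of that control flow. The sketch below is not Xlib's verbatim source: poll_for_event is an Xlib internal declared here as a stub, and pending_request_sequence() is a hypothetical stand-in for however Xlib tracks its oldest unanswered request.

```c
#include <X11/Xlib.h>
#include <X11/Xlib-xcb.h> /* XGetXCBConnection */
#include <xcb/xcb.h>

/* Stand-ins for Xlib internals, declared so the sketch compiles: */
static void *poll_for_event(Display *dpy);              /* drains xcb's event queue */
static uint64_t pending_request_sequence(Display *dpy); /* hypothetical */

static void *poll_for_response(Display *dpy) {
    void *response;
    xcb_generic_error_t *error = NULL;

    /* Step 1: check xcb's event queue. Suppose it is empty right now,
     * so this returns NULL. */
    if ((response = poll_for_event(dpy)))
        return response;

    /* Step 2: poll for the oldest pending reply. This ends up in
     * xcb_in_read, which reads whatever is on the socket -- possibly
     * an *event*, which xcb queues internally. */
    if (!xcb_poll_for_reply64(XGetXCBConnection(dpy),
                              pending_request_sequence(dpy),
                              &response, &error))
        /* Step 3: no reply was ready, so we return NULL without going
         * back to step 1. The event read in step 2 is now stranded in
         * xcb's queue, and the caller blocks in select() on a socket
         * that has nothing left to deliver. */
        return NULL;

    return response ? response : error;
}
```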
Moved to https://gitlab.freedesktop.org/xorg/lib/libx11/issues/79