Bug 21883

Summary: Server crashes on GetScreenInfo by client using old randr protocol
Product: xorg
Reporter: Marien Zwart <marien.zwart>
Component: Server/General
Assignee: Xorg Project Team <xorg-team>
Status: RESOLVED DUPLICATE
QA Contact: Xorg Project Team <xorg-team>
Severity: normal
Priority: medium
Version: unspecified
Hardware: Other
OS: All

Description Marien Zwart 2009-05-22 13:51:47 UTC
coincoin161 on #xorg reported that a simple xcb-based test program making a GetScreenInfo request crashed the server, hitting the "RRGetScreenInfo bad extra len" FatalError call in ProcRRGetScreenInfo in randr/rrscreen.c. I believe he was using a 1.6 server, but the problem appears to still exist in master.

His program made an xcb_randr_query_version call claiming client major and minor versions of 0, then called xcb_randr_get_screen_info (on the root window of screen 0). Fixing his program to pass major version 1 and minor version 2 made the crash go away. If I read the code right (after half a dozen embarrassingly incorrect guesses), this is because the calculated extraLen always includes space for refresh rates, but that space is only used if the client claims to support them (the has_rate check in the loop). If the client does not, the comparison of space allocated versus space used below fails and the server aborts (not caring that the buffer was not overrun, just not fully used).

I think a fix here is to set rep.nrateEnts to 0 if has_rate is false, but I have not tested this at all nor checked if the resulting reply is interpreted correctly by randr clients using the old protocol.
Comment 1 Julien Cristau 2009-05-22 17:04:41 UTC

*** This bug has been marked as a duplicate of bug 21861 ***
