| Summary: | [task] Protocol: Integer signedness | | |
| --- | --- | --- | --- |
| Product: | Wayland | Reporter: | John Kåre Alsaker <john.kare.alsaker> |
| Component: | wayland | Assignee: | Wayland bug list <wayland-bugs> |
| Status: | RESOLVED NOTABUG | QA Contact: | |
| Severity: | normal | | |
| Priority: | medium | | |
| Version: | unspecified | | |
| Hardware: | Other | | |
| OS: | All | | |
| Whiteboard: | | | |
| i915 platform: | | i915 features: | |
| Bug Depends on: | | | |
| Bug Blocks: | 48976 | | |
Description
John Kåre Alsaker
2012-10-02 12:53:50 UTC
The `uint` vs `int` args in the protocol are all carefully chosen. The choice of `uint` vs `int` is not about which values the argument may take, but more a consideration of what kind of operations we're going to do on it. Mixing `int` and `uint` values in additions and comparisons results in subtle overflow bugs. For example, you'll often be doing something like `x2 - x1 < width`, and if `width` is unsigned while `x1` and `x2` are signed and `x2 - x1 < 0`, the comparison evaluates to false, because the negative difference is converted to a huge unsigned value. In turn, you often compare `width` to `stride`, so taking all this to its natural conclusion, `width` and `stride` should be signed, even though they'll never take negative values. The fact that a variable may never be negative is no different from it always being in the 10-20 range, and you wouldn't pick a different type for that either. As a rule of thumb, we only use `uint` for two things: bitfields and enums.

I suggest we then get rid of as much `uint` as possible, replacing it with flags, enum, and time types.
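A minimal C sketch of the comparison pitfall described above; the names `width`, `x1`, and `x2` come from the comment, while the concrete values and the program around them are illustrative:

```c
#include <stdio.h>

int main(void)
{
	unsigned int width = 100; /* "never negative", so declared unsigned */
	int x1 = 50, x2 = 10;     /* x2 - x1 == -40, mathematically < 100 */

	/* The usual arithmetic conversions promote the signed -40 to
	 * unsigned (UINT_MAX - 39), so the test is false. */
	if (x2 - x1 < width)
		printf("in range\n");      /* what you would expect */
	else
		printf("out of range\n");  /* what actually prints */

	/* Declaring width as signed int makes the comparison behave. */
	int swidth = 100;
	if (x2 - x1 < swidth)
		printf("in range with signed width\n"); /* prints */

	return 0;
}
```

Compilers can flag this kind of mixed comparison (e.g. gcc/clang's -Wsign-compare), but that warning is easy to silence with a cast, which is why choosing signed types for arithmetic quantities in the protocol avoids the problem at the source.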