We should look over the protocols and ensure that all int arguments have well-defined behavior when below zero, changing them to uint otherwise. We should also ensure that uint arguments are never required to take values below zero.
The uint vs int args in the protocol are all carefully chosen. The choice of uint vs int is not about which values the argument may take, but about what kinds of operations we're going to do on it. Adding and comparing int and uint values results in subtle overflow bugs. For example, you'll often write something like x2 - x1 < width, and if width is unsigned while x1 and x2 are signed and x2 - x1 < 0, the x2 - x1 < width comparison will fail. In turn you often compare width to stride, so taking all this to its natural conclusion, width and stride should be signed, even though they'll never take negative values. The fact that a variable may never be negative is no different from it always being in the 10 - 20 range, and you wouldn't pick a different type for that either. As a rule of thumb, we only use uint for two things: bitfields and enums.
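Here's a minimal C sketch of the comparison pitfall described above; the variable names (x1, x2, width) follow the example in the comment and aren't taken from any actual protocol code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int32_t x1 = 100, x2 = 40;   /* x2 - x1 is -60 */
        uint32_t width_u = 50;       /* width declared unsigned */
        int32_t width_s = 50;        /* width declared signed */

        /* Usual arithmetic conversions turn the signed difference into an
         * unsigned value, so -60 becomes a huge positive number and the
         * "inside" test silently fails. */
        if (x2 - x1 < width_u)
                printf("unsigned width: inside\n");
        else
                printf("unsigned width: outside (surprise!)\n");

        /* With a signed width the comparison behaves as intended. */
        if (x2 - x1 < width_s)
                printf("signed width: inside\n");

        return 0;
}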
I suggest we get rid of as much of uint as possible then, replacing it with flags, enum, and time types.