Bug 99800

Summary: Some way to inject automated libinput events?
Product: Wayland
Component: libinput
Version: unspecified
Hardware: All
OS: All
Status: RESOLVED WONTFIX
Severity: enhancement
Priority: medium
Assignee: Wayland bug list <wayland-bugs>
Reporter: rhendric <ryan.hendrickson>
CC: peter.hutterer

Description rhendric 2017-02-13 22:02:36 UTC
I can think of three broad use cases for having a way to generate fake input events for applications: productivity automation, GUI testing, and mapping input devices that libinput doesn't/shouldn't support, like joypads. For some forms of input, like keyboard key presses, the evdev interface is sufficient for this; automation tools can create a new evdev device and provide key presses and button clicks. But for input events that libinput creates out of lower-level evdev data--I'm thinking of multitouch gestures and scrolling in particular--simulating these events from evdev data is not only a major pain, but also relies on the internal details of how libinput decides to interpret pressure and motion this week.
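
For illustration, a minimal (untested) sketch of that evdev route -- creating a virtual device over /dev/uinput and injecting a single key press. The device name and the choice of KEY_A are arbitrary placeholders, and error handling is mostly omitted:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/uinput.h>

    static void emit(int fd, int type, int code, int value)
    {
        struct input_event ev;
        memset(&ev, 0, sizeof(ev));
        ev.type = type;
        ev.code = code;
        ev.value = value;
        write(fd, &ev, sizeof(ev));
    }

    int main(void)
    {
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
        if (fd < 0)
            return 1;

        /* Declare what the virtual device is able to send. */
        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, KEY_A);

        struct uinput_setup setup;
        memset(&setup, 0, sizeof(setup));
        setup.id.bustype = BUS_VIRTUAL;
        strcpy(setup.name, "example-injector");
        ioctl(fd, UI_DEV_SETUP, &setup);
        ioctl(fd, UI_DEV_CREATE);

        sleep(1); /* give listeners time to pick up the new device */

        emit(fd, EV_KEY, KEY_A, 1);      /* press */
        emit(fd, EV_SYN, SYN_REPORT, 0);
        emit(fd, EV_KEY, KEY_A, 0);      /* release */
        emit(fd, EV_SYN, SYN_REPORT, 0);

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }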

I don't know enough about the Wayland architecture to have an opinion on how this should work, other than that there should be some way for programs, possibly running with elevated privileges, to generate events that libinput can forward verbatim to Wayland (or any other libinput host), especially events like smooth scrolling and gestures.

(Background: I mentioned three use cases above, but the third is the one that really brought me here. I like using a gamepad as an input device in X, and while I don't expect libinput to handle gamepads in general (given the explicitly low-configuration design that you guys seem to favor), I do want to be able to write a daemon that listens for joystick input and generates libinput events. And particularly--this is something that xf86-input-joystick doesn't do--I'd love for one of the analog thumbsticks to generate smooth scrolling events like two-finger swipes do, instead of discrete events like old mouse wheels.)
Comment 1 rhendric 2017-02-13 22:08:27 UTC
Oh, and I would be happy to attempt a patch for this if you agree it's a feature worth considering and can offer a recommendation for how the communication should work--drop a FIFO in /var/run somewhere, use D-Bus, whatever else.
Comment 2 Peter Hutterer 2017-02-13 23:29:38 UTC
libinput already has two backends (udev- and path-based), and adding another backend is relatively trivial. But the design of libinput is not well suited to running *additional* backends alongside an existing one. So while it would be easy to write a backend that reads events from e.g. D-Bus and forwards them as libinput events, it wouldn't be trivial to have this in addition to the udev/path backends. That causes another problem: it's the caller that decides which backend to use (compositors use the udev one, Xorg uses the path one), so adding a new independent backend wouldn't get you very far.
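
For reference, a rough sketch of how a caller picks one of those two backends (error handling omitted; the event-device path at the end is just an example):

    #include <errno.h>
    #include <fcntl.h>
    #include <stddef.h>
    #include <unistd.h>
    #include <libinput.h>
    #include <libudev.h>

    /* libinput asks the caller to open/close device nodes on its behalf. */
    static int open_restricted(const char *path, int flags, void *user_data)
    {
        int fd = open(path, flags);
        return fd < 0 ? -errno : fd;
    }

    static void close_restricted(int fd, void *user_data)
    {
        close(fd);
    }

    static const struct libinput_interface iface = {
        .open_restricted = open_restricted,
        .close_restricted = close_restricted,
    };

    int main(void)
    {
        /* Compositor-style: udev backend, all devices on a seat. */
        struct udev *udev = udev_new();
        struct libinput *li_udev = libinput_udev_create_context(&iface, NULL, udev);
        libinput_udev_assign_seat(li_udev, "seat0");

        /* Xorg-style: path backend, devices added one by one. */
        struct libinput *li_path = libinput_path_create_context(&iface, NULL);
        libinput_path_add_device(li_path, "/dev/input/event0");

        libinput_unref(li_udev);
        libinput_unref(li_path);
        udev_unref(udev);
        return 0;
    }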

To get this working, you'd need libinput to handle your event source as a new device, but that again doesn't necessarily play well with other bits (e.g. no udev_device would exist for that).

So much for the technical bits. Doable but messy and the non-messy approach wouldn't help much.

But really, libinput strikes me as the wrong layer to implement this anyway. libinput's purpose is to take diverse hardware and process it into a known set of events that are directly usable. In the automation/GUI testing case you don't need this, and in the joystick case you have something else doing that. So libinput would just be a router for existing events -- you don't need libinput for that [1].

For GUI testing at least I can tell you that libinput events aren't good enough anyway. Relative pointer events are notoriously state-dependent, making proper GUI testing almost impossible. Same likely goes for automation - if evdev is too low-level/unpredictable then libinput events suffer from the same issue here.

So the best way to approach this would be to write some sort of external daemon (D-Bus, most likely) that makes the events available and provides enough information for a caller to hook onto the right bits when needed; e.g. a GUI testing interface should only be listened to when testing. Get that working, then get the compositors to listen to that interface and DTRT with the events. Yes, that's a lot of work, but with a long-term view it's the right technical solution.
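
As a very rough sketch of what such a daemon's surface could look like -- everything here is hypothetical: the bus name org.example.InputInjector, the object path, and the ScrollAxis signal are made up for illustration, using sd-bus:

    #include <systemd/sd-bus.h>

    int main(void)
    {
        sd_bus *bus = NULL;

        if (sd_bus_open_user(&bus) < 0)
            return 1;
        if (sd_bus_request_name(bus, "org.example.InputInjector", 0) < 0)
            return 1;

        /* Announce one smooth-scroll step; a compositor that opts in would
         * subscribe to this signal and translate it into its own pointer
         * events, applying whatever policy it sees fit. */
        sd_bus_emit_signal(bus,
                           "/org/example/InputInjector",
                           "org.example.InputInjector",
                           "ScrollAxis",
                           "dd",    /* horizontal, vertical (arbitrary units) */
                           0.0, 1.5);

        sd_bus_flush(bus);
        sd_bus_unref(bus);
        return 0;
    }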

[1] yeah, I know, getting it into libinput would make it available in all users of libinput, but that's not a good technical justification.
Comment 3 rhendric 2017-02-14 01:34:47 UTC
So libinput is a hardware abstraction layer, yes, and an automation source isn't hardware. But as an abstraction layer, libinput owns those abstractions. libinput defines what it means to be a LIBINPUT_EVENT_POINTER_AXIS, and why that's not the same thing as an evdev REL_WHEEL or REL_HWHEEL. You might be saying that you want libinput's interface to be effectively promoted to a standard, so that non-libinput producers like, say, libautomation, can produce the same events and compositors can reasonably be expected to combine the two. But it seems to me like a bad technical solution to have a producer of LIBINPUT_EVENT_POINTER_AXIS events and a producer of LIBAUTOMATION_EVENT_POINTER_AXIS events and for mutual clients to be forever anxious about or adapting to divergences between them--am I misinterpreting your proposal?

If instead, you permit libinput's purpose to be taking diverse *input sources* and processing them into a known set of events, it looks like the right place to inject additional LIBINPUT_EVENT_POINTER_AXISes--yes, because all users would get them, but also because there would be a single source of libinput events from the perspective of the compositor, which keeps the burden of writing compositors low, which I thought was one of the goals.

Unless, of course, you also want to make the argument that this functionality isn't part of the core set of what users expect from a graphical desktop environment, and that's why you want me to convince each individual compositor to include it instead of bundling it with libinput. If that's your position, then I don't know--sure, it isn't necessary, but with all the xdotools and ldtps and whatever other automation tools and frameworks exist for X out there, it seems like people want to use them, which means that if Wayland compositors are going to take over from X, they're going to want to have a way to do that sort of thing too... which they can implement themselves, or get from libinput.

---

Since you didn't insta-WONTFIX this (thanks!), I'll ask some technical questions too, just in case the above sways you a little. :-)

Thanks for your comments about the backends; I agree, adding a new backend doesn't sound like it would help. It's not quite as clear to me why creating a new device would be messy. As to the one issue you pointed out, it looks like libinput_device_get_udev_device is already permitted to return NULL for some devices, so are there other reasons why creating a non-udev-backed automation device wouldn't play well with things?

About libinput events not being good enough: why would GUI testing/automation tools have to use relative pointer events? Isn't LIBINPUT_EVENT_POINTER_MOTION_ABSOLUTE available, or does that not work for some reason? (For my personal needs, evdev is too low-level for the very specific reason that I want to generate fractional POINTER_AXIS events, not integral REL_WHEEL events, and I definitely don't want to do it by emitting a string of ABS_MT_POSITION_Y and ABS_PRESSURE and whatever else I'd have to provide to spoof libinput into just giving me the POINTER_AXIS I want. Otherwise, it's fine.)
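
To make the fractional part concrete, this is roughly how a libinput consumer reads a scroll event's value -- a compilable fragment rather than a full program, meant to be called from an event loop:

    #include <libinput.h>

    /* Handle one LIBINPUT_EVENT_POINTER_AXIS event: the scroll value is a
     * double, not an integer detent count like evdev's REL_WHEEL. */
    static void handle_axis(struct libinput_event *event)
    {
        struct libinput_event_pointer *p = libinput_event_get_pointer_event(event);

        if (libinput_event_pointer_has_axis(p, LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL)) {
            double v = libinput_event_pointer_get_axis_value(p,
                           LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL);
            /* e.g. v == 1.5 for a partial two-finger scroll step */
            (void)v;
        }
    }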
Comment 4 Peter Hutterer 2017-02-14 09:36:16 UTC
You're not misinterpreting it, that's pretty much the suggestion. But one of the things we disagree on is that I think it's necessary to have different (e.g. AXIS) events, since anything from an automated source is slightly different from the hardware. My argument is that the decision whether the two are the same or not should be made in the compositor, where we have more contextual information about what to handle and where.

libinput's API is stable and well-documented (and not that big if you leave out the configuration bits), so even emulating the same behaviour from some other library would be trivial. But above all, pushing this to the compositor really makes it possible to be a lot more flexible in what data to accept. libinput is severely restricted there, not least because it doesn't have any config files to instruct it what to use (so you'd be relying on the compositor there again).

From personal experience, it certainly feels like every shortcut I've ever taken with input drivers has come back and bitten me in an inconvenient place. That's one reason I'm very cautious.

---

Technical bits: the udev device may be NULL, correct, but I think callers have started relying on the extra information. E.g. libinput won't tell you whether a device is a mouse or a touchpad, but you can get that information from udev. Not having a udev device is not a killer, but it affects how your device is handled.
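
A small sketch of the kind of reliance meant here -- a caller classifying a device via its udev properties, which simply isn't possible when the udev device is NULL (this assumes the usual ID_INPUT_* properties set by udev's input_id builtin):

    #include <stdbool.h>
    #include <libinput.h>
    #include <libudev.h>

    /* libinput itself won't say "this is a touchpad"; callers peek at the
     * udev properties of the backing device instead. */
    static bool is_touchpad(struct libinput_device *dev)
    {
        struct udev_device *udev_dev = libinput_device_get_udev_device(dev);
        if (!udev_dev)
            return false;  /* no udev device: this classification is unavailable */

        const char *prop = udev_device_get_property_value(udev_dev, "ID_INPUT_TOUCHPAD");
        bool result = prop && prop[0] == '1';
        udev_device_unref(udev_dev);
        return result;
    }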

Absolute events won't work either, because you cannot rely on windows being in the correct place at any given time. They work better than relative events, but there's no guarantee, and any change to the compositor may break things. Again, an argument for doing this in the compositor, where you can influence things.

Regarding the specific issue, look at Bug 92772; that may help.
Comment 5 rhendric 2017-02-15 18:56:34 UTC
Hm, okay. Your caution seems justified, even if I still think that there's a use case for generating events that do look the same to the compositor as the events that libinput/hardware would generate. But I'll focus my efforts on implementing a hook in a compositor instead of at libinput and see whether the concerns you're highlighting come into sharper focus for me.

Bug 92772 doesn't seem to be relevant, because that's about changing how libinput generates events from hardware that it already recognizes, whereas libinput explicitly ignores joysticks and gamepads (and I don't think I would advocate changing that, given that I would want a joystick-to-input layer to be very dynamic and configurable--it makes more sense to me for an independent process to handle such nonstandard input hardware, a process that users can start and stop and reconfigure and replace without disrupting their compositor session).
Comment 6 Peter Hutterer 2017-02-15 22:30:05 UTC
I linked to Bug 92772 because it may be useful if you go the uinput route of converting events to others. That gives you the ability to send more fine-grained REL_WHEEL events. Sorry, should've been more precise there.
Comment 7 Peter Hutterer 2017-02-17 06:03:24 UTC
fwiw, closing this for now until we need to resurrect this discussion.
