Testing with an SDL2 game, e.g. neverputt / neverball:
If Xwayland is running, SDL2 uses the X11 backend by default. Depending on the compositor, this results in stuttering and/or tearing.
With the SDL2 Wayland backend forced via SDL_VIDEODRIVER=wayland, things are smooth: there is no stuttering or tearing.
I guess this is mostly due to Xwayland using the generic Present extension support, which just copies from the back buffer to the front buffer. There might be more issues though.
This issue is pretty bad for the vast majority of games which don't have direct Wayland support yet.
I am not very familiar with the Present extension support on the server side...
Xwayland has a frame callback listener that tells it when the Wayland compositor is ready to handle another surface commit request; maybe the Present extension support should be based on this?
Is it with a compositor or hardware in particular?
I tried with neverball under gnome-shell/wayland on Intel hardware at 1680x1050 and could not see any stuttering or tearing using the x11 SDL backend.
Could it be the fact that in both weston and mutter we still always draw the buffer an extra time via OpenGL, while on X11 the fullscreen client window "bypasses" the compositor, avoiding that composition? That will cause noticeably worse performance. One could try enabling the hard-coded-disabled weston hw plane optimization path to see how much of a difference it makes.
I'm seeing the problem with gnome-shell and weston. It's much more noticeable on this laptop than on my desktop development machine. Both using a Kaveri APU, so maybe it's just less noticeable on more powerful systems.
Bug 99687 indicates that the problem might be noticeable with sway as well. Actually, I'm not sure how any compositor could avoid it, since Xwayland always sends the same single buffer to the compositor and copies new content into it.
Btw. I'm not quite sure, but it might be that Weston sends wl_buffer.release events too early for buffers that were directly scanned out with KMS. One should keep that possibility in mind when investigating and testing.
< mannerov> MrCooper: I had long ago a branch with XWayland Present support. Someone could revive it
< mannerov> I'm not interested in it myself
(In reply to Pekka Paalanen from comment #5)
> Btw. I'm not quite sure, but it might be so that Weston sends
> wl_buffer.release events too early for buffers that were directly scanned
> out with KMS.
Given that Xwayland currently only ever uses a single buffer, are there any wl_buffer.release events involved?
(In reply to Michel Dänzer from comment #7)
> (In reply to Pekka Paalanen from comment #5)
> > Btw. I'm not quite sure, but it might be so that Weston sends
> > wl_buffer.release events too early for buffers that were directly scanned
> > out with KMS.
> Given that Xwayland currently only ever uses a single buffer, are there any
> wl_buffer.release events involved?
Wait, it does that with EGL-based buffers too? Even though fullscreen EGL/GLX X11 clients use double or more buffering?
I thought it only did it with wl_shm buffers. Or, only when not able to scan out from X11 client buffers? Can that ever even happen btw.? I thought it could, but maybe that's just NVIDIA stuff then? Or does it require an X11 compositing manager to actually set up the direct scanout by... fiddling with pixmaps or something?
If that's true, then I don't think there should be any wl_buffer.release events being sent, but if one were to implement double-buffering or more in Xwayland, then there would be, and then you might see this issue.
I meant this comment more as a future note anyway, since fixing things in Xwayland might cause unexpected results; this could be one explanation.
(In reply to Pekka Paalanen from comment #8)
> Wait, it does that with EGL-based buffers too? Even though fullscreen
> EGL/GLX X11 clients use double or more buffering?
Yes. Xwayland always uses a single buffer for the pixmap which contains the storage of a window. The fallback implementation of the Present extension currently used by Xwayland just copies from the back buffer provided by the client to that single buffer.
Hello, I hit this problem as well, when I was working on layered / direct scanout of client buffers in KWin.
I was pointed to this bug report and took it as inspiration for a GSoC project, which was also added to the X.Org GSoC ideas page as "Multi-buffer Present in XWayland".
To my knowledge the project got accepted and I'll start working on the issue soon, so I changed the Assignee to myself.
If you want to talk to me, you can find me most of the time on IRC in the #wayland channel with the nick romangg.
Fix landed in xorg-server 1.20.0.