It seems that xfwp is not IPv6-aware - a quick search with GNU grep shows no traces of IPv6-specific code, only hard-coded AF_INET usage:

-- snip --
% grep -r INET .
./io.c:        assert(temp_sockaddr_in.sin_family == AF_INET);
./transport.c:     socket(AF_INET, SOCK_STREAM, 0)) < 0)
./transport.c:  rem_sockaddr_in.sin_family = AF_INET;
./transport.c:  if ((*server_connect_fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
-- snip --

This issue renders xfwp unusable in IPv6-only environments... ;-(
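For illustration only (this is not the actual xfwp code; the function and variable names below are made up): making a hard-coded AF_INET socket()/connect() sequence like the one in transport.c address-family-agnostic would typically mean replacing it with a getaddrinfo() loop, which returns both IPv4 and IPv6 addresses when available:

-- snip --
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

/* Sketch of a protocol-agnostic connect: try every address that
 * getaddrinfo() returns (AF_INET and AF_INET6) until one works. */
static int
connect_to_server(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }

    freeaddrinfo(res);
    return fd;
}
-- snip --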
Sorry - for some reason xfwp got skipped in the IPv6 conversion work I did. I don't remember any reason for that, though I don't know if anyone actually uses it anymore given the prevalence of ssh and its X tunnelling features.
(In reply to comment #1)
> Sorry - for some reason xfwp got skipped in the IPv6 conversion work I did. I
> don't remember any reason for that, though I don't know if anyone actually uses
> it anymore given the prevalence of ssh and its X tunnelling features.

xfwp still has several advantages, including the fact that it doesn't suck - the ssh forwarding always adds a very high latency penalty on the clients (similar to a 10base5 line running over tons of hubs), and running multiple clients over an ssh connection makes it even worse. And xfwp can sit on a NAT and/or firewall for multiple users, whereas ssh is just per-user with all the related side effects (e.g. bound to an ssh session).
(In reply to comment #1)
> Sorry - for some reason xfwp got skipped in the IPv6 conversion work I did.

See http://www.mail-archive.com/devel@xfree86.org/msg02160.html
(In reply to comment #2)
> xfwp still has several advantages, including the fact that it doesn't suck - the
> ssh forwarding always adds a very high latency penalty on the clients (similar
> to a 10base5 line running over tons of hubs)

Not really. I ran a quick test of XSync latency over various transports on my machine (2xP3 700MHz) and got the following numbers:

Direct connection, PF_UNIX transport:  0.125ms
Direct connection, PF_INET transport:  0.302ms
lbxproxy:                              0.411ms
xfwp:                                  0.633ms
ssh2 forwarding, no compression:       0.900ms
ssh2 forwarding, compression:          1.044ms

(For reference, 'ping 127.0.0.1' gave a latency of 0.135ms.)

This is pretty close to best-case latency, since we're not touching the network at all. Once the bits are on the network they travel at the same speed, and it's just a matter of how many bits you move... 0.3ms doesn't really count as a very high latency penalty. Yes, there's processing delay for adding compression and encryption, but given a modern CPU this is balanced by the lowered packet count.

> and running multiple clients over an
> ssh connection makes it even worse.

Again, I'm not sure how this could be true. Since ssh multiplexes all X connections over a single stream, they share the benefits of compression and lowered packet count, as well as Nagling up to the channel bandwidth faster.

None of which is to say xfwp shouldn't have IPv6 support, of course.
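For context, a quick-and-dirty XSync round-trip timer along the lines of the test described above might look like the sketch below. This is not the actual test program used; the iteration count is arbitrary and the display is whatever $DISPLAY points at (direct, via xfwp, or via ssh forwarding):

-- snip --
#include <X11/Xlib.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* uses $DISPLAY */
    struct timeval start, end;
    const int iterations = 1000;
    double total_ms;
    int i;

    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    gettimeofday(&start, NULL);
    for (i = 0; i < iterations; i++)
        XSync(dpy, False);               /* forces a full round trip */
    gettimeofday(&end, NULL);

    total_ms = (end.tv_sec - start.tv_sec) * 1000.0 +
               (end.tv_usec - start.tv_usec) / 1000.0;
    printf("XSync round trip: %.3f ms\n", total_ms / iterations);

    XCloseDisplay(dpy);
    return 0;
}
-- snip --

Build with something like 'cc -o xsynctime xsynctime.c -lX11' and run it once per transport under test.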
Sorry about the phenomenal bug spam, guys. Adding xorg-team@ to the QA contact so bugs don't get lost in future.
Mass closure: This bug has been untouched for more than six years, and is not obviously still valid. Please reopen this bug or file a new report if you continue to experience issues with current releases.