| Summary: | systemd-nspawn --network-bridge breaks networking in container's host | | |
|---|---|---|---|
| Product: | systemd | Reporter: | Ed Tomlinson <edt> |
| Component: | general | Assignee: | systemd-bugs |
| Status: | RESOLVED NOTOURBUG | QA Contact: | systemd-bugs |
| Severity: | normal | | |
| Priority: | medium | CC: | |
| Version: | unspecified | | |
| Hardware: | Other | | |
| OS: | All | | |
| See Also: | https://bugs.freedesktop.org/show_bug.cgi?id=85485 | | |
Whiteboard: | |||
i915 platform: | i915 features: | ||
| Attachments: | network setup and commands run to start kvm & nspawn | | |
Description
Ed Tomlinson 2014-10-26 00:30:46 UTC
Created attachment 108422: network setup and commands run to start kvm & nspawn
Note this also occurs if I use --network-veth, or if I have kvm create a second interface and pass it to the container with --network-interface=eth0.

Hi Ed,

I have a similar setup and made the following observation: after starting the container instance, the host0 interface has two IPs assigned to it:

1) a 169.254.* link-local address
2) the IP from my dhcp server, obtained over the bridge interface

After deleting the 169.254.* address from the interface (ip addr del ...), the container can reach the network and is reachable as well. If that works for you, maybe you can run the following test: download e.g. the arch iso (88M) several times from a mirror via wget and check the checksum. In my setup I get corrupted files :-(

BTW: I see both effects when running a container directly on the physical box as well as when running in a KVM instance.

Best regards, Leo

Leo,

The 169.x.x.x and dhcp addresses get assigned due to the stuff in /usr/lib/systemd/network, which I have disabled in the container and the kvm - I want static assignments.

There are three systems involved:

    physical grover <-> kvm host <-> nspawn dev

where grover runs the kvm called host, and the kvm runs the nspawned container dev. The problem with non-root users (using ssh) occurs between users on grover and users in the kvm when the nspawned container is active using network-bridge, network-veth or network-interface (I did not test just private-network). It does not matter whether the network between host and dev is configured or not. Just starting dev breaks the network between grover and host, which is NOT nice at all. Before it gets asked: I have tried this with and without firewalls enabled.

Ed

btw, in comment 2 it should have read --network-interface=eth1. When using network-interface I configure eth0 in the kvm host, pass eth1 to the nspawn dev and configure it there. When dev is active I see the problem.

Another observation. If I create three interfaces when starting kvm, e.g.

    -netdev bridge,id=hn0 -device virtio-net-pci,netdev=hn0,id=nic0 \
    -netdev bridge,id=hn1 -device virtio-net-pci,netdev=hn1,id=nic1 \
    -netdev bridge,id=hn2 -device virtio-net-pci,netdev=hn2,id=nic2 \

and pass eth1 to nspawn dev and eth2 to nspawn prd, and configure all interfaces on the same network, then communication (as root) is possible between prd, host & grover, or between dev, host & grover, but not between dev & prd. I think in all cases I am seeing side effects of network namespaces. In any case it makes isolation of the interfaces/networks used by dev & prd almost useless. I realize that nspawn is not a security solution and that its isolation very probably can be easily hacked. However, it would be nice to be able to partition the networks - it makes the setup of the programs running in them simpler.

Ed

If I use the following commands in the kvm (host) instance:

    ip netns add dev
    ip link add veth0 type veth peer name host0
    ip link set dev veth0 master br1
    ip link set host0 netns dev
    ip netns exec dev systemd-nspawn --link-journal=guest -bqD /jail/dev &
    ip netns exec dev ip link set lo up
    ip netns exec dev ip addr add 2001:4830:1100:xxxx::a/64 dev host0
    ip netns exec dev ip -6 route add default via 2001:4830:1100:xxxx::2
    ip netns exec dev ip link set host0 up
    ip link set veth0 up

then the network acts as expected, and ssh connections to the address assigned to br1 continue to work for both root and user clients. This is what I understand --network-bridge=br1 should be doing. Also note that when the netns is set up by systemd-nspawn itself, it does not show up in ip netns list.
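(A minimal sketch of how a manual setup like the one above can be sanity-checked; it assumes the interface names and addresses from the commands above, with xxxx standing in for the real prefix.)

    # The address and default route should be visible inside the namespace
    ip netns exec dev ip -6 addr show dev host0
    ip netns exec dev ip -6 route show

    # The gateway should answer from inside the namespace
    ip netns exec dev ping6 -c 3 2001:4830:1100:xxxx::2

    # and, crucially for this report, ssh to the address assigned to br1 should
    # keep working from grover for both root and non-root users while dev is up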
I'll create a new parameterized service using the above setup for now, but this really should just work.

Ed

And when I create the service, the problem reoccurs...

    [Unit]
    Description=Container %i
    After=network-online.target
    Wants=network-online.target

    [Service]
    Environment="host=veth0" "peer=vhost0" "bridge=br1" "addr=2001:4830:1100:xxxx::a/64" "target=2001:4830:1100:xxxx::2"
    EnvironmentFile=/etc/nspawn.d/%i
    ExecStartPre=/usr/bin/ip netns add %i
    ExecStartPre=/usr/bin/ip link add $host type veth peer name $peer
    ExecStartPre=/usr/bin/ip link set %i $host master $bridge
    ExecStartPre=/usr/bin/ip link set $peer netns %i
    ExecStartPre=/usr/bin/ip netns exec %i ip link set lo up
    ExecStartPre=/usr/bin/ip netns exec %i ip addr add $addr dev $peer
    ExecStartPre=/usr/bin/ip netns exec %i ip -6 route add default via $target
    ExecStartPre=/usr/bin/ip netns exec %i ip link set $peer up
    ExecStartPre=/usr/bin/ip link set $host up
    ExecStartPre=/usr/bin/ip netns exec %i nft -f /peer/%i/etc/nftables.conf
    ExecStart=/usr/bin/ip netns exec %i /usr/bin/systemd-nspawn --quiet --boot --keep-unit --link-journal=guest --directory=/peer/%i
    ExecStop=/usr/bin/machinectl poweroff %i
    ExecStopPost=/usr/bin/ip netns del %i
    KillMode=mixed
    Type=notify
    RestartForceExitStatus=133
    SuccessExitStatus=133

    [Install]
    WantedBy=multi-user.target

PS. I hate problems like this one. <grin>

Okay, something makes sense (maybe; I've thought so before too). It looks like nftables has problems with namespaces. The tables list just fine in both namespaces (e.g. nft list shows what is set in the namespace), but access is lost when you load nftables in the nspawned instance - it's not getting something right.

Closing - this is not a systemd bug.

Ed
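(A minimal sketch of how the nftables behaviour described above can be compared from both sides; the table name "inet filter" is only an assumption, substitute whatever the container's /etc/nftables.conf actually defines.)

    # On the kvm host: rules as seen from the container's network namespace
    ip netns exec dev nft list tables
    ip netns exec dev nft list table inet filter   # assumed table name

    # From a shell inside the booted container: listing works here too, but
    # loading the ruleset from inside is the step after which access is lost
    nft list tables
    nft -f /etc/nftables.conf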