A D-Bus client can create an "immortal" D-Bus connection in dbus-daemon. Dbus-daemon will still have the resources allocated for the connection, such as a file descriptor, a unique name, and possibly well-known names and match rules. An immortal connection stays alive in dbus-daemon even after the client owning the other end of the socket has terminated.

To create such an immortal connection, a client can send the first byte of a D-Bus message with the file descriptor of the connection itself attached via fd-passing (SCM_RIGHTS). Dbus-daemon then holds the file descriptors of both ends of the Unix socket. The client can close(2) or exit(3) (but must not shutdown(2)!). Dbus-daemon has no way to detect the close(2), because close() merely decrements the socket's reference counter in the kernel, and another process (dbus-daemon itself) still holds a reference on it, so the socket stays alive. Once the client terminates, nobody will ever finish writing the rest of the D-Bus message after the first byte, so the fd will stay in the DBusMessageLoader forever.

There are a few possible attacks related to immortal D-Bus connections:

1. A malicious program could create plenty of immortal D-Bus connections until the fd limit is reached. The limit could be max_completed_connections, typically 2048 on the system bus or 100000 on the session bus:

  <limit name="max_completed_connections">100000</limit>

There is also a per-process limit ("ulimit -n", typically 65536) and a global limit (/proc/sys/fs/file-max, 800796 for me). The problem here is that it is not possible to kill D-Bus clients until dbus-daemon recovers more file descriptors. When the limit is reached, even after the malicious program is terminated, I cannot get new connections.

This problem is mitigated on the system bus by max_connections_per_user, typically 256 on the system bus or 100000 on the session bus. So it would still need 2048/256 = 8 users to collaborate.
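The mechanism can be reproduced on a bare socketpair, without any real dbus-daemon involved. This is a sketch (Python 3.9+ on Linux assumed): the "client" sends one byte with the fd of its own end attached via SCM_RIGHTS, then closes; the "daemon" side then never sees EOF.

```python
import os
import socket

# Demonstrate the "immortal connection" mechanism on a plain socketpair.
# The client sends the first byte of a message with the fd of its *own*
# end of the socket attached via SCM_RIGHTS; after that, the receiver
# (playing dbus-daemon) holds references to both ends of the socket.
client, daemon = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Client: first byte of a D-Bus message ('l' = little-endian endianness
# marker) plus its own file descriptor.
socket.send_fds(client, [b"l"], [client.fileno()])
client.close()  # the client "exits"; the kernel keeps the socket alive

# Daemon: receives 1 byte plus the passed fd.
data, fds, flags, addr = socket.recv_fds(daemon, 1, 1)
assert data == b"l" and len(fds) == 1

# Because the daemon now holds the client end too (fds[0]), reading again
# does not return EOF -- the connection is "immortal".
daemon.setblocking(False)
try:
    daemon.recv(1)
    got_eof = True
except BlockingIOError:
    got_eof = False  # no EOF: the fd passed via SCM_RIGHTS keeps it open
print(got_eof)  # → False

os.close(fds[0])  # dropping the passed fd is the only way to free it
daemon.close()
```

Note that nothing here is dbus-daemon code; it only shows the kernel-level reference counting that makes the daemon's end of the socket stay open.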
$ dbus-monitor
Failed to open connection to session bus: Failed to determine seats of user "1000": Too many open files

I've also seen an "accept busy loop" taking too much CPU:

accept4(6, 0xbfff9c2c, [16], SOCK_CLOEXEC) = -1 EMFILE (Too many open files)
fcntl64(-1, F_GETFD) = -1 EBADF (Bad file descriptor)
clock_gettime(CLOCK_MONOTONIC, {519, 413430300}) = 0
epoll_wait(3, {{EPOLLIN, {u32=6, u64=578500642725691398}}}, 64, -1) = 1
clock_gettime(CLOCK_MONOTONIC, {519, 414742757}) = 0
accept4(6, 0xbfff9c2c, [16], SOCK_CLOEXEC) = -1 EMFILE (Too many open files)
etc.

This happens because a poll on the listening socket returns POLLIN but accept() fails, and dbus-daemon keeps trying again.

2. A malicious program could request some D-Bus names such as "org.freedesktop.Telepathy.MissionControl5" and then make the connection immortal. MissionControl will not be able to start without restarting the session: killing the malicious program will not release the well-known name.

An immortal connection is not necessarily unusable: a malicious process could create two immortal D-Bus connections by sending the two fds attached to the same byte on one of the two connections. In this way, one of the connections could be used to impersonate MissionControl and send/receive messages, while the other connection stays blocked after sending the first byte.

3. A malicious program could register some match rules, make the connection immortal, and attempt to make dbus-daemon's memory usage balloon with Bug #33606.

4. A user could prevent root from unmounting a partition without killing the system bus. A malicious program can pass an fd open on the mount to dbus-daemon on the system bus through the immortal connection and then exit. The administrators will use "lsof" to find the processes preventing the umount; they will find the dbus-daemon process for the system bus.

5.
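The busy loop in the strace comes from level-triggered epoll re-reporting POLLIN immediately after accept() fails with EMFILE. A hedged sketch of one common way a daemon can break such a loop (all names here are invented for illustration; this is not dbus-daemon's actual code): stop watching the listening socket on fd exhaustion and re-arm it later from a timer.

```python
import errno

# Sketch: avoid the EMFILE accept() busy loop by pausing the listener
# instead of letting level-triggered epoll report POLLIN again at once.
def accept_one(listener, poller):
    """Accept one connection; on fd exhaustion, stop watching the socket."""
    try:
        conn, _addr = listener.accept()
        return conn
    except OSError as e:
        if e.errno in (errno.EMFILE, errno.ENFILE):
            # Stop polling the listening socket; a timer should re-register
            # it once file descriptors have been released.
            poller.unregister(listener.fileno())
        return None

# Tiny demo with fakes that simulate accept() hitting EMFILE.
class _FakeListener:
    def accept(self):
        raise OSError(errno.EMFILE, "Too many open files")
    def fileno(self):
        return 6

class _FakePoller:
    def __init__(self):
        self.unregistered = []
    def unregister(self, fd):
        self.unregistered.append(fd)

poller = _FakePoller()
assert accept_one(_FakeListener(), poller) is None
print(poller.unregistered)  # → [6]
```

This only addresses the CPU-spinning symptom, not the fd exhaustion itself.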
Similar to the umount problem: a malicious program could create an exclusive lock with flock() or fcntl() on some databases and then pass the fd to the system bus.

== Possible ideas to make dbus-daemon more resistant against this kind of attack:

- timeout when receiving a pending fd in an incomplete message (fds should not stay in the loader for too long)
- keep track of the number of pending fds in DBusMessageLoaders; don't accept() on the listening socket when there are too many fds pending
- don't accept fds spread across a message: if an fd was attached to the first byte of the message, the next bytes should not have an fd attached. It needs some research to see whether the "un-glue fds messages from different skbs" behaviour is Linux-specific or not; see http://lxr.free-electrons.com/source/net/unix/af_unix.c#L2051
- disconnect connections sending a message with an fd but without the appropriate header
- only accept messages with fds attached once a chunk big enough to check the header has been received, to see whether the message is supposed to contain fds
Mitigations:

(In reply to comment #0)
> This problem is mitigated on the system bus by max_connections_per_user,
> typically 256 on the system bus or 100000 on the session bus. So it would
> still need 2048/256 = 8 users to collaborate.

Those 8 users could already cooperate to DoS the system bus; the only difference here is that a sysadmin cannot clean up the situation by killing their processes.

> 2. A malicious program could request some D-Bus names such as
> "org.freedesktop.Telepathy.MissionControl5"

If a malicious program can do this, you've already lost. If your bus is a security boundary, then you need to limit who is allowed to own particular names.

> 3. A malicious program could register some match rules and make the
> connection immortal and attempt to make dbus-daemon memory usage balloon
> with Bug #33606.

They can already do that; solutions welcome. Again, the difference here is that a sysadmin cannot mitigate the situation by killing their processes.

> 4. A user could prevent root to umount a partition without killing the
> system bus. A malicious program can pass an open fd on the mount to
> dbus-daemon on the system bus through the immortal connection and then
> exit. The administrators will use "lsof" to find the processes preventing
> the umount; they will find the process dbus-daemon for the system bus.

Mitigation: don't let people access partitions you want to unmount.

Mitigation: if you're sufficiently desperate, ptrace the dbus-daemon and close the fd that way :-)
The fds must be between the first and second bytes in a Unix socket message. I don't think we should accept them anywhere else.

That doesn't solve the problem, though. The malicious connection could send the first byte, the fds, maybe a second byte, and then close its side of the connection. Now the daemon is simply stuck.

I think the only mitigation possible is to timeout messages with pending fds. It might be possible to use getsockname() and getpeername() to identify whether the pending message has passed a file descriptor to the D-Bus socket itself, and only time that one out. However, I think we should always do the timing out. Otherwise, it's possible for multiple connections to cooperate to exhaust the daemon's file descriptor limit by not closing their connections.
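The timeout idea amounts to simple bookkeeping: record when a connection's loader first started holding fds without a complete message, and disconnect it once a deadline passes. A hypothetical sketch (all names invented; the real implementation lives in dbus-daemon and libdbus):

```python
import time

# Hypothetical sketch of the pending-fd timeout: the bus records when a
# connection's message loader first started holding passed fds without a
# complete message, and disconnects it after a deadline.
PENDING_FD_TIMEOUT = 150.0  # seconds; a made-up value for illustration

class Connection:
    def __init__(self, name):
        self.name = name
        self.pending_fd_since = None  # None = no fds waiting in the loader

    def loader_received_fds(self):
        if self.pending_fd_since is None:
            self.pending_fd_since = time.monotonic()

    def loader_message_completed(self):
        self.pending_fd_since = None  # fds were consumed by a full message

def expired(connections, now):
    """Return connections whose pending fds have waited too long."""
    return [c for c in connections
            if c.pending_fd_since is not None
            and now - c.pending_fd_since > PENDING_FD_TIMEOUT]

c = Connection(":1.42")
c.loader_received_fds()
victims = expired([c], time.monotonic() + 200)
print([x.name for x in victims])  # → [':1.42']
```

Disconnecting the expired connection closes both ends of its socket from the daemon's side, releasing the trapped fds.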
(In reply to comment #2)
> The fds must be between the first and second bytes in a Unix socket
> message. I don't think we should accept them anywhere else.
>
> That doesn't solve the problem, though. The malicious connection could send
> the first byte, the fds, maybe a second byte and then close its side of the
> connection. Now the daemon is simply stuck.

I agree that we can make things a bit stricter. It will not solve this specific issue, but it can solve similar issues. For example, if a client sends a message byte by byte with a new fd attached to each byte...

> I think the only mitigation possible is to timeout messages with pending
> fds.

I wonder what the default timeout for that would be? 25 seconds? For big messages sent in several sendmsg/recvfrom iterations, I would like SIGSTOP and gdb on the sending process not to break things.

> It might be possible to use getsockname() and getpeername() to identify
> whether the pending message has passed a file descriptor to the D-Bus
> socket itself and only time that one out. However, I think we should
> always do the timing out. Otherwise, it's possible for multiple
> connections to cooperate to exhaust the daemon's file descriptor limit by
> not closing their connections.

It can work in some scenarios but not all of them. dbus-daemon would need to check whether the received fd is a Unix socket, then use recvfrom(...MSG_PEEK) to get all packets from the socket queue and recursively unpack file descriptors and inspect them. The problem is that the malicious sender could send more packets at any point in the recursive pile of packets and fds. So at the time dbus-daemon checks, everything could be fine, and then it could become not fine: a time-of-check/time-of-use problem. We could re-check regularly, each time the timeout for pending fds in DBusMessageLoader is triggered, but that will not help if two different connections (e.g. one on the system bus, one on another bus) get passed to each other.
(In reply to comment #3)
> (In reply to comment #2)
> > I think the only mitigation possible is to timeout messages with pending
> > fds.
>
> I wonder what would be the default timeout for that? 25 seconds? For big
> messages sent in several sendmsg/recvfrom iterations, I would like if
> SIGSTOP and gdb on the sending process don't break things.

I'm going to say a little higher. The bus-enforced reply timeout is 5 minutes when enabled, so either we do 5 minutes or half of that.

If we had the ability to change the fd-passing protocol, I'd change it to work as follows:
- send the file descriptors after the header, but before the data payload (instead of between the first and second bytes)
- add a header entry that indicates the number of file descriptors being passed
- keep a daemon-wide fixed-size list of file descriptors being passed

That way, we know whether we need to allocate an fd array before receiving them. If the array doesn't have enough room, the connection is suspended until we find room -- and we implement the timing out, so there will eventually be room. Since we know whether a message is passing file descriptors, messages without them will still go through.

The possible attack vector I can think of in this is that multiple connections could cooperate to DoS any file descriptor passing by keeping the daemon list always full. At least connections that aren't exchanging file descriptors would still work.
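The daemon-wide fixed-size list described in the proposal could be sketched as a simple reservation table (all names invented; this is not dbus-daemon's API): a connection announcing N fds in its header must reserve N slots before the daemon receives them, and is suspended when the table is full.

```python
# Hypothetical sketch of the proposed daemon-wide fixed-size fd table.
class PendingFdTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0

    def reserve(self, n):
        """Try to reserve room for n announced fds; False = suspend sender."""
        if self.in_use + n > self.capacity:
            return False
        self.in_use += n
        return True

    def release(self, n):
        """Called when a message completes or its pending-fd timer fires."""
        self.in_use -= n

table = PendingFdTable(capacity=4)
print(table.reserve(3))  # → True: room for 3 announced fds
print(table.reserve(2))  # → False: would exceed the table; suspend sender
table.release(3)         # timeout or completed message frees the slots
print(table.reserve(2))  # → True: the suspended connection can resume
```

Combined with the timeout, `release()` is eventually called for every reservation, so a full table is always a temporary state.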
(In reply to comment #4)
> If we had the ability to change the fd-passing protocol, I'd change it to
> work as follows:
> - send the file descriptors after the header, but before the data payload
> (instead of between first and second bytes)
>
> The possible attack vectors I can think in this are that multiple
> connections could cooperate to DoS any file descriptor passing by keeping
> the daemon list always full. At least connections that aren't exchanging
> file descriptors would still work.

Maybe if we instead require the file descriptors to be passed with the *last* byte of the message, we won't have a timeout problem -- the message is complete then.

Actually, it might be possible to implement this if we just add a new capability in the header and negotiate that. After a period of grace, we disable the old extension, so old clients sending file descriptors between the first and second bytes will not be able to pass them any more.
(In reply to comment #4)
> - send the file descriptors after the header, but before the data payload
> (instead of between first and second bytes)

File descriptors are not attached to a specific byte of the stream but to an skb (socket buffer). When the client calls sendmsg() with 2 iovecs (one for the header and one for the body of the message, as libdbus does) and 1 fd, it is still one skb containing 1 buffer + 1 fd in the kernel. And when dbus-daemon receives data with recvmsg(), the buffer received might not have the same boundaries as the skb, because it is SOCK_STREAM and not SOCK_DGRAM. Dbus-daemon can receive a buffer coming from 2 skbs with 1 fd, for example, and the information about which offset or which skb the fd was attached to is lost from dbus-daemon's perspective.

> - add a header entry that indicates the number of file descriptors being
> passed

It already exists: header field UNIX_FDS (UINT32).

> - keep a daemon-wide fixed-size list of file descriptors being passed

At the moment, the list is purely in libdbus, in the DBusMessageLoader attached to the DBusConnection.

> That way, we know whether we need to allocate and fd array before receiving
> them. If the array doesn't have enough room, the connection is suspended
> until we find room -- and we implement the timing out, so there will
> eventually be room. Since we know whether a message is passing file
> descriptors, messages without them will still go through.

When recvmsg() is called without a msg_control buffer (or with a buffer too small for all the file descriptors), the kernel cannot place the file descriptors, so they get closed, and it is not possible to recover them. Dbus-daemon could know they were discarded via msg_flags=MSG_CTRUNC, but that does not help. So it must always call recvmsg() with a control buffer for receiving fds.
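Both kernel behaviours described above can be observed on a plain socketpair. This is a sketch (Python 3.9+ on Linux assumed); it shows that fds travel with the skb rather than with a byte offset, and that receiving without ancillary-data room loses the fds for good.

```python
import os
import socket

# Behaviour 1: the fd is attached to the skb, not to a byte offset.
# One sendmsg() with two iovecs (header + body) and one fd is one skb;
# a partial recvmsg() of just 2 bytes still delivers the fd, and the
# original iovec boundaries are invisible to the receiver.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
socket.send_fds(a, [b"HEAD", b"BODY"], [a.fileno()])
data, fds, flags, addr = socket.recv_fds(b, 2, 8)
print(data, len(fds))  # → b'HE' 1
os.close(fds[0])

# Behaviour 2: recvmsg() without room for ancillary data cannot recover
# the fds later -- the kernel closes them and sets MSG_CTRUNC.
c, d = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
socket.send_fds(c, [b"x"], [c.fileno()])
msg, ancdata, msg_flags, _ = d.recvmsg(1)  # ancbufsize defaults to 0
print(bool(msg_flags & socket.MSG_CTRUNC))  # → True
```

This is why the daemon must always pass a control buffer, and why "attach the fds to a particular byte" cannot be enforced at the receiving end.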
For this to work, dbus-daemon would have to call recvmsg() with the correct buffer sizes so that two skbs don't get aggregated, instead of calling it with a fixed-size buffer of 2kB (max_bytes_read_per_iteration). It would impact performance negatively.

> The possible attack vectors I can think in this are that multiple
> connections could cooperate to DoS any file descriptor passing by keeping
> the daemon list always full. At least connections that aren't exchanging
> file descriptors would still work.

Existing connections that aren't exchanging file descriptors are already working, but new ones are not. So it is not as bad as it could be.

(In reply to comment #5)
> Maybe if we instead require the file descriptors to be passed with the
> *last* byte of the message, we won't have a timeout problem -- the message
> is complete then.

I think it would work, but it would require clients to send the message in two separate sendmsg() syscalls: the first with all the bytes of the message except the last, and the second with the last byte + the fds. So it would require changes in the D-Bus libraries used by clients: libdbus, QtDBus.

> Actually, it might be possible to implement this if we just add a new
> capability in the header and negotiate that. After a period of grace, we
> disable the old extension, so old clients sending file descriptors between
> the first and second bytes will not be able to pass them anymore.

I like the idea, but it seems to be a big change in dbus-daemon + the client libraries. So I would prefer it if the timeout idea was enough. I will attach patches for the timeout idea.
Created attachment 103210 [details] [review] [PATCH 1/4] config: add new limit: pending_fd_timeout
Created attachment 103211 [details] [review] [PATCH 2/4] DBusConnection: implements _dbus_connection_get_pending_fds_count
Created attachment 103212 [details] [review] [PATCH 3/4] DBusConnection: implements _dbus_connection_set_pending_fds_function
Created attachment 103213 [details] [review] [PATCH 4/4] bus: enforce pending_fd_timeout
This looks good in principle, although I'd like to re-review it to be sure.
I think these are fine. There are some trivial conflicts between Attachment #103210 [details] and the one on Bug #82820, so when we do the security advisory, we should send out linearized patches.
Created attachment 106175 [details] [review]
[PATCH 2/4 v2] DBusConnection: implements _dbus_connection_get_pending_fds_count

From: Alban Crequy <alban.crequy@collabora.co.uk>

This will allow the bus to know whether there are pending file descriptors in a DBusConnection's DBusMessageLoader.

[fix compilation on platforms that do not HAVE_UNIX_FD_PASSING -smcv]

---

Cross-building with mingw while doing a "test all the configurations" run for a release candidate:

/home/smcv/src/fdo/dbus-1.8/dbus/dbus-message.c: In function '_dbus_message_loader_get_pending_fds_count':
/home/smcv/src/fdo/dbus-1.8/dbus/dbus-message.c:4515:16: error: 'DBusMessageLoader' has no member named 'n_unix_fds'
     return loader->n_unix_fds;

I think the solution (wrap it in #ifdef) is trivial enough to not really need review, but if someone could glance at it and say yes, that would be reassuring.
(In reply to comment #13)
> I think the solution (wrap it in #ifdef) is trivial enough to not really
> need review, but if someone could glance at it and say yes, that would be
> reassuring.

I've not tested it, but the wrapping looks good to me.
Created attachment 106191 [details] [review]
[backport to 1.6] config: add new limit: pending_fd_timeout

---

Just some conflicts in cmake/bus/dbus-daemon.xml.
Created attachment 106192 [details] [review]
[backport to 1.6, 2/4] DBusConnection: implements _dbus_connection_get_pending_fds_count

---

Some conflicts with the change that fixed thread-safety-by-default.
Created attachment 106193 [details] [review]
[backport to 1.6, 3/4] DBusConnection: implements _dbus_connection_set_pending_fds_function

---

Likewise.

Patch 4/4 "enforce pending_fd_timeout" does not need backporting; just apply the same one.
Comment on attachment 106191 [details] [review]
[backport to 1.6] config: add new limit: pending_fd_timeout

Review of attachment 106191 [details] [review]:
-----------------------------------------------------------------

This probably requires fixing Bug #81053, Bug #82820, and Bug #80919 first, because they all touch the list of limits.
CVE-2014-3637 has been allocated for this vulnerability.
Fixed in 1.8.8, raising embargo