Description
Alban Crequy
2014-07-17 18:37:36 UTC
Created attachment 102994: [PATCH 1/3] config: add new limit: max_connections_per_cgroup
Created attachment 102995: [PATCH 2/3] _dbus_cgroup_for_pid: new function
Created attachment 102996: [PATCH 3/3] enforce new limit max_connections_per_cgroup

Cc Lennart because this is related to systemd's cgroups.

The patches rely on parsing /proc/<pid>/cgroup because SCM_CGROUP does not exist yet (there are patches adding SCM_CGROUP under discussion on the kernel mailing lists, though). Parsing /proc/<pid>/cgroup could be unsafe if the cgroup path contains '\n'; I wrote a patch to prevent that: http://thread.gmane.org/gmane.linux.kernel.cgroups/11676

See also the similar feature request in Bug #82346 for max_connections_per_process.


Created attachment 104444: [PATCH 1/4] config: add new limit: max_connections_per_cgroup

I rebased this series on the patches from Bug #82346.

Created attachment 104445: [PATCH 2/4] _dbus_cgroup_for_pid: new function

Created attachment 104446: [PATCH 3/4] enforce new limit max_connections_per_cgroup

v2:
- fix memory leaks in adjust_connections_for_cgroup in case of oom
- fix memory release in bus_connections_setup_connection in case of oom

Created attachment 104447: [PATCH 4/4] new tests: max_connections_per_user, max_connections_per_cgroup

New tests.


Following discussions on the mailing list:
http://lists.freedesktop.org/archives/dbus/2014-August/thread.html#16251
I will add v3 patches reading the systemd unit with sd_pid_get_unit() instead of the full cgroup path.

Created attachment 104664: [PATCH 1/3] config: add new limit: max_connections_per_systemd_unit
Created attachment 104665: [PATCH 2/3] enforce new limit max_connections_per_systemd_unit
Created attachment 104666: [PATCH 3/3] new tests: max_connections_per_user, max_connections_per_systemd_unit


As with Bug #82346, what is the purpose of enforcing this limit? Is it intended to be a security measure, a guard against mistakes, or something else?


(In reply to comment #11)
> Following discussions on the mailing list:
> http://lists.freedesktop.org/archives/dbus/2014-August/thread.html#16251
> I will add v3 patches reading the systemd unit with sd_pid_get_unit()
> instead of the full cgroup path.

sd_pid_get_unit() has a TOCTOU race condition similar to the one discussed on many other security-sensitive D-Bus bugs. If we're asking about (say) pid 12345, you can:

* start a connection from pid 12345
* hand the fd over to a cooperating process, say pid 23456
* induce some other part of the system to fork() enough times that pid 12345 is recycled

and the result is that we get the wrong answer. This can't be solved by any API that involves rummaging in /proc (which is what systemd does to implement sd_pid_get_unit()). Perhaps one day it will be possible to get a fd pointing to a process and call APIs on that.


(In reply to comment #15)
> sd_pid_get_unit() has a TOCTOU race condition similar to the one discussed
> on many other security-sensitive D-Bus bugs:

Yeah, generally anything involving PIDs is racy. There has been discussion in kernel space about adding the cgroup to the socket creds; I'm not sure where that stands.
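For context, sd_pid_get_unit() is part of libsystemd's sd-login API: given a pid, it returns the name of the systemd unit the process belongs to. Below is a minimal standalone sketch of such a lookup, not the actual v3 patch; build with -lsystemd, and note that any pid-based lookup is subject to the pid-reuse race described above.

/* Illustration only: map a pid to its systemd unit with sd_pid_get_unit().
 * Subject to the TOCTOU / pid-reuse race discussed in this bug. */
#include <stdio.h>
#include <stdlib.h>
#include <systemd/sd-login.h>

int
main (int argc, char **argv)
{
  pid_t pid;
  char *unit = NULL;   /* allocated by libsystemd, released with free() */
  int r;

  if (argc != 2)
    {
      fprintf (stderr, "usage: %s PID\n", argv[0]);
      return 1;
    }

  pid = (pid_t) atoi (argv[1]);

  /* Returns 0 on success, or a negative errno-style value on failure
   * (for example when the process no longer exists or is not part of
   * a systemd unit). */
  r = sd_pid_get_unit (pid, &unit);
  if (r < 0)
    {
      fprintf (stderr, "sd_pid_get_unit(%d) failed: %d\n", (int) pid, r);
      return 1;
    }

  printf ("pid %d belongs to unit %s\n", (int) pid, unit);
  free (unit);
  return 0;
}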
Comment on attachment 104665: [PATCH 2/3] enforce new limit max_connections_per_systemd_unit

Review of attachment 104665:
-----------------------------------------------------------------

> It can happen on Windows or on the non-supported case where
> dbus-daemon is configured to listen on TCP rather than a Unix socket.

... or on OS X, or on NetBSD, or (etc.)


Comment on attachment 104665: [PATCH 2/3] enforce new limit max_connections_per_systemd_unit

Review of attachment 104665:
-----------------------------------------------------------------

::: bus/connection.c

@@ +142,5 @@
> +{
> +  void *val;
> +  int current_count;
> +
> +  /* val is NULL is 0 when it isn't in the hash yet */

Same comments as for the per-pid limits, except that if we're running under systemd, we know we're on Linux.

@@ +207,5 @@
> +  _dbus_assert (current_count >= 0);
> +
> +  current_count += adjustment;
> +
> +  _dbus_assert (current_count >= 0);

Same comments about wraparound as for the per-pid limits.

@@ +637,5 @@
>    connections->context = context;
>
>    return connections;
>
> + failed_7:

As for the per-pid version, this is far too many labels, and the function deserves refactoring before adding more.


Comment on attachment 104666: [PATCH 3/3] new tests: max_connections_per_user, max_connections_per_systemd_unit

Review of attachment 104666:
-----------------------------------------------------------------

::: test/dbus-daemon.c

@@ +567,5 @@
>                &max_connections_per_process_config, setup,
> +              test_max_connections, teardown);
> +  g_test_add ("/limit/max_connections_per_user", Fixture,
> +              &max_connections_per_user_config, setup,
> +              test_max_connections, teardown);

I would like the per-uid test (to be run on G_OS_UNIX only), even if we don't merge your new per-process or per-systemd-unit limits.


(In reply to comment #15)
> sd_pid_get_unit() has a TOCTOU race condition similar to the one discussed
> on many other security-sensitive D-Bus bugs

Between "I'm not convinced this is very useful" and this race condition, I'm reluctant to add this feature.

I think this would be obsoleted by Bug #100344, at which point we could say "if you don't trust some of the services that share your uid, give them private D-Bus endpoints using Bug #100344" and close this WONTFIX.


-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug at our GitLab instance: https://gitlab.freedesktop.org/dbus/dbus/issues/106.
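For context on the counting logic reviewed above: the patch keeps a per-unit connection count in a hash table, where a missing entry stands for zero. The following is a minimal sketch of that pattern, written here with GLib's GHashTable for brevity; the actual patch uses dbus's own hash-table implementation and must additionally handle OOM, and the function and variable names below are illustrative only.

/* Illustration only: per-key connection counting where a missing hash
 * entry means "zero connections".  The real patch does not use GLib
 * and must also survive OOM; this sketch ignores both. */
#include <glib.h>

static gboolean
adjust_count (GHashTable *counts, const char *unit, int adjustment, int max)
{
  /* A NULL lookup result means the key is not present yet, i.e. count 0. */
  int current = GPOINTER_TO_INT (g_hash_table_lookup (counts, unit));

  g_assert (current >= 0);
  current += adjustment;
  g_assert (current >= 0);          /* also guards against underflow */

  if (adjustment > 0 && max > 0 && current > max)
    return FALSE;                   /* over the configured limit: reject */

  if (current == 0)
    g_hash_table_remove (counts, unit);
  else
    g_hash_table_replace (counts, g_strdup (unit), GINT_TO_POINTER (current));

  return TRUE;
}

int
main (void)
{
  GHashTable *counts = g_hash_table_new_full (g_str_hash, g_str_equal,
                                              g_free, NULL);

  g_assert (adjust_count (counts, "foo.service", +1, 2));
  g_assert (adjust_count (counts, "foo.service", +1, 2));
  g_assert (!adjust_count (counts, "foo.service", +1, 2)); /* third rejected */
  g_assert (adjust_count (counts, "foo.service", -1, 2));

  g_hash_table_unref (counts);
  return 0;
}

Rejecting before storing the incremented value keeps the counter consistent with the set of connections that were actually accepted, which is the property the asserts quoted in the review are trying to protect.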