Bug 88340 - Weird Segfault in sd_rtnl_message_unref (libnss_myhostname.so.2 by sshd)
Summary: Weird Segfault in sd_rtnl_message_unref (libnss_myhostname.so.2 by sshd)
Status: RESOLVED FIXED
Alias: None
Product: systemd
Classification: Unclassified
Component: general
Version: unspecified
Hardware: x86-64 (AMD64) Linux (All)
Priority: medium
Severity: critical
Assignee: systemd-bugs
QA Contact: systemd-bugs
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-01-12 21:13 UTC by svenne
Modified: 2016-06-07 10:18 UTC
0 users

See Also:
i915 platform:
i915 features:


Attachments

Description svenne 2015-01-12 21:13:41 UTC
On Arch X64 using 218-1 (first packaging of 218) I have run into the
following weird problem.

When trying to connect over IPv6 to an SSH server running dual-stack (both
IPv4 and IPv6), ssh segfaults when I have loaded the full IPv4 BGP
routing table (~500k+ routes). IPv4 connections work for some reason,
and IPv6 recovers if I kill the routing daemon (bird).

The stack trace of the core-file starts with

Stack trace of thread 515:
#0  0x00007f48334a3dd5 _int_free (libc.so.6)
#1  0x00007f4834a1e62a sd_rtnl_message_unref (libnss_myhostname.so.2)
#2  0x00007f4834a1e657 sd_rtnl_message_unref (libnss_myhostname.so.2)

And continues with that line (#1 and #2) until frame 63.
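
For illustration, a minimal sketch of the pattern that could produce such a
trace (my guess at the general shape, not the actual code in
rtnl-message.c): a recursive unref over a chained message list uses one
stack frame per chained message, so a long or corrupted chain shows up as
the same frame repeated over and over.

    /* Hypothetical sketch, not the systemd source. */
    #include <stdlib.h>

    struct msg {
            unsigned n_ref;
            struct msg *next;     /* next message in a multi-part chain */
    };

    static void msg_unref(struct msg *m) {
            if (m && --m->n_ref == 0) {
                    msg_unref(m->next);   /* one extra stack frame per chained message */
                    free(m);
            }
    }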

I have looked in src/libsystemd/sd-rtnl/rtnl-message.c and have two
observations (my C is very rusty so feel free to correct me).

Line 589, shouldn't the line
    if (m && REFCNT_DEC(m->n_ref) <= 0) {

be

    if (m && REFCNT_DEC(m->n_ref) >= 0) {

(i.e. greater-than-or-equal instead of less-than-or-equal)

Also, perhaps add a test of whether m->next is equal to m on line 597.
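
For context, the reference-counting convention I would expect looks roughly
like this (a sketch only, not the actual rtnl-message.c code), with the
object freed once the counter drops to zero:

    /* Sketch of a typical unref: the count starts at 1 on creation and the
     * object is freed when the last reference is dropped. */
    #include <stdlib.h>

    typedef struct message {
            unsigned n_ref;       /* number of live references */
            /* ... payload ... */
    } message;

    static message *message_unref(message *m) {
            if (m && --m->n_ref == 0)     /* last reference dropped */
                    free(m);
            return NULL;
    }
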
Comment 1 Daniel Mack 2015-06-05 12:49:58 UTC
Is that still reproducible with v220 or upstream git?
Comment 2 Lennart Poettering 2016-06-07 10:18:41 UTC
Fixed by:

https://github.com/systemd/systemd/pull/3455

