| Summary | dbus-python: Segmentation fault v 1.2.4 | | |
|---|---|---|---|
| Product | dbus | Reporter | Tony Asleson <tasleson> |
| Component | python | Assignee | Simon McVittie <smcv> |
| Status | RESOLVED MOVED | QA Contact | D-Bus Maintainers <dbus> |
| Severity | normal | Priority | medium |
| Version | unspecified | Hardware | Other |
| OS | All | | |
Attachments:

- Core file
- All thread back trace
Description

Tony Asleson 2016-10-31 20:40:35 UTC

From looking at the core file some more, I can see that we have one thread that removed a dbus object and was in the middle of sending a signal to announce that the object was removed (object manager). Another thread, the one that segfaulted, appears to be referencing the same dbus object that the first thread removed. However, I'm still at a loss as to how a race condition between threads could cause this, since the variable in question is on the stack.

To mitigate the issue I've come up with a change to the dbus service where all dbus library calls are executed in the main event loop thread (a sketch of this approach appears below). With this change the service has run for more than 12 hours continuously in a loop without crashing.

Comment from Simon McVittie:

(In reply to Tony Asleson from comment #1)
> From looking at the core file some more I can see that we have one thread
> that removed a dbus object and then was in the middle of sending a signal to
> announce that the object was removed (object manager). In another thread
> that seg. faulted we appear to be trying to reference the same dbus object
> that was removed by the other thread. However, I'm still at a loss how a
> race condition between threads could cause this, as the variable in question
> is on the stack.

Interesting. Please post a backtrace from all threads ("thread apply all bt")? (I am not running Fedora 24, so the core dump is not directly useful to me.)

Comment from Tony Asleson:

Created attachment 127703 [details]
All thread back trace
This is from a different core file, but one that I have debug logs from the service for too.
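The mitigation described in the first comment, executing every dbus library call on the main event loop thread, is typically done by posting work to the GLib main loop with `GLib.idle_add` and blocking until it completes. Below is a minimal sketch of that idea, not lvmdbusd's actual code; the helper name `run_in_main_thread` and the `remove_object` payload are hypothetical, and a GLib main loop running in the process's main thread is assumed.

```python
import threading

import dbus.mainloop.glib
from gi.repository import GLib

# Assumes the service's GLib main loop runs in the main thread.
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)


def run_in_main_thread(func, *args):
    """Run func(*args) on the main-loop thread and block until it returns.

    Hypothetical helper: worker threads call this instead of touching
    dbus-python objects directly, so all libdbus activity stays on one thread.
    """
    done = threading.Event()
    result = []

    def _dispatch():
        result.append(func(*args))
        done.set()
        return False  # one-shot idle source, do not reschedule

    GLib.idle_add(_dispatch)
    done.wait()
    return result[0]


def remove_object(obj):
    # Example payload: drop a dbus.service.Object from the connection.
    obj.remove_from_connection()


# In a worker thread:
#   run_in_main_thread(remove_object, some_job_object)
```

With this shape, worker threads never call into dbus-python directly, which matches the workaround that reportedly stopped the crashes.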
Comment from Tony Asleson:

(In reply to Simon McVittie from comment #2)
> Interesting. Please post a backtrace from all threads, "thread apply all
> bt"? (I am not running Fedora 24, so the core dump is not directly useful to
> me.)

I've added the backtrace from all threads for a different core file as a file attachment. I'm using this one because I have application debug logs that go with it, which give more context. The last bit of the application logs:

```
Nov 01 10:24:15.044272: 1454:1454 - SIGNAL: InterfacesRemoved(/com/redhat/lvmdbus1/Job/12, ['com.redhat.lvmdbus1.Job'])
Nov 01 10:24:15.044629: 1454:1454 - Removing /com/redhat/lvmdbus1/Job/12 object complete!
Nov 01 10:24:15.475326: 1454:1454 - SIGNAL: InterfacesAdded(/com/redhat/lvmdbus1/Job/13, {'com.redhat.lvmdbus1.Job': {'Percent': dbus.Double(0.0), 'Complete': dbus.Boolean(False), 'GetError': dbus.Struct((-1, 'Job is not complete!'), signature=dbus.Signature('(is)')), 'Result': dbus.ObjectPath('/')}})
Nov 01 10:24:15.475879: 1454:1468 - Running method: <function merge at 0x7f29fdaa6d90> with args ('com.redhat.lvmdbus1.Snapshot', dbus.String('JKQOMV-Cdvg-3sQ8-eXPB-N7Lq-xcMn-Lt9mVA'), 'mvgzobfg_vg/wwszcyej_lv_snap', dbus.Dictionary({}, signature=dbus.Signature('sv')), <lvmdbusd.job.JobState object at 0x7f29f6477b00>)
Nov 01 10:24:15.481401: 1454:2368 - Background process for ['/usr/sbin/lvm', 'lvconvert', '--merge', '-i', '1', 'mvgzobfg_vg/wwszcyej_lv_snap'] is 2369
Nov 01 10:24:16.222282: 1454:1467 - Removing thread: thread job.Wait: /com/redhat/lvmdbus1/Job/12
Nov 01 10:24:16.222544: 1454:1467 - Removing thread: thread job.Wait: /com/redhat/lvmdbus1/Job/11
Nov 01 10:24:16.222732: 1454:1467 - Removing thread: thread job.Wait: /com/redhat/lvmdbus1/Job/10
Nov 01 10:24:17.857064: 1454:2368 - Background process 2369 complete!
Nov 01 10:24:17.857503: 1454:1468 - load entry
Nov 01 10:24:17.857961: 1454:1468 - lvmdb - refresh entry
Nov 01 10:24:17.868701: 1454:1454 - SIGNAL: InterfacesRemoved(/com/redhat/lvmdbus1/Job/13, ['com.redhat.lvmdbus1.Job'])
Nov 01 10:24:17.869170: 1454:1454 - Removing /com/redhat/lvmdbus1/Job/13 object complete!
Nov 01 10:24:18.176634: 1454:1468 - lvmdb - refresh exit
Nov 01 10:24:18.179295: 1454:1468 - SIGNAL: PropertiesChanged(/com/redhat/lvmdbus1/Pv/3, com.redhat.lvmdbus1.Pv, {'PeSegments': dbus.Array([('0', '4'), ('4', '4287')], signature=dbus.Signature('(tt)')), 'UsedBytes': dbus.UInt64(16777216), 'Lv': dbus.Array([('/com/redhat/lvmdbus1/Lv/8', [('0', '3', 'linear')])], signature=dbus.Signature('(oa(tts))')), 'FreeBytes': dbus.UInt64(17980981248), 'PeAllocCount': dbus.UInt64(4)}, [])
Nov 01 10:24:18.180491: 1454:1468 - SIGNAL: PropertiesChanged(/com/redhat/lvmdbus1/Vg/4, com.redhat.lvmdbus1.Vg, {'SnapCount': dbus.UInt64(0), 'FreeCount': dbus.UInt64(17160), 'FreeBytes': dbus.UInt64(71974256640), 'LvCount': dbus.UInt64(1), 'Lvs': dbus.Array(['/com/redhat/lvmdbus1/Lv/8'], signature=dbus.Signature('o')), 'Seqno': dbus.UInt64(7)}, [])
Nov 01 10:24:18.181863: 1454:1468 - SIGNAL: PropertiesChanged(/com/redhat/lvmdbus1/Lv/8, com.redhat.lvmdbus1.LvCommon, {'TargetType': dbus.Struct(('-', 'Unspecified'), signature=dbus.Signature('(ss)')), 'Attr': dbus.String('-wi-a-----'), 'VolumeType': dbus.Struct(('-', 'Unspecified'), signature=dbus.Signature('as')), 'Roles': dbus.Array(['public'], signature=dbus.Signature('s'))}, [])
Nov 01 10:24:18.182213: 1454:1468 - SIGNAL: InterfacesRemoved(/com/redhat/lvmdbus1/Lv/9, ['com.redhat.lvmdbus1.LvCommon', 'com.redhat.lvmdbus1.Lv', 'com.redhat.lvmdbus1.Snapshot'])
Segmentation fault (core dumped)
```

The pid:tid is in each debug message.
tid 1468 (gdb thread 5) is the thread broadcasting the signals and then performing the object removal when tid 1454 (gdb thread 1) segfaults.

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.freedesktop.org/dbus/dbus-python/issues/8
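For reference, the unsafe shape that the backtrace and logs above describe is a worker thread emitting hand-rolled ObjectManager signals and removing a dbus.service.Object while another thread is running the GLib main loop. The following is a minimal sketch of that shape only, assuming a hand-rolled ObjectManager (dbus-python has no built-in one); the class names and object paths are illustrative and this is not lvmdbusd's actual code.

```python
import threading

import dbus
import dbus.mainloop.glib
import dbus.service
from gi.repository import GLib

# Initialise dbus-python's GLib thread support and main-loop glue.
dbus.mainloop.glib.threads_init()
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

OM_IFACE = 'org.freedesktop.DBus.ObjectManager'
JOB_IFACE = 'com.redhat.lvmdbus1.Job'


class ObjectManager(dbus.service.Object):
    """Hand-rolled ObjectManager signal emitter."""

    @dbus.service.signal(dbus_interface=OM_IFACE, signature='oas')
    def InterfacesRemoved(self, object_path, interfaces):
        pass  # calling this method makes dbus-python emit the signal


class Job(dbus.service.Object):
    pass


manager = ObjectManager(bus, '/com/redhat/lvmdbus1')
job = Job(bus, '/com/redhat/lvmdbus1/Job/12')


def worker():
    # Emitting the signal and removing the object from a worker thread,
    # while the main thread is dispatching messages for the same objects,
    # is the racy pattern discussed above; the mitigation is to hand this
    # work to the main-loop thread (e.g. via GLib.idle_add) instead.
    manager.InterfacesRemoved('/com/redhat/lvmdbus1/Job/12', [JOB_IFACE])
    job.remove_from_connection()


threading.Thread(target=worker).start()
GLib.MainLoop().run()  # main thread dispatches D-Bus traffic
```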