Bug 63218 - systemd-udevd hangs with umount loop ISO
Status: RESOLVED INVALID
Alias: None
Product: systemd
Classification: Unclassified
Component: general
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Importance: medium normal
Assignee: systemd-bugs
QA Contact: systemd-bugs
Reported: 2013-04-06 23:34 UTC by Alex Xu (Hello71)
Modified: 2013-04-09 22:21 UTC

Description Alex Xu (Hello71) 2013-04-06 23:34:32 UTC
# cd /tmp
# mkdir ubuntu
# modprobe loop
# mount -o ro,loop,uid=dnsmasq,gid=dnsmasq ~/Downloads/ubuntu-12.10-desktop-amd64.iso ubuntu
<access some files>
# umount ubuntu
^C^C^C^Z^Z^\^\^\<alt-tab>
# echo w > /proc/sysrq-trigger
# journalctl
...
systemd-udevd[68]: worker [1399] /devices/virtual/block/loop0 timeout; kill it
systemd-udevd[68]: seq 1114 '/devices/virtual/block/loop0' killed
kernel: SysRq : Show Blocked State
kernel:   task                        PC stack   pid father        
kernel: umount          D ffff88022fd11140     0  1398   1397 0x00000004    
kernel:  ffff8801edaf9e00 0000000000000086 ffff8801e8979200 0000000000013ea0
kernel:  ffff8801add2ffd8 ffff8801add2ffd8 ffff8801add2ffd8 ffff8801edaf9e00
kernel:  ffff8801bb8cc018 ffff8801bb8cc018 ffff8801bb8cc01c ffff8801edaf9e00
kernel: Call Trace:
kernel:  [<ffffffff813bc449>] ? __mutex_lock_slowpath+0xc9/0x140
kernel:  [<ffffffff813bbfd3>] ? mutex_lock+0x23/0x40
kernel:  [<ffffffffa057327b>] ? loop_clr_fd+0x27b/0x470 [loop]
kernel:  [<ffffffffa05734e0>] ? lo_release+0x70/0x80 [loop]
kernel:  [<ffffffff8110380c>] ? __blkdev_put+0x17c/0x1d0
kernel:  [<ffffffff813bbfc6>] ? mutex_lock+0x16/0x40
kernel:  [<ffffffff810d4f25>] ? deactivate_locked_super+0x55/0x80
kernel:  [<ffffffff810ef843>] ? sys_umount+0xa3/0x3d0
kernel:  [<ffffffff813bf192>] ? system_call_fastpath+0x16/0x1b
kernel: systemd-udevd   D ffff88022fc11140     0  1399     68 0x00000004    
kernel:  ffff8801bb785400 0000000000000086 ffff8801bb781800 00000030706f6f6c
kernel:  ffff8801add83fd8 ffff8801add83fd8 ffff8801add83fd8 ffff8801bb785400
kernel:  ffff8801bb8cc018 ffff8801bb8cc018 ffff8801bb8cc01c ffff8801bb785400
kernel: Call Trace:
kernel:  [<ffffffff813bc449>] ? __mutex_lock_slowpath+0xc9/0x140
kernel:  [<ffffffff813bbfd3>] ? mutex_lock+0x23/0x40
kernel:  [<ffffffff811039f3>] ? __blkdev_get+0x43/0x420
kernel:  [<ffffffff811040a0>] ? blkdev_get+0x2d0/0x2d0
kernel:  [<ffffffff81103f66>] ? blkdev_get+0x196/0x2d0
kernel:  [<ffffffff811040a0>] ? blkdev_get+0x2d0/0x2d0
kernel:  [<ffffffff810d110e>] ? do_dentry_open.isra.18+0x1ee/0x280
kernel:  [<ffffffff810d11b5>] ? finish_open+0x15/0x20
kernel:  [<ffffffff810e014a>] ? do_last.isra.67+0x2fa/0xbf0
kernel:  [<ffffffff810dd508>] ? link_path_walk+0x68/0x830
kernel:  [<ffffffff810e0afb>] ? path_openat.isra.68+0xbb/0x450
kernel:  [<ffffffff810b6432>] ? handle_mm_fault+0x342/0x360
kernel:  [<ffffffff810e10c4>] ? do_filp_open+0x44/0xb0
kernel:  [<ffffffff810ecce2>] ? __alloc_fd+0x42/0x110
kernel:  [<ffffffff810d23d3>] ? do_sys_open+0xf3/0x1e0
kernel:  [<ffffffff813bf192>] ? system_call_fastpath+0x16/0x1b
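
Note on the dump above: SysRq "w" prints every task stuck in uninterruptible (D) sleep, which is what captured the two backtraces. A minimal sequence for taking such a dump, assuming root and that the trigger may need enabling first:

echo 1 > /proc/sys/kernel/sysrq     # enable all SysRq functions
echo w > /proc/sysrq-trigger        # write the blocked-task dump to the kernel log
journalctl -k | tail -n 100         # read it back (dmesg works too)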

I don't know whether this is a util-linux bug (almost certainly not), a loop driver bug (probably not), an isofs bug (maybe), or a systemd bug (it seems like one), but something is definitely not right here.
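
Both backtraces end in __mutex_lock_slowpath, which suggests the two tasks are each waiting on a lock the other side is holding: umount is in the blkdev_put -> loop_clr_fd teardown path while udevd's open() sits in __blkdev_get. A minimal user-space sketch of that kind of two-party lock-ordering deadlock, using flock(1) on two scratch files (illustrative names only, not the actual kernel locking; run in a throwaway shell, since the stuck jobs must be killed manually):

touch /tmp/lockA /tmp/lockB
( exec 3>/tmp/lockA 4>/tmp/lockB
  flock 3; sleep 1; flock 4 ) &    # "umount" side: takes A, then wants B
( exec 3>/tmp/lockA 4>/tmp/lockB
  flock 4; sleep 1; flock 3 ) &    # "udevd" side: takes B, then wants A
wait                               # never returns: each job holds what the other wants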
Comment 1 Kay Sievers 2013-04-09 14:48:53 UTC
Maybe this?

http://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git/commit/?h=for-linus&id=c2fccc1c9f7c81700cbac2120a4ad5441dd37004

If you were running a recent kernel that included the broken change, please close this bug.

Thanks!
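
To check whether a given kernel tree already contains that fix, git can answer directly; a sketch against a kernel checkout (commit ID taken from the link above):

cd linux                           # any kernel git checkout
git merge-base --is-ancestor c2fccc1c9f7c81700cbac2120a4ad5441dd37004 HEAD \
    && echo "fix included" || echo "fix missing"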
Comment 2 Alex Xu (Hello71) 2013-04-09 22:21:25 UTC
I was running Torvalds' master, which appears to have included 8761a3dc1f07b163414e2215a2cadbb4cfe2a107; c2fccc1c9f7c81700cbac2120a4ad5441dd37004 is now in. I'll close this on the basis that the broken change was most likely the cause.

Thanks!
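
A hypothetical diagnostic for anyone hitting a similar hang (not part of the original report): watching the udev event stream during detach shows whether the block-device events for loop0 ever complete, instead of the worker being killed:

udevadm monitor --kernel --udev --subsystem-match=block &
MONPID=$!
umount /tmp/ubuntu                 # on a fixed kernel this prints change/remove events for loop0
kill "$MONPID"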

