Created attachment 98555 [details]
ls -l output

I used journald on both Fedora and Arch Linux, on a btrfs filesystem, and I noticed on two occasions (once for each distribution) that journald can become very slow, to the point where the boot process takes a very long time (more than 10 minutes). After getting a shell, I found that I couldn't access the log at all, or that it took a very long time (a few minutes for `journalctl -fa`). Removing (or moving away) all files in /var/log/journal fixed the problem and the system was as fast as on the first day. I still have the old files for diagnosis if you want me to try something with them.

Here is the output of --disk-usage:

$ journalctl -D /var/log/journal/e92df66897d24a499a6b6ecf7e6a30c2- --disk-usage
Journals take up 4.0G on disk.

The output of the --verify command (which was successful) and the list of files are attached.
Created attachment 98556 [details] journalctl --verify command output
The problem is btrfs fragmentation. Try using the autodefrag mount option and/or a manual defrag.
btw, you can diagnose this using: sudo filefrag /var/log/journal/*/*
journald in git will now automatically set the NOCOW flag on journal files, and issues btrfs defrag calls when it archives journal files. With that in place journald should provide similar performance on btrfs as on other file systems.
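For illustration, setting the NOCOW flag from userspace boils down to an FS_IOC_SETFLAGS ioctl (roughly what `chattr +C` does). Below is a minimal Python sketch; the ioctl numbers are the values from &lt;linux/fs.h&gt; on 64-bit Linux, and `set_nocow` is a hypothetical helper, not journald's actual code.

```python
import fcntl
import os
import struct

# ioctl numbers from <linux/fs.h> (64-bit Linux); roughly what `chattr +C` does.
FS_IOC_GETFLAGS = 0x80086601
FS_IOC_SETFLAGS = 0x40086602
FS_NOCOW_FL = 0x00800000  # btrfs: disable copy-on-write for this file

def set_nocow(path):
    """Try to set the NOCOW flag on `path`.

    On btrfs the flag only takes effect while the file is still empty,
    which is why it has to be set right after creating a journal file.
    Returns True on success, False if the filesystem doesn't support it.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        buf = fcntl.ioctl(fd, FS_IOC_GETFLAGS, struct.pack("l", 0))
        flags = struct.unpack("l", buf)[0] | FS_NOCOW_FL
        fcntl.ioctl(fd, FS_IOC_SETFLAGS, struct.pack("l", flags))
        return True
    except OSError:
        return False  # e.g. not btrfs, or flag unsupported here
    finally:
        os.close(fd)
```

On non-btrfs filesystems the ioctl either fails or has no effect, so the helper just reports False rather than raising.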
ok, nice! but is it safe? wouldn't the journal be damaged on sudden power failure, then? (and logs are usually the most important thing to have when anything goes wrong)
(In reply to Radek Podgorny from comment #5)
> ok, nice! but is it safe? wouldn't the journal be damaged on sudden power
> failure, then? (and logs are usually the most important thing to have when
> anything goes wrong)

With the NOCOW flag set, data integrity guarantees on btrfs degrade to the same ones made by ext3/4, which should be pretty much OK. Also, the journal does its own checksumming, and hence should be capable of detecting corruption (though not fixing it).

Given that we have a "mostly append + update ptrs at front" write pattern, the expected data loss if some writes are missing or written in the wrong order should be limited: either the pointers are missing but the appended data was written, in which case the appended data will simply not be considered but everything else is OK; or the data is missing and the pointers are set up, for which case we have careful checks in place. All in all, I think we should be OK.
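To make the "append records, then update a pointer at the front, with per-record checksums" argument concrete, here is a toy Python sketch of that write pattern. This is purely illustrative and is not the journal's actual on-disk format; the layout and names are invented for the example.

```python
import struct
import zlib

HEADER = struct.Struct("<Q")    # committed byte count (the "pointer at the front")
RECORD = struct.Struct("<II")   # per-record: payload length, CRC32 of payload

def append(buf, payload):
    """Append a checksummed record, then advance the committed pointer.

    If power fails after step 1 but before step 2, the new record is
    simply ignored on replay; already-committed data is untouched.
    """
    committed, = HEADER.unpack_from(buf, 0)
    end = HEADER.size + committed
    rec = RECORD.pack(len(payload), zlib.crc32(payload)) + payload
    buf[end:end] = rec                              # 1) append the data
    HEADER.pack_into(buf, 0, committed + len(rec))  # 2) update the pointer
    return buf

def replay(buf):
    """Yield valid payloads; stop at the first checksum mismatch.

    Corruption is detected (bad CRC), not repaired, matching the
    journal's detect-but-not-fix behaviour described above.
    """
    committed, = HEADER.unpack_from(buf, 0)
    off, end = HEADER.size, HEADER.size + committed
    while off < end:
        length, crc = RECORD.unpack_from(buf, off)
        off += RECORD.size
        payload = bytes(buf[off:off + length])
        if zlib.crc32(payload) != crc:
            break
        yield payload
        off += length
```

Because the pointer is only advanced after the data is in place, a torn write can lose the newest record but cannot silently corrupt what a reader considers committed.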