I have an issue with journal corruption and need to know the accepted way to deal with it.
Invalid object contents at 4968272█████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 48%
File corruption detected at /firstname.lastname@example.org~:4968272 (of 4972544, 99%).
FAIL: /email@example.com~ (Bad message)
The only way to deal with journal corruption, currently, is to ignore it: when a corruption is detected, journald will rename the file to <something>.journal~, and journalctl will do its best when reading it. Actually fixing journal corruption is a hard job, and it seems unlikely that it will be implemented in the near future.
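The rotate-on-corruption step described above can be sketched roughly as follows. This is a toy illustration, not journald's actual code; the function name and file handling are assumptions for the sake of the example:

```python
import os

def rotate_corrupted(path):
    """Toy sketch of journald's rotate-on-corruption behaviour:
    the damaged file is renamed with a '~' suffix and never written
    to again, and logging continues into a fresh file at the
    original path."""
    corrupted = path + "~"        # e.g. system.journal -> system.journal~
    os.rename(path, corrupted)    # freeze the corruption in time, read-only from now on
    open(path, "xb").close()      # start a new, empty active file
    return corrupted
```

The point of the design is that the damaged file is set aside exactly as it was, rather than being modified in place by a repair tool.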
Note that the corruption reporting has become more verbose since this bug was reported, and some "corruptions" that were reported as such are now reported as unusual, but acceptable events (happened after 204, so the changes are included in systemd-205).
Yupp, journal corruptions result in rotation, and when reading we try to make the best of it. Hence they are not something we really need to fix.
Since this bugzilla report is apparently sometimes linked these days as an example of how we won't fix a major bug in systemd:
Journal files are mostly append-only files. We keep adding to the end as we go, only updating minimal indexes and bookkeeping in the earlier parts of the files. These files are rotated (rotation = renamed and replaced by a new one) from time to time, based on certain conditions, such as time, file size, and also when we find the files to be corrupted. As soon as they rotate they are entirely read-only, never modified again. When you use a tool like "journalctl" to read the journal files, both the active and the rotated files are implicitly merged, so that they appear as a single stream again.
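The implicit merge on read can be illustrated with a small sketch. The data layout here (sorted lists of timestamped entries) is a stand-in for illustration only and has nothing to do with the real on-disk format:

```python
import heapq

def read_merged(*files):
    """Toy illustration of merged reading: each 'file' is a list of
    (timestamp, message) entries, already sorted by timestamp.
    Rotated and active files are interleaved into one
    chronological stream, as journalctl does implicitly."""
    return list(heapq.merge(*files))

rotated = [(1, "boot"), (3, "old error")]
active  = [(2, "service start"), (4, "new entry")]
merged  = read_merged(rotated, active)  # ordered by timestamp across both files
```

Because each individual file is already ordered, a k-way merge is enough to present them as a single stream without ever rewriting the rotated files.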
Now, our strategy to rotate-on-corruption is the safest thing we can do, as we make sure that the internal corruption is frozen in time, and not attempted to be "fixed" by a tool that might end up making things worse. After all, if the often-run writing code really fucks something up, then it is not necessarily a good idea to try to make it better by running a tool on it that tries to fix it up again, a tool that is necessarily a lot more complex, and also less tested.
Now, of course, having corrupted files isn't great, and we should make sure the files stay as accessible as possible even when corrupted. Hence: the code that reads the journal files is actually written in a way that tries to make the best of corrupted files, and tries to read as much of them as possible, working with the subset of the file that is still valid. We do this implicitly on every access.
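A minimal sketch of that salvage-on-read idea. The per-record checksum framing here is an assumption made for illustration; the real journal format is considerably more involved, but the principle is the same: validate each record, keep what checks out, skip the rest, and never modify the file:

```python
import struct
import zlib

def write_record(buf: bytes, payload: bytes) -> bytes:
    # Toy record framing: length (4 bytes) + crc32 (4 bytes) + payload.
    return buf + struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def read_valid_records(data: bytes) -> list:
    """Return every record whose checksum still matches, silently
    skipping damaged ones -- the reader salvages what it can and
    never writes anything back."""
    out, off = [], 0
    while off + 8 <= len(data):
        length, crc = struct.unpack_from("<II", data, off)
        payload = data[off + 8 : off + 8 + length]
        off += 8 + length
        if len(payload) == length and zlib.crc32(payload) == crc:
            out.append(payload)
    return out
```

For example, if one record's payload is damaged but its framing is intact, the reader simply drops that record and continues with the next one, so the valid subset remains readable.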
Hence: journalctl implicitly does on read what a theoretical journal file fsck tool would do, but without actually making this persistent. This logic also has a major benefit: as our reader gets better and learns to deal with more types of corruption, you immediately benefit from it, even for old files!
File systems such as ext4 have an fsck tool since they don't have the luxury of just rotating the fs away and fixing the structure on read: they have to use the same file system for all future writes, and thus need to try hard to make the existing data workable again.
I hope this explains the rationale here a bit more.
Is there any journalctl option available to delete all known corrupted log files, or maybe a config option to regularly (every few months or so) remove corrupted journal files?
If not, could one of those be added, please?
(In reply to Florian Hubold from comment #4)
> Is there any option for journalctl available to delete all known corrupted
> logfiles, or maybe some config option to regularly (every few months or so)
> remove corrupted journal files?
> In case not, could one of those be added please?
Why? What's the use case? Why would you want to throw away the good parts of the journal files?
Again, journalctl tries hard to salvage all data from the journal files, should there be a corrupted one, and it does this implicitly, all the time, when showing them. In the best case you hence never notice that something might have gotten corrupted.
I think he means that the rotated corrupted files don't get deleted when the maximum log age is hit. Is that so? I may be wrong.