Bug 77013 - RFE: journald to send logs via network
Summary: RFE: journald to send logs via network
Status: RESOLVED FIXED
Alias: None
Product: systemd
Classification: Unclassified
Component: general
Version: unspecified
Hardware: Other
OS: All
Importance: medium normal
Assignee: systemd-bugs
QA Contact: systemd-bugs
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-04-03 15:43 UTC by Duncan Innes
Modified: 2014-09-02 14:11 UTC
CC: 2 users

See Also:


Attachments

Description Duncan Innes 2014-04-03 15:43:33 UTC
Functionality that exists in rsyslogd to send logs to remote locations on the network should be replicated in journald.

This would avoid the need for external programs to ship journal entries.

The option to transmit data over encrypted connections would be beneficial.

The option to transmit data as JSON would also be beneficial.

The ability to add custom tags to JSON output would assist larger organisations.

Use case:

It's hard to think of an organisation that doesn't consolidate its logs in some way on a centralised system.  Whilst there may be external processes (e.g. rsyslog) that can fill this gap, a native feed from journald would be the cleanest method from the end-user perspective.

Encrypted transmission would satisfy the security aspect of the transmission.

The ability to transmit as JSON would fit in with the way that many centralised log storage mechanisms are going (especially the open source ones).

The ability to insert custom tags & data to JSON would allow larger organisations to group data by system type without having to parse data and amend it before inserting it into the database of choice.  (e.g. all clustered servers could add a "cluster_name": "$CLUSTER_NAME" field to allow logs to be viewed by cluster rather than individual hosts.)  Some of this can be done by tasks/filters on the central system, but the ability for clients to add custom tags reduces the overall load & complexity of the system.
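A minimal sketch of the client-side enrichment described above, assuming entries arrive in the JSON form `journalctl -o json` emits. The `STATIC_TAGS` values and the `enrich` helper are hypothetical, not part of journald:

```python
import json

# Hypothetical static tags a client would attach to every outgoing entry,
# e.g. populated by Puppet/Ansible from node classification.
STATIC_TAGS = {
    "CLUSTER_NAME": "WebApp3",
    "ENVIRONMENT": "production",
}

def enrich(entry_json):
    """Merge client-side tags into one JSON-formatted journal entry."""
    entry = json.loads(entry_json)
    entry.update(STATIC_TAGS)  # the client knows these; the aggregator need not
    return json.dumps(entry, sort_keys=True)

# A trimmed journal entry, as journalctl's JSON output would format it.
raw = '{"MESSAGE": "login failed", "_HOSTNAME": "web01", "PRIORITY": "3"}'
print(enrich(raw))
```

The point of the sketch is that the merge is a constant-time dictionary update on the client, rather than a lookup against LDAP or a node classifier on the central system.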
Comment 1 David Strauss 2014-04-03 19:25:55 UTC
This feature already exists in systemd 212:
http://www.freedesktop.org/software/systemd/man/systemd-journal-remote.html
Comment 2 Zbigniew Jedrzejewski-Szmek 2014-04-03 19:31:45 UTC
systemd-journal-remote is just the receiver side. The sending counterpart still hasn't left my local machine. So strictly speaking, this bug shouldn't be closed yet.

Also, there's a suggestion of JSON formatting and some extra tags. I don't think we need/want that, but maybe the reporter can explain the intended use case a bit more.

> The ability to add custom tags to JSON output would assist larger organisations.

So what exactly is the use case here, and why aren't the _MACHINE_ID + _BOOT_ID fields enough?
Comment 3 David Strauss 2014-04-03 20:05:04 UTC
Zbigniew, you're right. I misread the systemd-journal-remote man page and thought it did support push.

At my company we use journal2gelf [1] to push messages. Of course, that pushes in GELF format, which is for Logstash aggregation, not journal aggregation. I'd be concerned about the performance implications of push aggregation to the journal right now.

[1] https://github.com/systemd/journal2gelf
Comment 4 Duncan Innes 2014-04-03 21:34:52 UTC
Thanks for keeping this open.  I was confused about which part did the sending and which the receiving.

As for the output formats and extra tags - here goes.

The use case for JSON formatting is to send logs to alternative aggregators (such as Logstash as mentioned in comment #3).  The ability to receive logs in separated format rather than log lines makes it much easier for these systems to parse entries and stick them in whatever database is being used.

The use case for extra tags I would say is similar to Puppet/Foreman hostgroups or classes.  Systems know quite a lot about themselves which the log aggregator is going to have a hard time figuring out.

Client systems know if they are dev, test, uat or production.
Client systems know if they are in the DMZ (potentially)
Database servers know that they are database servers
Web servers know that they are web servers
and so on . . .

If each client can add some tags that provide context to the log entries, searches through the logs become much more useful.

I could search for all IPTABLES denials on my web servers.
I could search for all failed login attempts on my DMZ servers.

Strictly speaking, the log comes from a single machine, but being able to group these machines arbitrarily (as happens naturally on a large estate) will allow an extremely powerful context search on the log database.

Why not get the aggregator/parser/indexer to add these fields?  These machines will not necessarily know all the details that the client might want to add.  The client already knows these details, or can have them set via whatever config management tool is being used.

Overall system loads will also be reduced by clients having a config entry that (for example) hard codes "cluster": "WebApp3" to be added to the log entries rather than having the aggregator performing repeated calculations or lookups on whatever LDAP, node classifier or other method is used.

I don't mean to unduly extend the features of log shipping, but allowing a couple of output formats and some extra fields to be pushed would be a big benefit to large scale system users.  Especially when the first point of inspection of aggregated logs is potentially a script/automated process rather than a SysAdmin.
Comment 5 Duncan Innes 2014-04-03 21:47:10 UTC
Going further, it would be possible to see the use for doing some parsing of log lines on the client.

IPTABLES log entries triggered to parse and populate the fields for IN, OUT, MAC, SRC, DST, PROTO, TTL, DPT, SPT etc. rather than just all on the log message line.
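A sketch of that client-side parsing, assuming the usual KEY=VALUE layout that the kernel's LOG target produces (the regex and function are illustrative, not an existing journald feature):

```python
import re

# iptables LOG lines carry space-separated KEY=VALUE pairs;
# split them into structured fields instead of one message string.
FIELD_RE = re.compile(r"\b([A-Z]+)=(\S*)")

def parse_iptables(line):
    """Return the KEY=VALUE pairs (IN, OUT, SRC, DST, PROTO, SPT, DPT, ...)."""
    return {k: v for k, v in FIELD_RE.findall(line)}

line = ("IPTABLES-DROP: IN=eth0 OUT= MAC=00:11:22:33:44:55 "
        "SRC=203.0.113.9 DST=192.0.2.10 PROTO=TCP SPT=51234 DPT=22 TTL=53")
fields = parse_iptables(line)
print(fields["SRC"], fields["DPT"])
```

With fields like SRC and DPT populated separately, the aggregator can index them directly instead of re-parsing the message line on every query.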

I'm struggling to think of other good examples (been parsing & searching IPTABLES logs all day and it's now late).

Just a thought (perhaps more of a random thought), but I don't think functionality in this direction would go unused either.
Comment 6 Duncan Innes 2014-04-03 21:59:02 UTC
Final question: is there failover/load balancing ability on the cards for the remote sending?

i.e. setting up 2 log destinations, possibly with round robin or plain failover when 1 destination is out of action?
 
Would journald be capable of remembering the last successfully sent entry in the event of all destinations being offline?  Rather than buffering output to disk in the event of network failure, just point to the last sent log entry and restart from there when the destinations become available.
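The resume-from-last-entry idea can be sketched with journal cursors. Everything here is illustrative (the state file name, the simulated journal, the helper names); only the concept of a per-entry __CURSOR field comes from the journal itself:

```python
import os
import tempfile

# Persist the cursor of the last entry the remote side acknowledged,
# and restart from it after an outage; no buffering of entries needed.
STATE_FILE = os.path.join(tempfile.gettempdir(), "upload.cursor")

def save_cursor(cursor):
    with open(STATE_FILE, "w") as f:
        f.write(cursor)

def load_cursor():
    try:
        with open(STATE_FILE) as f:
            return f.read().strip() or None
    except FileNotFoundError:
        return None

def entries_to_send(entries, last_cursor):
    """Yield only the entries after the saved cursor (simulated journal)."""
    seen = last_cursor is None
    for e in entries:
        if seen:
            yield e
        elif e["__CURSOR"] == last_cursor:
            seen = True

journal = [{"__CURSOR": "c%d" % i, "MESSAGE": "msg %d" % i} for i in range(5)]
save_cursor("c2")  # pretend c2 was the last entry delivered before the outage
pending = list(entries_to_send(journal, load_cursor()))
print([e["MESSAGE"] for e in pending])  # entries after c2
```

Because the journal on disk already holds the entries, only the cursor needs to survive the outage.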

Too much for one bugzilla?  Split out into 2 or more?

Duncan
Comment 7 Zbigniew Jedrzejewski-Szmek 2014-04-05 01:09:53 UTC
(In reply to comment #3)
> Zbigniew, you're right. I misread the systemd-journal-remote man page and
> thought it did support push.
The man page could probably use some polishing :)

systemd-journal-remote supports pulling, but this support is rather primitive, and is certainly not enough for sustained transfer of logs.

> At my company we use journal2gelf [1] to push messages. Of course, that
> pushes in GELF format, which is for Logstash aggregation, not journal
> aggregation. I'd be concerned about the performance implications of push
> aggregation to the journal right now.
Journald is fairly slow because it does a lot of /proc trawling for each message. When receiving messages over the network, all possible data is already there, so it should be reasonably fast. I expect HTTP and especially TLS to be the bottlenecks, not the journal writing code. Running benchmarks is on my TODO list.

(In reply to comment #4)
> The use case for JSON formatting is to send logs to alternative aggregators
> (such as Logstash as mentioned in comment #3).  The ability to receive logs
> in separated format rather than log lines makes it much easier for these
> systems to parse entries and stick them in whatever database is being used.
Adding json support to systemd-journal-upload (the sender part, which is
currently unmerged) would probably be quite simple... But for this to be useful,
it has to support whatever protocol the receiver uses. I had a look at the
logstash docs, and it seems that json_lines codec should work. I'm not
sure about the details, but it looks like something that could be added
without too much trouble. Maybe some interested party will write a patch :)
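The json_lines framing mentioned above is simply one JSON object per line, newline-delimited. A sketch of the serialisation step, leaving the transport (TCP/TLS) out; the field names follow journal conventions but the function is hypothetical:

```python
import json

def to_json_lines(entries):
    """Serialize journal entries as newline-delimited JSON (json_lines)."""
    return "".join(json.dumps(e, sort_keys=True) + "\n" for e in entries)

batch = [
    {"MESSAGE": "service started", "PRIORITY": "6", "_HOSTNAME": "web01"},
    {"MESSAGE": "login failed", "PRIORITY": "3", "_HOSTNAME": "web01"},
]
payload = to_json_lines(batch)
print(payload, end="")
```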

> The use case for extra tags I would say is similar to Puppet/Foreman
> hostgroups or classes.  Systems know quite a lot about themselves which the
> log aggregator is going to have a hard time figuring out.
OK. This sounds useful (and easy to implement).

(In reply to comment #6)
> Final question: is there failover/load balancing ability on the cards for
> the remote sending?
So far no.

> i.e. setting up 2 log destinations, possibly with round robin or plain
> failover when 1 destination is out of action?
>  
> Would journald be capable of remembering the last successfully sent entry in
> event of all destinations being offline?  Rather than buffering output to
> disk in event of network failure, just point to the last sent log entry and
> restart from there when the destinations become available.
journald is not directly involved. It's a totally separate program and simply
another journal client. It keeps the cursor of the last successfully sent entry
in a file on disk, and when started, by default, uploads all entries after that
cursor and then new ones as they come in.

> Too much for one bugzilla?  Split out into 2 or more?
No, it's fine.
Comment 8 Duncan Innes 2014-06-10 09:00:38 UTC
Sorry - should have come back to you on your last comment.

Coding is not something I am particularly gifted in, so whilst I'm happy to give it a go, the result will probably be of a lower quality than you'd like.  I'll have a look at the code though.  Any pointers as to where to begin my search?

From the Logstash end, there's a Jira ticket for a journald shipper: https://logstash.jira.com/browse/LOGSTASH-1807

I've commented a few times.  Hopefully there can be some cooperation between these tickets to find a good solution.  My view is that journald should implement the 'global' solution that can push to whoever is listening.  The 3rd party aggregators can then write plugins (if necessary) to listen and pull this data stream into their systems.

Did your code get off your laptop yet?
Comment 9 Cole Gleason 2014-06-19 21:32:31 UTC
I would be really interested in seeing systemd-journal-upload! Is there anything I can do to help get that completed?
Comment 10 Zbigniew Jedrzejewski-Szmek 2014-07-16 03:22:15 UTC
(In reply to comment #8)
> Did your code get off your laptop yet?
I pushed the code to systemd master today (commit http://cgit.freedesktop.org/systemd/systemd/commit/?id=3d090cc6f34e59 and surrounding ones).

(In reply to comment #7)
> > At my company we use journal2gelf [1] to push messages. Of course, that
> > pushes in GELF format, which is for Logstash aggregation, not journal
> > aggregation. I'd be concerned about the performance implications of push
> > aggregation to the journal right now.
> Journald is fairly slow because it does a lot of /proc trawling for each
> message. When receiving messages over the network, all possible data is
> already there, so it should be reasonably fast. I expect HTTP and especially
> TLS to be the bottlenecks, not the journal writing code. Running benchmarks
> is on my TODO list.
Well, I was quite wrong here. It turns out that writing to the journal *is* the slow part. I'll probably publish some benchmarks on the mailing list tomorrow, but, essentially, writing to the journal is the most significant part, followed by TLS overhead. With compression turned on, things were much worse, because XZ compression was very slow. This patchset was delayed because I worked on adding LZ4 compression to the journal, which in turn caused other people to tweak the XZ settings, improving compression speed greatly without significant loss of compression ratio. So in general, things improved on all fronts. With LZ4, compression overhead should be less significant, since its speed is in the 500-1500 MB/s range, depending on the compressibility of the data.
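The speed-versus-ratio trade-off described above can be illustrated with standard-library codecs as stand-ins (LZ4 itself is not in Python's stdlib): zlib plays the fast codec, xz/LZMA the tight-but-slow one. The sample data is synthetic log text, not a real benchmark:

```python
import lzma
import time
import zlib

# Repetitive syslog-like text, which compresses well under any codec.
data = b"Apr  3 15:43:33 web01 sshd[1234]: Failed password for root\n" * 20000

for name, compress in [("zlib", zlib.compress), ("xz", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print("%s: %d bytes in %.1f ms (ratio %.0fx)"
          % (name, len(out), dt * 1000, len(data) / len(out)))
```

On typical log data the tighter codec costs noticeably more CPU time per byte, which is exactly the overhead the LZ4 work was meant to avoid.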

