When journalbeat logs to the journal, journalbeat can consume its own logs, which can lead to a feedback loop.
For instance, journalbeat logs "journalbeat successfully published 1 events", which is then read back in by journalbeat, logged about again, and so on.
Version: 6.6.1 (tarball)
Operating System: Ubuntu 16.04
#...
ExecStart=/opt/journalbeat/current/journalbeat run -e
StandardOutput=journal
StandardError=journal
#...
#...
journalbeat.inputs:
- paths: []
#...
# drop log lines from journalbeat using a processor
processors:
  - drop_event:
      when:
        equals:
          process.name: "journalbeat"
Mar 08 14:15:56 somehostname journalbeat[10995]: 2019-03-08T14:15:56.833Z INFO [input] input/input.go:133 journalbeat successfully published 1 events {"id": "2d121d9b-6458-40ef-a9b5-f642c0218916"}
The drop_event processor prevents these events from being sent to Logstash or Elasticsearch, but they still fill up the journal itself.

My workaround was to remove the -e flag and the journal logging for stderr/stdout:

#...
ExecStart=/opt/journalbeat/current/journalbeat run
#StandardOutput=journal
#StandardError=journal
#...
Alternatives I considered:

- journalbeat.inputs > include_matches to filter the input. Presumably log lines filtered at the input stage won't be logged about (compare to log events dropped by a processor).
- journalbeat.inputs > paths combined with journal config changes to filter which journal files/directories are used as input (the idea being that you could configure journalbeat to read from the journal but log to a different journal/file, which could then be excluded from the input).
- An exclude_matches option that matches the journalbeat unit below a certain severity (i.e. exclude everything from journalbeat that isn't WARN or ERROR). However, if journalbeat errors while trying to send logs to Logstash and then logs that error, the error is presumably "severe", so journalbeat would try to send it, producing another error and another loop condition.

In my case I stopped journalbeat from logging to the journal to avoid this completely. However, that decision means that if journalbeat is erroring, it has to be detected through other monitoring, since journalbeat's own logs aren't collected. If an error condition is detected (for instance, absence of log data from a given host for more than X time), someone has to go look at the journalbeat logs on that box.
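A rough sketch of the include_matches idea (the unit names here are illustrative, and I'm assuming the translated "systemd.unit=..." match syntax; note that because it is an allow-list rather than an exclude-list, filtering out journalbeat this way means enumerating every unit you do want to collect):

```yaml
journalbeat.inputs:
  - paths: []
    # Only journal entries matching these fields enter the pipeline,
    # so journalbeat's own entries are filtered before any logging about them.
    # Illustrative units only; an allow-list must name everything you want.
    include_matches:
      - "systemd.unit=nginx.service"
      - "systemd.unit=sshd.service"
```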
This issue is due to the logging configuration of Beats on systemd. See more on setting logging: https://www.elastic.co/guide/en/beats/journalbeat/master/running-with-systemd.html#_customize_systemd_unit_for_journalbeat
Any progress here?
This just comes with insane defaults out of the box, imho. I also ran into the same issue of journalbeat forwarding its own log messages.
My workaround is basically the following:
logging.level: warning
processors:
  - drop_event:
      when:
        equals:
          systemd.unit: "journalbeat.service"
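Another way to break the loop entirely, sketched using the standard Beats file-logging options (the path and rotation values below are illustrative): have journalbeat write its own logs to rotated files instead of stderr/the journal, so they never become input.

```yaml
logging.level: warning
# Write journalbeat's own logs to files instead of stderr/journal,
# so they cannot be re-read as journal input.
logging.to_files: true
logging.files:
  path: /var/log/journalbeat  # illustrative path
  name: journalbeat
  keepfiles: 7
```

The trade-off is the same one noted above: journalbeat's own errors are no longer collected centrally and need separate monitoring.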
I would love to see the exclude_matches option suggested by Matt Probert.