Beats: Filebeat stops harvesting logs

Created on 11 Sep 2019  ·  9 Comments  ·  Source: elastic/beats

  1. Filebeat harvested logs correctly
  2. Restarted Logstash
  3. New logs appeared in Kibana
  4. After some time, logs were no longer being harvested from one docker container, even though new log entries were still being written to its file

Note that Logstash correctly receives logs from other containers harvested by Filebeat.
The Filebeat harvester for this container did not recover even after a few hours.

Filebeat configuration:

output.logstash:
  hosts: ["logstash:5044"]
setup.kibana.host: http://kibana:5601
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
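
For reference, with hints.enabled and no explicit hints.default_config, the docker provider should fall back to a container input over each container's JSON log. Spelled out, that default is roughly the following (a sketch based on the hints-based autodiscover docs; the exact variable name and path may vary by Filebeat version):

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/lib/docker/containers/${data.docker.container.id}/*.log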

Docker compose service:

web:
    image: app:latest
    expose:
      - 80
    labels:
      co.elastic.logs/multiline.pattern: '^[[:space:]]+(at|\\.{3})\\b|^Caused by:'
      co.elastic.logs/multiline.negate: false
      co.elastic.logs/multiline.match: after
      app: web
    environment:
      ELASTIC_APM_SERVER_URLS: http://apm:8200
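
Assuming the hints are applied, the co.elastic.logs/multiline.* labels above should surface as multiline settings on the container input that autodiscover generates, roughly like this (a sketch only; the path variable is resolved by autodiscover, and the pattern is copied verbatim from the compose file):

- type: container
  paths:
    - /var/lib/docker/containers/${data.docker.container.id}/*.log
  multiline.pattern: '^[[:space:]]+(at|\\.{3})\\b|^Caused by:'
  multiline.negate: false
  multiline.match: after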

My docker container that is not being harvested has id: 3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c

This container was also restarted at 2019-09-11 15:19:03,983.
The last log entry correctly processed has timestamp 2019-09-11 15:19:37,235.
The next log entry, with timestamp 2019-09-11 15:20:15,276, and all subsequent entries are missing.

Relevant Filebeat log:

2019-09-11T15:19:57.988Z        INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":28150,"time":{"ms":120}},"total":{"ticks":54510,"time":{"ms":150},"value":54510},"user":{"ticks":26360,"time":{"ms":30}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":13},"info":{"ephemeral_id":"17c964bc-009e-4840-89b6-4b9c511cdf14","uptime":{"ms":15900511}},"memstats":{"gc_next":8939216,"memory_alloc":4663776,"memory_total":6966547888},"runtime":{"goroutines":93}},"filebeat":{"events":{"active":-2,"added":114,"done":116},"harvester":{"open_files":5,"running":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":116,"batches":13,"total":116},"read":{"bytes":78},"write":{"bytes":31559}},"pipeline":{"clients":8,"events":{"active":0,"published":114,"total":114},"queue":{"acked":116}}},"registrar":{"states":{"current":10,"update":116},"writes":{"success":13,"total":13}},"system":{"load":{"1":1.19,"15":0.58,"5":0.86,"norm":{"1":0.2975,"15":0.145,"5":0.215}}}}}}
2019-09-11T15:20:04.554Z        INFO    input/input.go:149      input ticker stopped
2019-09-11T15:20:04.554Z        INFO    input/input.go:167      Stopping Input: 9894227653855818847
2019-09-11T15:20:04.554Z        INFO    log/harvester.go:274    Reader was closed: /var/lib/docker/containers/3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c/3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c-json.log. Closing.
2019-09-11T15:20:27.995Z        INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":28160,"time":{"ms":10}},"total":{"ticks":54610,"time":{"ms":100},"value":54610},"user":{"ticks":26450,"time":{"ms":90}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"17c964bc-009e-4840-89b6-4b9c511cdf14","uptime":{"ms":15930517}},"memstats":{"gc_next":5878912,"memory_alloc":7557664,"memory_total":6986384152,"rss":-2998272},"runtime":{"goroutines":82}},"filebeat":{"events":{"added":31,"done":31},"harvester":{"closed":1,"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":30,"batches":12,"total":30},"read":{"bytes":72},"write":{"bytes":17806}},"pipeline":{"clients":7,"events":{"active":0,"filtered":1,"published":30,"total":31},"queue":{"acked":30}}},"registrar":{"states":{"current":10,"update":31},"writes":{"success":13,"total":13}},"system":{"load":{"1":1.24,"15":0.6,"5":0.89,"norm":{"1":0.31,"15":0.15,"5":0.2225}}}}}}

Logstash restart timeline log:

[2019-09-11T15:16:36,298][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
..................
[2019-09-11T15:16:45,397][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Filebeat registry entry:

  {
    "source": "/var/lib/docker/containers/3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c/3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c-json.log",
    "offset": 794342,
    "timestamp": "2019-09-11T15:20:04.5542604Z",
    "ttl": -1,
    "type": "container",
    "meta": null,
    "FileStateOS": {
      "inode": 802588,
      "device": 2049
    }
  }

Log file stats:

sh-4.2# stat 3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c-json.log
  File: '3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c-json.log'
  Size: 911932          Blocks: 1784       IO Block: 4096   regular file
Device: 801h/2049d      Inode: 802588      Links: 1
Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-09-11 17:39:48.711393000 +0000
Modify: 2019-09-11 17:19:37.112086700 +0000
Change: 2019-09-11 17:19:37.112086700 +0000
 Birth: -
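
To confirm that the file kept growing past the point where Filebeat stopped reading, the registry offset can be compared with the current file size. A sketch with jq and stat, assuming the usual 7.x registry location inside the Filebeat container (adjust the path to your data directory):

CID=3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c
LOG=/var/lib/docker/containers/$CID/$CID-json.log
# last offset Filebeat recorded for this file
jq --arg src "$LOG" '.[] | select(.source == $src) | .offset' \
    /usr/share/filebeat/data/registry/filebeat/data.json
# current size of the file on disk
stat -c '%s' "$LOG"

Here the recorded offset (794342) is well below the current size (911932), so data was written after the harvester was closed.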

I hope I've provided enough details to investigate this problem.


Elastic Stack 7.3.0

Labels: Platforms, bug

All 9 comments

After restarting the filebeat docker service, logs are populated correctly.
Filebeat should have harvested the logs without needing a restart.

I have the same issue on versions from 7.2 through 7.6.2. I have 3 similar containers with filebeat in Nomad. One of them has a high load (~5000 messages per minute), and it gets stuck once a day. Rebooting the filebeat container helps.

I am also facing this problem with filebeat 7.0.1; it runs and harvests for a while and eventually stops with those messages.

And I am also facing the same problem with filebeat 7.7.1. It harvested logs after a restart, but only once.

I'm facing the same issue since 7.6.1, then on 7.8.1, 7.8.2-SNAPSHOT, and now with 7.9.0, even after a lot of PRs trying to fix it.

My issue is related to k8s autodiscover.
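
For context, a minimal hints-based kubernetes autodiscover setup (a generic sketch of the documented defaults, not this commenter's actual configuration, which is requested further down) looks roughly like:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log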

Happens to me as well (7.9.0, docker autodiscover). Filebeat closes and reopens the missing container's log file several times due to inactivity and then simply stops watching the file after a while.

Just curious, do you guys have log rotation enabled for docker (I do...)?

$ cat /etc/docker/daemon.json
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
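
One way to check whether rotation is involved (assuming the json-file driver with the options above) is to list the files next to the container log and compare their inodes with the one recorded in the Filebeat registry. Using the container id from the original report as an example:

CID=3252b7646a23293b6728941769f0412e2bd4b74b801ee09ab747c7cdfa74550c
# rotated copies show up as <id>-json.log.1, .2, ...; -i prints each file's inode
ls -li /var/lib/docker/containers/$CID/${CID}-json.log*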

Pinging @elastic/integrations-platforms (Team:Platforms)

The original issue reported in 7.3.0, where inputs were incorrectly stopped on container restarts, was fixed by #13127 in 7.4.0.

@pauvos could you share the autodiscover configuration you are using?
