Beats: filebeat error and excessive memory usage with autodiscover

Created on 14 Nov 2018 · 16 comments · Source: elastic/beats

This is a new ticket based on the closed but ongoing issue noted here (https://github.com/elastic/beats/issues/6503). Filebeat logs show several errors, and memory usage grows until the pod is shut down by Kubernetes. I am using filebeat v6.4.3.

filebeat logs show the following errors:
2018-11-14T16:50:43.002Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF
2018-11-14T16:50:43.004Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error proto: wrong wireType = 6 for field ServiceAccountName

My configuration:

    setup.template.enabled: true
    setup.dashboards.enabled: false

    #Kubernetes AutoDiscover
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:

            #JSON LOGS
            - condition:
                equals:
                  kubernetes.labels.json_logs: "true"
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  json.keys_under_root: true
                  json.add_error_key: true
                  processors:
                    - add_kubernetes_metadata:
                        in_cluster: true
                    - drop_fields:
                        fields: ["OriginContentSize","Overhead","BackendURL.Fragment","BackendURL.Scheme","ClientUsername","source","request_Cookie","request_Proxy-Authenticate","downstream_X-Authentication-Jwt","downstream_Set-Cookie","origin_Set-Cookie","downstream_Cache-Control"]

            #Non-JSON logs
            - condition:
                not:
                  equals:
                    kubernetes.labels.json_logs: "true"
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  processors:
                    - add_kubernetes_metadata:
                        in_cluster: true

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['<ELASTICSEARCH>:443']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    tags: ["<ENV>"]
    logging.level: warning
    logging.json: false
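For reference, the JSON template above only matches pods that carry the `json_logs: "true"` label. A minimal sketch of pod metadata that would match it (the pod name and image are just examples, not from my cluster):

    # Example pod -- only the json_logs label is what the autodiscover
    # condition above checks; everything else is illustrative.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-json-app
      labels:
        json_logs: "true"
    spec:
      containers:
        - name: app
          image: example/json-logging-app:latest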

All 16 comments

Thanks for opening this @bwunderlich824, could you share a broader log? I'm particularly interested in other messages, errors and timings.

Best regards

Not a lot else in there besides what was posted above.
logs.txt

Same issue running filebeat v6.5.1. Memory is OK when the pods first restart, but after a while I see the same issue on every filebeat pod in the daemonset. k8s version is 1.8.10. I'll enable debug and post when it happens again.
2018-12-03T23:00:46.927Z WARN [cfgwarn] kubernetes/kubernetes.go:51 BETA: The kubernetes autodiscover is beta
2018-12-03T23:00:46.938Z WARN [cfgwarn] hints/logs.go:56 BETA: The hints builder is beta
2018-12-03T23:38:02.665Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF
2018-12-04T00:34:28.544Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF

This seems to be the sequence every time the Watching API error hits:

2018-12-04T22:06:43.754Z        ERROR   kubernetes/watcher.go:254       kubernetes: Watching API error EOF
2018-12-04T22:06:43.754Z        INFO    kubernetes/watcher.go:238       kubernetes: Watching API for resource events
2018-12-04T22:06:43.758Z        INFO    input/input.go:149      input ticker stopped
2018-12-04T22:06:43.758Z        INFO    input/input.go:167      Stopping Input: 7990041433892801910
2018-12-04T22:06:43.758Z        INFO    log/harvester.go:275    Reader was closed: /var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268-json.log. Closing.
2018-12-04T22:06:43.758Z        ERROR   [autodiscover]  cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:7870214-51713 Finished:false Fileinfo:0xc42045b1e0 Source:/var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268-json.log Offset:230605 Timestamp:2018-12-04 22:06:20.083047508 +0000 UTC m=+4508.847580429 TTL:-1ns Type:docker Meta:map[] FileStateOS:7870214-51713}
2018-12-04T22:06:43.759Z        INFO    log/input.go:138        Configured paths: [/var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/*.log]
2018-12-04T22:06:43.759Z        INFO    input/input.go:114      Starting input of type: docker; ID: 7990041433892801910
2018-12-04T22:06:53.760Z        INFO    log/harvester.go:254    Harvester started for file: /var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268-json.log
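For context, the `Configured paths` line is the docker input that autodiscover renders for that container after substituting `${data.kubernetes.container.id}`; it roughly corresponds to a config like this (a sketch only, with the container id taken from the log above, not a dump of the actual generated config):

    # Approximate rendered input for the container id seen in the log above.
    - type: docker
      containers.ids:
        - "8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268"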

The issue still persists with filebeat 6.5.3.

The issue still persists with filebeat 6.5.4.

Hi, we did a major refactor for 6.6: https://github.com/elastic/beats/pull/8851 and I think it should help here. Any chance you can give the snapshot image a try?

I pushed it to exekias/filebeat:6.6-snapshot. Please take into account that this is a yet-unreleased version and it's not meant for production.
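If you run filebeat as a DaemonSet, swapping the image should be enough to test it; only the `image` value matters here, the surrounding fields are just an example:

    # Example DaemonSet fragment -- the container name and structure are
    # illustrative; only the image value is the snapshot to test.
    spec:
      template:
        spec:
          containers:
            - name: filebeat
              image: exekias/filebeat:6.6-snapshot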

I pushed that version out to our development environment and I've had the same issue: the kubernetes pod just keeps consuming more memory until it reboots. Same logs as before.

logs.txt

We are having this problem with filebeat 6.4.2

2019-01-31T03:04:11.614Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF

I see a similar issue on my kubernetes clusters, filebeat will continue to use memory until exhausted, logging the messages described by @gamer22026.

It's a fairly linear leak; I don't see any huge steps in usage:

(graph: filebeat pod memory usage rising steadily over time)

@cnelson Which version of filebeat are you using?

I just tried with the latest 6.6.0 and have the same issues. The pods are each allowed up to 768 MB of memory, an enormous amount, and they still run out. If it helps, I'm a paying hosted Elasticsearch customer and this issue has been going on for months. Is there anything else I can do to get you to look into this further? It's getting really old.

(screenshot, 2019-02-11: filebeat pod memory usage hitting the limit)
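For reference, that limit comes from a resources block like this on the filebeat container (the requests value here is just an example):

    # Memory limit described above; the requests value is illustrative.
    resources:
      limits:
        memory: 768Mi
      requests:
        memory: 256Mi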

The issue persists with 7.0.0-alpha2 too.

I'm having the same issue, with the same messages being logged by filebeat, plus some others.

I've posted more details on this other issue here: https://github.com/elastic/beats/issues/9302#issuecomment-471120645

Upgraded to v7.0.1 and still having the same issues.

This should now be fixed; more details can be found here: #9302
