Describe the bug
Some container logs never reach Elasticsearch (at minimum, the fluent-bit container's own logs are missing), and Fluent Bit emits the following warnings:
[2019/11/05 02:52:03] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:05] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:05] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:27] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:27] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:34] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:34] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:34] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:34] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:35] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:52:35] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:53:02] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
[2019/11/05 02:53:02] [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here
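For context, the nest filter's lift operation only succeeds when the kubernetes key holds a map of metadata; any other shape produces the warning above. Two hypothetical record shapes (field values invented for illustration):

A record that lifts cleanly (kubernetes is a map):
    {"log":"YOUR LOG MESSAGE HERE","kubernetes":{"pod_name":"my-pod","namespace_name":"default"}}

A record that triggers the warning (kubernetes holds a non-map value, e.g. a plain string):
    {"log":"YOUR LOG MESSAGE HERE","kubernetes":"default/my-pod"}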
To Reproduce
{"log":"YOUR LOG MESSAGE HERE","stream":"stdout","time":"2018-06-11T14:37:30.681701731Z"}
Expected behavior
I expect Fluent Bit to forward my container logs to the Elasticsearch output without dropping any records and without spamming these warnings.
Your Environment
[SERVICE]
    Flush             1
    Daemon            Off
    Log_Level         info
    Parsers_File      parsers.conf
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Parser            docker
    Tag               kube.*
    Refresh_Interval  5
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
    DB                /tail-db/tail-containers-state.db
    DB.Sync           Normal
[FILTER]
    Name              kubernetes
    Match             kube.*
    Kube_URL          https://kubernetes.default.svc:443
    Kube_CA_File      /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File   /var/run/secrets/kubernetes.io/serviceaccount/token
[FILTER]
    Name              nest
    Match             kube.*
    Operation         lift
    Nested_under      kubernetes
    Prefix_with       kubernetes_
[FILTER]
    Name              modify
    Match             kube.*
    Remove            stream
[FILTER]
    Name              modify
    Match             kube.*
    Remove            kubernetes_labels
[FILTER]
    Name              modify
    Match             kube.*
    Remove            kubernetes_annotations
[FILTER]
    Name              modify
    Match             kube.*
    Remove            kubernetes_pod_id
[FILTER]
    Name              nest
    Match             kube.*
    Operation         nest
    Wildcard          kubernetes_*
    Nested_under      kubernetes
    Remove_prefix     kubernetes_
[OUTPUT]
    Name              es
    Match             kube.*
    Host              elasticsearch-logging-data.kubesphere-logging-system.svc
    Port              9200
    Logstash_Format   On
    Replace_Dots      On
    Retry_Limit       False
    Type              flb_type
    Time_Key          @timestamp
    Logstash_Prefix   ks-logstash-log
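To make the filter chain easier to follow, here is a hypothetical record at each stage (all field values invented for illustration; the exact key set depends on what the kubernetes filter attaches):

After the kubernetes filter (metadata attached as a map):
    {"log":"YOUR LOG MESSAGE HERE","stream":"stdout","kubernetes":{"pod_name":"my-pod","namespace_name":"default","pod_id":"...","labels":{"app":"my-app"}}}

After the lift (map contents flattened to top-level keys with the kubernetes_ prefix):
    {"log":"YOUR LOG MESSAGE HERE","stream":"stdout","kubernetes_pod_name":"my-pod","kubernetes_namespace_name":"default","kubernetes_pod_id":"...","kubernetes_labels":{"app":"my-app"}}

After the modify filters (stream, kubernetes_labels, kubernetes_annotations, kubernetes_pod_id removed) and the final nest (remaining kubernetes_* keys folded back under kubernetes):
    {"log":"YOUR LOG MESSAGE HERE","kubernetes":{"pod_name":"my-pod","namespace_name":"default"}}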
Additional context
It seems I have a duplicate nest filter in my configuration. I will check and see if I can close this issue. The duplicated filter:
[FILTER]
    Name              nest
    Match             kube.*
    Operation         lift
    Nested_under      kubernetes
    Prefix_with       kubernetes_
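If the lift filter does run twice, the warning follows naturally: the first lift consumes the kubernetes map and leaves only prefixed top-level keys, so when the second, identical lift inspects the record, kubernetes no longer resolves to a map and filter_nest refuses to lift. A sketch (values invented):

After the first lift:
    {"log":"YOUR LOG MESSAGE HERE","kubernetes_pod_name":"my-pod","kubernetes_namespace_name":"default"}

The second lift then finds nothing map-shaped under kubernetes and logs:
    [ warn] [filter_nest] Value of key 'kubernetes' is not a map. Will not attempt to lift from here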
This problem still occurs. I have corrected my configuration and updated it in my comment above.
This could be related to https://github.com/fluent/fluent-bit/issues/1691