Describe the bug
We are sending Kubernetes container logs to Splunk using Fluent Bit, with a Lua script in between that inserts the index name for dynamic indexes.
What we have seen while running this configuration: if we use the tag kube.* everything works fine, but if we use a tag like kube-xyz.* it does not.
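The rename_index.lua script is not included in the report. For context, a Fluent Bit Lua filter callback receives the tag, timestamp, and record, and must return a status code, a timestamp, and a record table; returning anything else produces the "invalid table returned" error shown further below. A minimal sketch of such a callback, assuming a hypothetical naming scheme that derives the index from the pod namespace (consistent with the expected output further down, where namespace grocery-fe maps to index wcnp_grocery-fe):

```lua
-- Sketch only: the actual rename_index.lua is not shown in this issue.
-- A Fluent Bit Lua filter callback must return (code, timestamp, record);
-- code 1 means the record was modified, 0 means keep it as-is.
function cb_replace(tag, timestamp, record)
    local k8s = record["kubernetes"]
    if k8s and k8s["namespace_name"] then
        -- Hypothetical dynamic index scheme: "wcnp_" .. namespace.
        record["index"] = "wcnp_" .. k8s["namespace_name"]
        return 1, timestamp, record
    end
    return 0, timestamp, record
end
```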
fluent-bit.conf:
```
[SERVICE]
    Flush         5
    Daemon        Off
    Log_Level     Debug
    Config_Watch  on
    Parsers_File  parsers.conf
    HTTP_Server   On
    HTTP_Listen   0.0.0.0
    HTTP_Port     2020

[INPUT]
    Name              tail
    Path              /var/log/containers/*grocery-fe*.log
    Tag               kube.*
    Parser            docker
    Refresh_Interval  5
    Mem_Buf_Limit     10MB
    Skip_Long_Lines   On
    DB                /tail-db/tail-containers-state-grocery-fe.db
    DB.Sync           Normal

[FILTER]
    Name                 kubernetes
    Match                kube-customer-exeperience.*
    Kube_URL             https://kubernetes.default.svc:443
    Kube_CA_File         /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File      /var/run/secrets/kubernetes.io/serviceaccount/token
    K8S-Logging.Parser   On
    K8S-Logging.Exclude  On

[FILTER]
    Name    record_modifier
    Match   kube-customer-exeperience.*
    Record  cluster_id southcentralus-dev-afonly
    Record  sourcetype kubernetes

[FILTER]
    Name    lua
    Match   kube-customer-exeperience.*
    script  rename_index.lua
    call    cb_replace

[OUTPUT]
    Name           splunk
    Match          kube-customer-exeperience.*
    Host           xxxxxxxxxx
    Port           8088
    Format         json
    URI            /services/collector/event
    tls            Off
    json_date_key  time
    Splunk_Token   xxxxxxxxxxxxxxxxxxx

[OUTPUT]
    Name   stdout
    Match  kube-customer-exeperience.*
```
parsers.conf:
```
[PARSER]
    Name             docker
    Format           json
    Time_Keep        true
    Time_Key         time
    #Time_Format     %Y-%m-%dT%H:%M:%S.%L
    Decode_Field_As  escaped_utf8 log do_next
    Decode_Field_As  json log

[PARSER]
    Name         parse-audit-apiserver
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
```
The resulting error and debug output:
```
[2019/05/23 12:07:47] [error] [filter_lua] invalid table returned at cb_replace(), /fluent-bit/etc/rename_index.lua
[2019/05/23 12:07:47] [debug] [in_tail] file=/var/log/containers/logs-generator-3_grocery-fe_logs-generator-3-bff537295acb0cf8fe2fb5fad66e4b2811b264c8142d80c4b27ab6f856f0147f.log read=32622 lines=187
[2019/05/23 12:07:47] [debug] [filter_kube] API Server (ns=grocery-fe, pod=e.var.log.containers.logs-generator-3) http_do=0, HTTP Status: 404
[2019/05/23 12:07:47] [debug] [filter_kube] API Server response
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"e.var.log.containers.logs-generator-3\" not found","reason":"NotFound","details":{"name":"e.var.log.containers.logs-generator-3","kind":"pods"},"code":404}
```
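Two distinct failures show up in that output. The [filter_lua] error means cb_replace did not return a valid (code, timestamp, record) triple (compare the sketch above). The 404 is the tag-prefix problem: the kubernetes filter strips Kube_Tag_Prefix (default kube.var.log.containers., 24 characters) off the front of the tag before extracting the pod name, so a longer custom prefix loses the wrong characters. Assuming the failing run tagged records as kube-customer-exeperience.* (as the Match rules suggest), the arithmetic lines up with the log:

```
default Kube_Tag_Prefix (24 chars): kube.var.log.containers.
actual tag:                         kube-customer-exeperience.var.log.containers.logs-generator-3_grocery-fe_...
after stripping 24 chars:           e.var.log.containers.logs-generator-3_grocery-fe_...
pod name parsed up to first "_":    e.var.log.containers.logs-generator-3  -> 404 NotFound
```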
Expected behavior
```
[141] kube.var.log.containers.logs-generator-3_grocery-fe_logs-generator-3-bff537295acb0cf8fe2fb5fad66e4b2811b264c8142d80c4b27ab6f856f0147f.log: [1558613436.438133001, {"index"=>"wcnp_grocery-fe", "sourcetype"=>"kubernetes", "cluster_id"=>"southcentralus-dev-afonly", "log"=>"I0523 11:05:47.233388 9 logs_generator.go:67] 19287353 PUT /api/v1/namespaces/kube-system/pods/3x1 204n", "time"=>"2019-05-23T11:05:47.233429362Z", "kubernetes"=>{"pod_name"=>"logs-generator-3", "pod_id"=>"0561541b-7d47-11e9-a665-000d3a5eda37", "docker_id"=>"bff537295acb0cf8fe2fb5fad66e4b2811b264c8142d80c4b27ab6f856f0147f", "labels"=>{"run"=>"logs-generator-3"}, "host"=>"worke3534000001", "container_name"=>"logs-generator-3", "namespace_name"=>"grocery-fe"}, "stream"=>"stderr"}]
```
Just to say I've seen something similar where I used tags that are not kube.* and things stopped working.
When I reverted back to kube.*, tag matching and expansion worked again.
You likely have two problems:
1. For tagging and the kubernetes filter setup, please refer to these upgrade notes: https://docs.fluentbit.io/manual/installation/upgrade_notes
2. Make sure you are using the latest Fluent Bit version available.
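In short, the relevant change from those notes is that the kubernetes filter no longer assumes the hardcoded kube. tag prefix: any custom tail tag needs a Kube_Tag_Prefix that matches it. A minimal sketch, using kube-xyz as a stand-in for any custom prefix (the path is illustrative):

```
[INPUT]
    Name  tail
    # tail expands the * with the file path, replacing "/" with "."
    Tag   kube-xyz.*
    Path  /var/log/containers/*.log

[FILTER]
    Name             kubernetes
    Match            kube-xyz.*
    # Must cover everything in the expanded tag before the file name:
    Kube_Tag_Prefix  kube-xyz.var.log.containers.
```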
@kiich Same here: the newest version breaks something when you're not using kube.*.
Same here. Very simple config for testing fluent-bit (actually using fluentd, but wanting to migrate) and it breaks just as @AkshayDubey29 describes.
fluent-bit.conf
```
[SERVICE]
    Flush         1
    Log_Level     debug
    Daemon        off
    Parsers_File  parsers.conf
    HTTP_Server   On
    HTTP_Listen   0.0.0.0
    HTTP_Port     2020

@INCLUDE input-kubernetes.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE output-elasticsearch.conf
```
input-kubernetes.conf
```
[INPUT]
    Name              tail
    Tag               kube.http-show-headers.*
    Path              /var/log/containers/http-show-headers*.log
    Parser            docker
    DB                /var/log/flb_http-show-headers.db
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   Off
    Refresh_Interval  10
```
filter-kubernetes.conf
```
[FILTER]
    Name                 kubernetes
    Match                kube.*
    Kube_URL             https://kubernetes.default.svc:443
    Kube_CA_File         /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File      /var/run/secrets/kubernetes.io/serviceaccount/token
    Kube_Tag_Prefix      kube.var.log.containers.
    Merge_Log            On
    Merge_Log_Key        log_processed
    K8S-Logging.Parser   On
    K8S-Logging.Exclude  Off
```
output-elasticsearch.conf
```
[OUTPUT]
    Name             es
    Match            kube.*
    Host             elasticsearch.logs.svc.cluster.local.
    Port             9200
    Logstash_Format  On
    Replace_Dots     On
    Retry_Limit      False
```
parsers.conf
```
[PARSER]
    Name         json
    Format       json
    Time_Key     time
    Time_Format  %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L
    Time_Keep    On
```
And the error:
```
[2019/09/26 15:39:20] [debug] [in_tail] file=/var/log/containers/http-show-headers-8596467c4d-w4jlj_logs_python-c8b2255d9713eaed4a5bc046538e1a7fd443bac7c3a881464e50d149e2ebf54e.log read=138 lines=1
[2019/09/26 15:39:20] [debug] [task] created task=0x7fe2a4852780 id=2 OK
[2019/09/26 15:39:21] [debug] [out_es] HTTP Status=200 URI=/_bulk
[2019/09/26 15:39:21] [debug] [retry] new retry created for task_id=2 attemps=1
[2019/09/26 15:39:21] [debug] [sched] retry=0x7fe2a480a910 2 in 10 seconds
[2019/09/26 15:39:23] [debug] [out_es] HTTP Status=200 URI=/_bulk
[2019/09/26 15:39:23] [debug] [retry] re-using retry for task_id=0 attemps=2
[2019/09/26 15:39:23] [debug] [sched] retry=0x7fe2a480ac58 0 in 29 seconds
[2019/09/26 15:39:28] [debug] [in_tail] file=/var/log/containers/http-show-headers-8596467c4d-w4jlj_logs_python-8b2255d9713eaed4a5bc046538e1a7fd443bac7c3a881464e50d149e2ebf54e.log event
[2019/09/26 15:39:28] [debug] [filter_kube] API Server (ns=logs, pod=ar.log.containers.http-show-headers-8596467c4d-w4jlj) http_do=0, HTTP Status: 404
[2019/09/26 15:39:28] [debug] [filter_kube] API Server response
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"ar.log.containers.http-show-headers-8596467c4d-w4jlj\" not found","reason":"NotFound","details":{"name":"ar.log.containers.http-show-headers-8596467c4d-w4jlj","kind":"pods"},"code":404}
```
@javipolo
Note that your config has a mismatch between the tag prefix set in tail and the Kube_Tag_Prefix set in the kubernetes filter. The filter strips Kube_Tag_Prefix from the tag to recover the pod name, which is why your debug log shows `pod=ar.log.containers.http-show-headers-8596467c4d-w4jlj`. E.g.:

tail:
```
Tag              kube.http-show-headers.*
```
filter_kubernetes:
```
Kube_Tag_Prefix  kube.var.log.containers.
```
Your Kube_Tag_Prefix should be:
```
Kube_Tag_Prefix  kube.http-show-headers.var.log.containers.
```
please refer to the upgrade notes: https://docs.fluentbit.io/manual/installation/upgrade_notes#kubernetes-filter-1
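Applied to the config above, a sketch of the corrected filter-kubernetes.conf (everything else unchanged):

```
[FILTER]
    Name                 kubernetes
    Match                kube.*
    Kube_URL             https://kubernetes.default.svc:443
    Kube_CA_File         /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File      /var/run/secrets/kubernetes.io/serviceaccount/token
    # Aligned with "Tag kube.http-show-headers.*" in the tail input:
    Kube_Tag_Prefix      kube.http-show-headers.var.log.containers.
    Merge_Log            On
    Merge_Log_Key        log_processed
    K8S-Logging.Parser   On
    K8S-Logging.Exclude  Off
```

With that prefix, stripping it from the expanded tag leaves just the file name, and the pod name resolves correctly.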
I am closing this ticket since the main issue is solved.
If you still have an issue with the setup, let's follow up in a new ticket.