**Describe the bug**
When using the Loki output plugin together with the Tail input plugin and the Kubernetes filter, we get an error whenever `auto_kubernetes_labels` is on. We know Loki itself is working and that we can ship logs to it, since we have a working Promtail deployment that we're trying to replace with this setup.
**To Reproduce**
Using the following configuration on a Kubernetes DaemonSet:
```
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Parser            docker
    Tag               kube.*
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
    Docker_Mode       On
    Docker_Mode_Flush 4

[FILTER]
    Name                kubernetes
    Match               kube.*
    Merge_Log           On
    Merge_Log_Key       log_processed
    Keep_Log            Off
    K8S-Logging.Parser  On
    K8S-Logging.Exclude On

[OUTPUT]
    Name                   loki
    Match                  kube.*
    Host                   loki.logging.svc
    Port                   3100
    Labels                 test=aaa
    auto_kubernetes_labels on
```
The error we're getting:

```
2020-11-20T12:39:52.544338589Z [2020/11/20 12:39:52] [ warn] [engine] failed to flush chunk '1-1605875992.200309438.flb', retry in 9 seconds: task_id=0, input=tail.0 > output=loki.1
2020-11-20T12:39:53.539678059Z [2020/11/20 12:39:53] [error] [output:loki:loki.1] loki.logging.svc:3100, HTTP status=400
2020-11-20T12:39:53.539713222Z error parsing labels: parse error at line 1, col 50: syntax error: unexpected ., expecting = or =~ or !~ or !=
```
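The `unexpected .` in the 400 response suggests the cause: Loki label names, like Prometheus label names, must match `[a-zA-Z_][a-zA-Z0-9_]*`, while Kubernetes label keys (e.g. `app.kubernetes.io/name`) routinely contain `.` and `/`, which Loki's label parser rejects. A minimal sketch of the problem and one plausible normalization (replacing invalid characters with `_` — the key names here are illustrative, not taken from our cluster):

```python
import re

# Loki/Prometheus label names must match this pattern; Kubernetes label
# keys such as "app.kubernetes.io/name" contain '.' and '/', which is
# what triggers the "unexpected ." parse error on the Loki side.
VALID_LABEL = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$')

def sanitize_label(key: str) -> str:
    """Replace every character that is invalid in a Loki label name
    with an underscore (one plausible normalization; the actual fix in
    Fluent Bit may differ in detail)."""
    return re.sub(r'[^a-zA-Z0-9_]', '_', key)

for key in ("app.kubernetes.io/name", "pod-template-hash", "team"):
    print(f"{key!r}: valid={bool(VALID_LABEL.match(key))} "
          f"-> {sanitize_label(key)!r}")
# 'app.kubernetes.io/name': valid=False -> 'app_kubernetes_io_name'
# 'pod-template-hash': valid=False -> 'pod_template_hash'
# 'team': valid=True -> 'team'
```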
**Expected behavior**
Logs should be shipped to Loki correctly.

**Screenshots**
n/a

**Your Environment**

**Additional context**
I've followed the guide, which is well written; I just need to figure out why this isn't working as expected.
Fixed in f34df48f361adab95455aa7ee52ae356d8010b7a
The fix will be part of the next v1.6.5 release.
Thanks @edsiper for the super quick fix! Do you have a rough idea when v1.6.5 will be released?
For anyone else with this issue: @edsiper has released v1.6.5 today.
thanks everyone!
I think some features might still need to be polished, but please share your feedback!
Oh, we still have the same issue with v1.6.6.
We've got _v1.6.6_ working well.