Fluent-bit: Duplicate @timestamp fields in elasticsearch output

Created on 11 Jun 2018  ·  24 Comments  ·  Source: fluent/fluent-bit

I am trying to replace my fluentd installation in kubernetes with fluent-bit 0.13.3 but ran into an issue. We currently have the standard setup:

[INPUT]
        Name             tail
        Path             /var/log/containers/*.log
        Parser           docker
        Tag              kube.*
        Refresh_Interval 5
        Mem_Buf_Limit    5MB
        Skip_Long_Lines  On
[FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Merge_JSON_Log      On
[OUTPUT]
        Name  es
        Match *
        Host  HOST
        Port  9200
        Logstash_Format On
        Retry_Limit False
        Type  flb_type

The problem is that some of the log messages from services are json encoded and also include a @timestamp field. This then causes some errors:

[2018/06/11 15:22:49] [ warn] [out_es] Elasticsearch error
{"took":78,"errors":true,"items":[{"index":{"_index":"logstash_test-2018.06.11","_type":"flb_type","_id":"ZPhx72MBChql05IASc5e","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":142,"_primary_term":1,"status":201}},{"index":{"_index":"logstash_test-2018.06.11","_type":"flb_type","_id":"Zfhx72MBChql05IASc5e","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Duplicate field '@timestamp'\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@6dda49d4; line: 1, column: 509]"}}}},{"index":{"_index":"logstash_test-2018.06.11","_type":"flb_type","_id":"Zvhx72MBChql05IASc5e","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"i_o_exception","reason":"Duplicate field '@timestamp'\n at [Source: org.elasticsearch.common.bytes.BytesReferenc

I tried to use Merge_JSON_Key to mitigate this, but the option seems to be disabled in the source code (without this being mentioned in the docs; it took me some time to figure out why it did not work ;-)). In my opinion, Merge_JSON_Log should overwrite existing keys instead of producing duplicate keys.
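For context on why Elasticsearch complains while the payload "looks" fine: duplicate keys are not a JSON syntax error, and lenient parsers simply keep one of the values, but Elasticsearch's strict parser rejects the document. A quick illustration in Python (the sample record below is made up):

```python
import json

# A merged record that ends up with "@timestamp" twice: once from the
# application's own JSON log line and once added by the es output.
line = ('{"@timestamp": "2018-06-11T15:22:49Z", '
        '"msg": "hello", '
        '"@timestamp": "2018-06-11T15:22:50Z"}')

# Python's json module accepts the duplicate and keeps the last value...
doc = json.loads(line)
print(doc["@timestamp"])  # -> 2018-06-11T15:22:50Z

# ...whereas Elasticsearch's parser raises
# "json_parse_exception: Duplicate field '@timestamp'" and returns 400.
```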

Labels: fixed, question


All 24 comments

I have the same issue. Kibana, for example, produces logs containing @timestamp fields. I was able to fix my own applications by renaming the timestamp field.

I'm also having the same issue. Has anyone found a workaround yet?

To avoid that duplicate field you can set an alternative name for the time field (Time_Key):

https://fluentbit.io/documentation/0.13/output/elasticsearch.html

let me know if that fixes the issue
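Applied to the original poster's output section, that would look roughly like this (fb_time is just an example name; any key that does not collide with the application's own fields works):

    [OUTPUT]
        Name            es
        Match           *
        Host            HOST
        Port            9200
        Logstash_Format On
        Retry_Limit     False
        Type            flb_type
        # Rename the key the es output adds, so it no longer collides
        # with an @timestamp field already present in the record
        Time_Key        fb_time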

that should do it, thanks :)

Having to rename it is patchwork, so, think about providing better defaults instead.

@edsiper I would reopen this.

I have this config:

    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Merge_Log           On
        K8S-Logging.Parser  On

    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Logstash_Prefix ${FLUENT_ELASTICSEARCH_PREFIX}
        Time_Key        es_time
        Retry_Limit     False
        Generate_ID     On

and keep getting errors like this one:

{"took":0,"errors":true,"items":[{"index":{"_index":"gslogs-2018.08.28","_type":"flb_type","_id":"0f921680-ccbc-0ba9-f08b-56a14e42386a","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Duplicate field 'time'\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@64099158; line: 1, column: 651]"}}}},{"index":{"_index":"gslogs-2018.08.28","_type":"flb_type","_id":"a75811af-ebc7-17a3-02a7-cc167a31a90a","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Duplicate field 'time'\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@632e4f1b; line: 1, column: 668]"}}}}]}

I have this issue with a build from git (latest). I'm not sure what a workaround would be.

I am using it via the helm chart in Kubernetes, with mergeJSONLog: true enabled, and an apache2 annotation on one of the pods. Since it's doing a tail -f on the Docker logs, they are first parsed as docker and then parsed as apache2; this causes a duplicate.

[2018/09/13 01:20:30] [ warn] [out_es] Elasticsearch error
{"took":6,"errors":true,"items":[{"index":{"_index":"logstash-2018.09.12","_type":"flb_type","_id":"T7KD0GUB5MRKjjOx59E0","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Duplicate field '@timestamp'\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@7aea1496; line: 1, column: 708]"}}}},{"index":{"_index":"logstash-2018.09.12","_type":"flb_type","_id":"ULKD0GUB5MRKjjOx59E0","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Duplicate field '@timestamp'\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@2873f968; line: 1, column: 708]"}}}},{"index":{"_index":"logstash-2018.09.12","_type":"flb_type","_id":"UbKD0GUB5MRKjjOx59E0","status":400,"error":{"type":"mapper_parsing_exception","reason"

I used

    [FILTER]
        Name record_modifier
        Match *
        Remove_key @timestamp

👋 late to the party, but here we are 😂

For those using the helm chart, the below works with chart version 1.0.0:

  backend:
    type: es
    es:
      time_key: '@ts'

As you can see, @timestamp is still present, but that's a different issue.

(screenshot: screen shot 2018-12-04 at 8 09 25 pm)

@lxfontes this only solves the timestamp problem for me, but I still get duplicate "time" fields. Why do we actually need this?

Ok, so for me the fix was this setting for the Kubernetes chart:

  filter:
    mergeJSONLog: false

As now it will no longer try to merge keys. I guess this is a bug in fluent-bit, as the expected behavior should be that the merge MUST NOT append a field that already exists.

Isn't it more convenient to change the behavior of the Elasticsearch plugin so that it won't append the @timestamp key (or Time_Key in general) if it already exists?

This needs to be reopened.

Consider that the official Logstash formatter for log4j, https://github.com/logstash/log4j-jsonevent-layout, is going to output @timestamp to standard out. So a developer encountering this bug is going to wonder why their Logstash-format JSON is considered invalid by something that describes itself as Logstash compatible.

I am facing the same issue when generating logs in JSON format with the standard plugin https://github.com/logstash/log4j-jsonevent-layout. I agree with @wirehead's comment.

JSON with duplicate key names is not technically invalid, but it makes sense that Elasticsearch restricts it. Your workarounds:

  • If you are in Kubernetes, use the option Merge_Log_Key in your Kubernetes filter, so your unpacked data will be under that new key, avoiding duplicates if any:

https://docs.fluentbit.io/manual/filter/kubernetes

  • if you are ingesting straight to Elasticsearch, just change the name of the key that holds the timestamp with the option: Time_Key:

https://docs.fluentbit.io/manual/output/elasticsearch

Now, if I implement a kind of "sanitizer" option, take into account that it will affect performance. The options above should work; if they don't, please let me know.
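Sketches of the two suggestions (the key names log_processed and es_time are example values, not defaults):

    # 1) Kubernetes filter: nest the decoded JSON under its own key,
    #    so it cannot collide with top-level fields like @timestamp
    [FILTER]
        Name           kubernetes
        Match          kube.*
        Merge_Log      On
        Merge_Log_Key  log_processed

    # 2) es output: rename the timestamp key Fluent Bit adds
    [OUTPUT]
        Name            es
        Match           *
        Logstash_Format On
        Time_Key        es_time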

The 2nd option sounded cleaner to me and worked well too.

I think this needs to be reopened.

Had to try all options to make it work with Kibana discovery + logs:

  • changing Time_Key breaks features in Kibana that expect @timestamp and doesn't solve duplicates in other fields (like time)
  • turning off merging with mergeJSONLog: false makes all my application logs (Serilog) unusable JSON

The only thing that worked for me is adding a prefix to merged fields:
mergeLogKey: "appLog"
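In the stable/fluent-bit chart values that setting would sit under filter: (a sketch, based only on the chart options mentioned in this thread):

  filter:
    mergeJSONLog: true
    # Prefix merged fields (e.g. appLog.time) so they no longer collide
    # with the time/@timestamp keys added by Fluent Bit
    mergeLogKey: "appLog"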

I fixed it with the following config:

    [PARSER]
        Name        docker
        Format      json
        #Time_Key   time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   off  #On
        # Command         | Decoder | Field | Optional Action
        # ================|=========|=======|================
        Decode_Field_As     escaped   log


Hi @Vfialkin, I'm currently working with Serilog too. Following your instructions it works fine, but I still cannot get the whole log into the log field, like:
(screenshot)
Sorry for my bad English, but do you know how to solve it? Thanks


Hi Minhnhat,
By default, the whole log message will also be added to the index as a log field, so it's not a problem. You can disable it via:

filter: |-
    Keep_Log  off

Take a look at my yml, maybe that will help:

I also tried to describe the full setup process in my blog.


Hi @Vfialkin, it's really helpful. Thank you!

@Vfialkin thanks for the article, it was super helpful.

One thing though:

extraEntries:
   input: |- 
     Exclude_Path   /var/log/containers/kibana*.log,/var/log/containers/kube*.log,/var/log/containers/etcd-*.log,/var/log/containers/dashboard-metrics*.log

This does not work as expected, at least in the current version of the chart: https://github.com/helm/charts/blob/b71c8c665e7de2ef22e915cd2f173d680cd7636c/stable/fluent-bit/templates/config.yaml

Those extra entries are appended to the end of the config, and if systemd is enabled, they end up appended to the systemd section.

I sent a PR to the chart, see below

@zerkms thanks for the feedback! Nice catch with the systemd section, so it works by coincidence because systemd defaults to false. It would definitely be better to have it as a param; hope your PR gets merged soon.

