Beats: Metricbeat pct fields can be float and long which causes elasticsearch to throw an exception

Created on 28 Aug 2017  ·  28 Comments  ·  Source: elastic/beats

v6.0.0-beta1:

I'm using Metricbeat to send normalized pct fields. Metricbeat sends the data to Logstash, which forwards it to Elasticsearch. All versions are v6.0.0-beta1.

I got this error on my Elasticsearch server:
[metricbeat-2017.08.28][0] failed to execute bulk item (index) BulkShardRequest [[metricbeat-2017.08.28][0]] containing [4] requests
java.lang.IllegalArgumentException: mapper [system.process.cpu.total.norm.pct] cannot be changed from type [float] to [long]

This is because Metricbeat sometimes sends the values as 2 rather than 2.0, i.e. not consistently as a float.

I was able to find a workaround by setting a default template for my Metricbeat index.
This maps all pct fields that were detected as integers back to float, as they should be.
Either this should be part of the default template, or it should be fixed at the lower level where the values are created (the latter is preferred).

{
  "template": "metricbeat-*",
  "version": 60001,
  "settings": {
    "index.refresh_interval": "30s"
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "string_fields": {
            "path_unmatch": "*.pct",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword",
              "norms": false
            }
          }
        },
        {
          "percentage_fields_long_to_float": {
            "path_match": "*.pct",
            "match_mapping_type": "long",
            "mapping": {
              "type": "float"
            }
          }
        }
      ],
      "properties": {
        "@timestamp": { "type": "date" },
        "@version": { "type": "keyword" }
      }
    }
  }
}

Integrations bug libbeat question

All 28 comments

@randude Metricbeat comes with its own template, which you should make sure to load in ES. Normally, when sending the data directly to ES, this happens automatically, but not when using Logstash as an intermediary point.

There are two ways of solving this. You can run metricbeat setup from a machine that has access to ES to set up the templates and the dashboards:

metricbeat setup -e -E output.elasticsearch.hosts=...

Or you can export the template and then use the manage_template options of the Logstash Elasticsearch output to load it:

metricbeat export template > metricbeat-template.json
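To illustrate the second approach, a Logstash output along these lines should load the exported template (a sketch; the file path, template name, and index pattern here are assumptions you would adapt to your setup):

```
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        # Index name must match the template's pattern, e.g. "metricbeat-*"
        index => "metricbeat-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        # Have Logstash install the exported Metricbeat template
        manage_template => true
        template => "/etc/logstash/metricbeat-template.json"
        template_name => "metricbeat"
        template_overwrite => true
    }
}
```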

I will close this one as a "question" for now, because we prefer questions to go to the discuss forums.

@tsg I'm not sure why you closed this. Metricbeat should NOT sometimes send float values as integers; it should always send them as floats.
My mapping is a workaround and doesn't solve the issue at hand.

Generally speaking, using Metricbeat without its template is going to result in a lot of errors, so the correct solution is to use the Metricbeat template.

That said, we do have code that should write all floats in the dotted format, so I'm reopening this to investigate that.

@tsg I see something similar here. I have a Metricbeat export (JSON dump). Indexing using the template results in some pct fields being mapped as float and others as long; see below. How should core pct values be mapped, given the number of cores is undefined? Looking at the template, this appears to be dynamic.

{
  "cpu": {
    "properties": {
      "core": {
        "properties": {
          "0": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "1": {
            "properties": {
              "pct": {
                "type": "long"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "2": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "3": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "4": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "5": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "6": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "7": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "8": {
            "properties": {
              "pct": {
                "type": "long"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "9": {
            "properties": {
              "pct": {
                "type": "long"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "10": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "11": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "12": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "13": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "14": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "15": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          }
        }
      },
      "kernel": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          },
          "ticks": {
            "type": "long"
          }
        }
      },
      "system": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          },
          "ticks": {
            "type": "long"
          }
        }
      },
      "total": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          }
        }
      },
      "user": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          },
          "ticks": {
            "type": "long"
          }
        }
      }
    }
  }
}

I think I know what causes this: 0 values for pct cause the node to try to map the field as a long, while anything else is mapped as a float. If two docs are indexed at the same time, one with a 0 and another with a float for the CPU pct value, the second attempt at a dynamic mapping can be rejected.
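The race described above can be sketched in Python. This is an illustration of how JSON number parsing drives the type choice, not the actual Elasticsearch code:

```python
import json

def inferred_numeric_type(raw: str) -> str:
    """Mimic how dynamic mapping picks a numeric type from the first
    value seen for a field: no decimal point -> long, otherwise float."""
    value = json.loads(raw)
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise ValueError("not a JSON number")
    return "long" if isinstance(value, int) else "float"

# Whichever document is indexed first pins the field's type; a
# conflicting mapping update from the other document is then rejected.
print(inferred_numeric_type("0"))    # long
print(inferred_numeric_type("0.5"))  # float
```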

Adding this to the Metricbeat mapping resolves the issue, I think:

{
    "docker.cpu.core.pct": {
      "path_match": "docker.cpu.core.*.pct",
      "mapping": {
        "type": "float"
      }
    }
},
{
    "docker.cpu.core.ticks": {
      "path_match": "docker.cpu.core.*.ticks",
      "mapping": {
        "type": "long"
      }
    }
}

The alternative would be just to ensure "0" is passed as a float.
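That alternative can be shown in Python terms (a sketch; Beats itself is written in Go, whose encoding/json drops the trailing .0 when marshalling whole-valued floats, which is the likely source of the inconsistency):

```python
import json

# A whole-number metric held as an int serializes without a decimal
# point, so the first such document makes ES infer `long` for the field:
print(json.dumps({"pct": 0}))         # {"pct": 0}

# Coercing to float before serializing keeps the decimal point and pins
# the field to a floating-point type from the very first document:
print(json.dumps({"pct": float(0)}))  # {"pct": 0.0}
```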

@gingerwizard What you report above is something we should have in our template. Could you open a separate issue for that? How to do it was kind of an open question: https://github.com/elastic/beats/blob/master/metricbeat/module/docker/cpu/_meta/fields.yml#L37 And I think you have the solution.

If you also have json events which are not part of the template, this upcoming feature should help: https://github.com/elastic/beats/pull/6024

@gingerwizard @ruflin I just did a fresh install and the problem is definitely still happening with 6.2.3:

[2018-04-01T20:00:05,442][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-6.2.3-2018.04.02", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x2aa897b5>], :response=>{"index"=>{"_index"=>"metricbeat-6.2.3-2018.04.02", "_type"=>"doc", "_id"=>"2_Wng2IBJsKPXoGhVOah", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [docker.cpu.core.23.pct] cannot be changed from type [long] to [float]"}}}}

@ctindel It seems we never opened an issue for it, so we forgot about it :-( As it's different from the issue reported here initially, we should have a separate issue for it. Could you open one?

Hi @ruflin - is this treated as an open issue? I can reproduce it very easily when I point Metricbeat at Logstash instead of directly at Elasticsearch:

  • Ubuntu 16.04 vm
  • ELK stack 6.2.4
  • Setup elasticsearch, kibana, logstash, metricbeats from zips
  • No configuration changes
  • Create simple pipeline for logstash as per wiki (below)
  • Start elasticsearch, kibana, logstash
  • Run 'metricbeat setup -e'
  • Run 'metricbeat' and see entries appear in Kibana
  • Configure metricbeat.yml to output to logstash and comment out the elasticsearch output.
  • Launch metricbeat
  • See errors in elasticsearch logs (below)

logstash-simple.conf:

input{
    beats {
        port => "5044"
    }
}

filter {
}

output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}

elasticsearch log error:

java.lang.IllegalArgumentException: mapper [system.filesystem.used.pct] cannot be changed from type [float] to [long]
    at org.elasticsearch.index.mapper.MappedFieldType.checkTypeName(MappedFieldType.java:150) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.index.mapper.MappedFieldType.checkCompatibility(MappedFieldType.java:162) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:128) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.index.mapper.FieldTypeLookup.copyAndAddAll(FieldTypeLookup.java:94) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:426) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:353) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:285) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:313) ~[elasticsearch-6.2.4.jar:6.2.4]

@beirtipol the fix was already merged for the 6.3 branch

@ctindel thanks. Any ETA on when 6.3 might be released? (I'm hunting around the elastic.co site but can't see any indications)

Closing this issue as it will be resolved in 6.3

@beirtipol We don't announce any exact release dates but you can expect it in a few weeks. If you want to try it earlier, I can share some snapshot builds from master.

I can hold off for a few weeks, thanks Nicolas. I can work around it by pointing Metricbeat at Elasticsearch directly.


We are seeing the same issue with dynamic fields from the windows/perfmon module (Metricbeat 6.3.0 and 6.4.0).

Is it possible that this default is the cause: https://github.com/elastic/beats/blob/b2416dace57a80a1550f14e4011e2284d17e2fa4/metricbeat/module/windows/perfmon/pdh_windows.go#L414

Sending 0 instead of 0.0 (in the case of a float format) seems to cause a lot of trouble.

I have the same issue with the dynamic mapping of the perfmon module.

Currently we are using setup.template.append_fields, but it's experimental (https://www.elastic.co/guide/en/beats/metricbeat/master/configuration-template.html).

I also got this issue the other day:
https://discuss.elastic.co/t/cannot-be-changed-from-type-float-to-long/147710

I look forward to the update 👍

Same here with Metricbeat (metricbeat-6.5.4-1.x86_64) and Logstash (logstash-6.5.4-1.noarch).

the error:

[2019-01-04T05:35:34,452][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.5.4-2019.01.04", :_type=>"doc", :routing=>nil}, #], :response=>{"index"=>{"_index"=>"filebeat-6.5.4-2019.01.04", "_type"=>"doc", "_id"=>"7QcAGGgBoaWI95dE0-57", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.process.memory.rss.pct] cannot be changed from type [long] to [float]"}}}}

I have a similar problem, too.
I'm using Metricbeat (Windows 2012 R2) version 6.5.4 (amd64), libbeat 6.5.4 [bd8922f1c7e93d12b07e0b3f7d349e17107f7826 built 2018-12-17 20:29:15].
Default configuration (System module). Logstash and ES 6.5.4 on Docker (Ubuntu 18.04).
Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-6.5.4-2019.01.07", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x78b7a8cb>], :response=>{"index"=>{"_index"=>"metricbeat-6.5.4-2019.01.07", "_type"=>"doc", "_id"=>"OAmSKGgBjbqzJqEZHTQF", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.process.cpu.total.pct] cannot be changed from type [long] to [float]"}}}}

I didn't have data I needed to keep, so I stopped all the Metricbeat instances on the network, then deleted the metricbeat-* Elasticsearch indices and the Kibana index patterns from Kibana.
I ran the Metricbeat and Filebeat dashboards setup again (just to be sure it's OK).
Then I restarted all the Metricbeat instances.
All the dashboards are now OK and collecting correct data.

Hello

I am facing the same issue with 6.6.2.
My config: [metricbeat] -> [logstash -> ES]. The Metricbeat hosts can't access the ES service.
My setup is quite simple and I followed the guide without trouble.
I loaded the template once with curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_template/metricbeat-6.6.2 -d@/tmp/metricbeat.template.json, but each day at midnight UTC, when the index rotates, I get this:

[2019-03-27T00:00:01,151][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [heartbeat-6.6.2-2019.03.27] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2019-03-27T00:00:01,856][INFO ][o.e.c.m.MetaDataMappingService] [e1] [heartbeat-6.6.2-2019.03.27/0tUxOUVMQUK0wNPywu_h6g] create_mapping [doc]
[2019-03-27T00:00:01,911][INFO ][o.e.c.m.MetaDataMappingService] [e1] [heartbeat-6.6.2-2019.03.27/0tUxOUVMQUK0wNPywu_h6g] update_mapping [doc]
[2019-03-27T00:00:02,813][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [.monitoring-es-6-2019.03.27] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [doc]
[2019-03-27T00:00:05,325][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [metricbeat-6.6.2-2019.03.27] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2019-03-27T00:00:05,579][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [.monitoring-kibana-6-2019.03.27] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [doc]
[2019-03-27T00:00:05,963][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] create_mapping [doc]
[2019-03-27T00:00:05,967][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] update_mapping [doc]
[2019-03-27T00:00:09,134][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] update_mapping [doc]
[2019-03-27T00:00:09,226][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] update_mapping [doc]
[2019-03-27T00:00:09,227][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [e1] failed to put mappings on indices [[[metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg]]], type [doc]
java.lang.IllegalArgumentException: mapper [system.process.cpu.total.pct] cannot be changed from type [long] to [float]

It looks like the update_mapping process does not look at the index template.
So the only workaround I found is a daily cron job at midnight, which includes deleting all my data:

curl -XDELETE 'http://localhost:9200/metricbeat-*'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_template/metricbeat-6.6.2 -d@/tmp/metricbeat.template.json

I use Filebeat and Heartbeat on my hosts, and I have no trouble with the index templates for those.

@Raphyyy Judging by the failed to put mappings on indices log line, I'm guessing this is some sort of setup or config problem. You can ask for help on https://discuss.elastic.co .

Closing this issue based on the above.

I have a fresh setup of Elastic Stack 6.7 and am encountering the exact same issue.
First, the template was imported:

# metricbeat setup --template -E 'output.elasticsearch.hosts=["https://***:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=***' -E 'setup.kibana.host="https://***:5601"'
Loaded index template

Then the beat was enrolled with the system module active and the following extra configuration:

metricsets:
  - cpu
  - load
  - memory
  - network
  - process
  - process_summary
  - uptime
  - socket_summary
  - core
  - diskio
  - filesystem
  - fsstat
enabled: true
processes:
  - '.*'

Logstash immediately throws the following errors:

[2019-03-29T14:26:26,769][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-2019.03.29", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x155eaf4b>], :response=>{"index"=>{"_index"=>"metricbeat-2019.03.29", "_type"=>"doc", "_id"=>"2b-hyWkBEZkmuxpyTNip", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.filesystem.used.pct] cannot be changed from type [float] to [long]"}}}}
[2019-03-29T14:26:27,277][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-2019.03.29", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x1212e991>], :response=>{"index"=>{"_index"=>"metricbeat-2019.03.29", "_type"=>"doc", "_id"=>"4L-hyWkBEZkmuxpyTNiq", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.diskio.iostat.request.avg_size] cannot be changed from type [float] to [long]"}}}}

As this is a fresh installation with no special configuration, I'm not sure this is indeed a configuration error.
I also tried stopping Logstash, deleting all metricbeat-* indices and the template, importing the template again, and restarting Logstash.

Can we please take this to Discuss? Happy to open a fresh issue if it turns out it's an actual bug. For the LS config, make sure it looks like the example here: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html including the version in the index name ...

Yeah, I can't seem to reproduce this on two "clean" 6.7 installs, in cloud and Docker.
