Hi everyone. Judging by the closed issue 8773 (Filebeat and Kubernetes pods with labels, helm deployments, etc.), it seems this was never actually fixed. I'm seeing the same behavior even with the latest Metricbeat (7.1.1) and this config:
processors:
  - add_kubernetes_metadata:
      labels.dedot: true
      annotations.dedot: true
When Metricbeat tries to push an event like the one below to Elasticsearch:
{
"@timestamp": "2019-06-21T14:30:06.333Z",
"@metadata": {
"beat": "metricbeat",
"type": "_doc",
"version": "7.1.1"
},
"ecs": {
"version": "1.0.0"
},
"cloud": {
"availability_zone": "us-east1-b",
"instance": {
"id": "123123213213",
"name": "gke-staging-preemptible-pool-xxxxxxx"
},
"machine": {
"type": "n1-standard-8"
},
"project": {
"id": "kubernetes-staging-220222"
},
"provider": "gcp"
},
"kubernetes": {
"deployment": {
"name": "shared-queue-preemptive",
"replicas": {
"unavailable": 0,
"updated": 1,
"desired": 1,
"available": 1
},
"paused": false
},
"namespace": "shared-queue",
**"labels": {
"app": {
"kubernetes": {
"io/instance": "shared-queue",
"io/managed-by": "Tiller",
"io/name": "shared-queue",
"io/version": "1.3.25"
}
},
"helm": {
"sh/chart": "microservice-0.1.2"
}
}
},
"metricset": {
"name": "state_deployment"
},
"service": {
"address": "kube-state-metrics.kube-system.svc.cluster.local:8080",
"type": "kubernetes"
},
"event": {
"module": "kubernetes",
"duration": 384732698,
"dataset": "kubernetes.deployment"
},
"host": {
"name": "gke-staging-preemptible-pool-xxxxxx"
},
"agent": {
"hostname": "gke-staging-preemptible-pool-xxxxxxx",
"id": "d6e948a1-419e-440c-bdd3-c110209e5942",
"version": "7.1.1",
"type": "metricbeat",
"ephemeral_id": "8810147c-0e44-4583-a663-b91c788309c4"
}
}
Obviously this fails because of these labels, where the dots in keys like app.kubernetes.io/instance have been split into nested objects instead of being dedotted:
"labels": {
  "app": {
    "kubernetes": {
      "io/instance": "shared-queue",
      "io/managed-by": "Tiller",
      "io/name": "shared-queue",
      "io/version": "1.3.25"
    }
  },
  "helm": {
    "sh/chart": "microservice-0.1.2"
  }
}
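For comparison, with labels.dedot actually applied the same keys should come out flattened, with underscores replacing the dots (an illustrative sketch of the expected output, not a capture):
"labels": {
  "app_kubernetes_io/instance": "shared-queue",
  "app_kubernetes_io/managed-by": "Tiller",
  "app_kubernetes_io/name": "shared-queue",
  "app_kubernetes_io/version": "1.3.25",
  "helm_sh/chart": "microservice-0.1.2"
}
Keys in this flat form map cleanly under the kubernetes.labels.* keyword dynamic template.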
Thanks @nerddelphi, can you also paste the error message you saw here please?
@kaiyan-sheng These errors:
{"type":"mapper_parsing_exception","reason":"failed to parse field [kubernetes.labels.app] of type [keyword] in document with id '6OPyamsBGJCb4HgtJtQp'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:264"}}
and
{"type":"mapper_parsing_exception","reason":"object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}
@nerddelphi Thank you! I suspect it's because of how we define kubernetes.labels here: https://github.com/elastic/beats/blob/7.1/libbeat/processors/add_kubernetes_metadata/_meta/fields.yml#L31. I'll investigate and update here with whatever I find.
@kaiyan-sheng OK. I appreciate it, and I hope we can fix that :)
Got the same issue. Is there any fix/workaround available yet? The Filebeat fixes (drop_fields, rename) don't work for Metricbeat.
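For reference, the Filebeat-style workaround being referred to is a processor along these lines; this is only a sketch, and the exact field to drop depends on which label conflicts in your cluster:
processors:
  # Drop the conflicting nested label before the event reaches Elasticsearch.
  # "kubernetes.labels.app" is illustrative; adjust to the field in your error.
  - drop_fields:
      fields: ["kubernetes.labels.app"]
Dropping the field sidesteps the mapping conflict at the cost of losing that label entirely.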
I'm still getting this error. Metricbeat 7.6.0 monitoring GKE. Any clues here?
{"type":"mapper_parsing_exception","reason":"failed to parse field [kubernetes.pod.labels.app] of type [keyword] in document with id 'MECzcnABNHSy86_GVDf1'. Preview of field's value: '{kubernetes={io/instance=vault, io/name=vault}}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:338"}}
Having the same issue on AWS EKS:
{"type":"mapper_parsing_exception","reason":"failed to parse field [kubernetes.labels.statefulset] of type [keyword] in document with id 'HfFwzXABACATvuNpI5wp'. Preview of field's value: '{kubernetes={io/pod-name=elastic-operator-0}}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:795"}}
Please re-open this issue.
Here is an example. The Beats v7 template defines:
{
"kubernetes.labels.*" : {
"path_match" : "kubernetes.labels.*",
"mapping" : {
"type" : "keyword"
},
"match_mapping_type" : "*"
}
},
Some k8s containers have these kinds of labels:
{
"labels": {
"controller-revision-hash": "87dd747db",
"openshift": {
"io/component": "network"
},
"type": "infra",
"component": "network",
"pod-template-generation": "6",
"app": "multus"
}
}
Inserting this triggers a mapping error:
org.elasticsearch.index.mapper.MapperParsingException: failed to parse field [kubernetes.labels.openshift] of type [keyword] in document with id 'I4ET43ABhUdCGXLHKhvu'. Preview of field's value: '{io/component=network}'
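The conflict is easy to reproduce against the dynamic template directly. A minimal sketch in Kibana Dev Tools syntax, with an illustrative index name:
PUT labels-repro
{
  "mappings": {
    "dynamic_templates": [
      {
        "kubernetes.labels.*": {
          "path_match": "kubernetes.labels.*",
          "match_mapping_type": "*",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}

PUT labels-repro/_doc/1
{ "kubernetes": { "labels": { "openshift": { "io/component": "network" } } } }
The second request fails with the same mapper_parsing_exception, because the template forces a keyword mapping onto a field that arrives as an object ("Can't get text on a START_OBJECT").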
The issues on 7.6 should be fixed by https://github.com/elastic/beats/pull/16834 and https://github.com/elastic/beats/pull/16857, which will be released in 7.6.2.
Sorry, https://github.com/elastic/beats/pull/16857 has only been backported to 7.7.0 for now.
My issue was not related to this bug. I added the setting "labels.dedot: true" to solve it. Sorry for the noise.
In the end, the fix from https://github.com/elastic/beats/pull/16857 will also be released in 7.6.2 (https://github.com/elastic/beats/pull/17020).