The heuristics used to reconstruct the message from the documents created by the official filebeat modules should support all kinds of log events.
Known issues with pre-ECS formats are covered by the following issues:
Compatibility with various modules in ECS format has been improved in #31120.
Pinging @elastic/infrastructure-ui
Same thing for nginx error log (access log is fine).
Here is my example. I used the "Add log data" flow and followed the instructions for MySQL in Kibana, adding the Filebeat MySQL module.
MySQL ver. 5.7.24-0ubuntu0.18.04.1
Sample Log Lines
2018-12-07T02:19:36.564599Z 29 [Note] Access denied for user 'petclinicdd'@'47.153.152.234' (using password: YES)
2018-12-07T02:19:38.607311Z 30 [Note] Access denied for user 'petclinicdd'@'47.153.152.234' (using password: YES)
What the lines look like in Discover:
What they look like in the log viewer:
2018-12-06 18:19:36.564 failed to format message from /var/log/mysql/error.log
2018-12-06 18:19:38.607 failed to format message from /var/log/mysql/error.log
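For illustration, a pre-ECS Filebeat MySQL error document looks roughly like this (the exact field names are recalled from the 6.x module and may differ slightly). The parsed text ends up under mysql.error.message rather than message, which is why the log viewer has nothing to format:

```json
{
  "@timestamp": "2018-12-07T02:19:36.564Z",
  "source": "/var/log/mysql/error.log",
  "mysql": {
    "error": {
      "timestamp": "2018-12-07T02:19:36.564599Z",
      "thread_id": 29,
      "level": "Note",
      "message": "Access denied for user 'petclinicdd'@'47.153.152.234' (using password: YES)"
    }
  }
}
```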
Same for Logstash module:
failed to format message from /var/log/logstash/logstash-plain.log
This is impacting Filebeat as well. I'm getting this when I ship over logs from IIS. It's producing a LOT of logs, which also impacts space within the deployment...
Seeing the same here with FileBeat, mysql and nginx error logs are all 'failed to format' but no problems with nginx access logs. Are there any temporary fixes for this?
+1 urgency
@weltenwort @welderpb
Having same issue with Filebeat for Logstash module.
Not sure if this is the issue or not. I found the filebeat modules will use the field log instead of message. In Logstash I added a mutate to rename the log field to message, and then they started to show in Kibana Logs.
Did the trick! Thanks @jasonsattler
I'm adding rules for MySQL slow and error logs via https://github.com/elastic/kibana/pull/28219
@jasonsattler Could you show your solution? I'm facing the same problem with Filebeat not processing logs from the kibana pod.
You should be able to rename the field either in filebeat or logstash.
In filebeat just add the following to your prospectors:
processors:
  - rename:
      fields:
        - from: "log"
        - to: "message"
Or in logstash use mutate in your filters
filter {
  mutate {
    rename => { "log" => "message" }
  }
}
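For context, a minimal sketch of a complete Logstash pipeline showing where that filter sits; the beats input port and elasticsearch hosts here are placeholder assumptions, not from the thread:

```
input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    rename => { "log" => "message" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```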
Tried with filebeat, didn't work. Is it added to filebeat.yml or kubernetes.yml? My configs:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    processors:
      - add_cloud_metadata:
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'
      - rename:
          fields:
            - from: "log"
              to: "message"
    output.elasticsearch:
      hosts: ['http://elasticsearch.whitenfv.svc.cluster.local:9200']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      json.keys_under_root: false
      json.add_error_key: false
      json.ignore_decoding_error: true
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: {{ filebeat_image_full }}
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: inputs
              mountPath: /usr/share/filebeat/inputs.d
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: inputs
          configMap:
            defaultMode: 0600
            name: filebeat-inputs
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
@paltaa you are missing a - before the to:
processors:
  - add_kubernetes_metadata:
      in_cluster: true
  - rename:
      fields:
        - from: "log"
        - to: "message"
@jasonsattler Did it, and I'm still getting these errors:
2019-01-14 15:19:32.726
failed to format message from /var/lib/docker/containers/f6883893ebb064518104318835d88e6c6fb9077f5a9369922066e9b004d9ee0f/f6883893ebb064518104318835d88e6c6fb9077f5a9369922066e9b004d9ee0f-json.log
I have a logs-prod index which looks like this:
{
  "raw": "No such bean definition found to exist",
  "source": "console",
  "timestamp": "2018-09-24T04:42:51.478Z"
  ...
}
kibana.yml:
xpack.infra.sources.default.logAlias: "logs-*"
xpack.infra.sources.default.fields.timestamp: "timestamp"
xpack.infra.sources.default.fields.message: ['raw']
Result in Logs UI:
failed to format message from console
There is a problem with the message setting not working correctly, sorry :see_no_evil:. The only workaround right now is to move or copy the log text into the message field during ingestion or reindexing. We're working on fixing and improving the configurability.
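As a sketch of that workaround (the pipeline name and the source field raw are assumptions based on the logs-prod example above), an Elasticsearch ingest pipeline can copy the text into message at index time:

```
PUT _ingest/pipeline/copy-raw-to-message
{
  "description": "Copy the raw log text into the message field",
  "processors": [
    { "set": { "field": "message", "value": "{{raw}}" } }
  ]
}
```

Documents indexed with ?pipeline=copy-raw-to-message (or with the pipeline configured in the Filebeat/Logstash output) then carry a message field the log viewer can format.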
Filebeat for Elasticsearch. The problem appears when I search the IIS logs in Kibana.
Known issues with pre-ECS formats are covered by the following issues:
Many problems have been addressed via #30398 and #31120. Please feel free to open separate issues for other problems with particular modules.