When the include_fields processor is used in Filebeat and the beat field is not explicitly included in the list of fields, the Elasticsearch output fails to index any documents, reporting what looks like a temporary bulk indexing error. With debug logging enabled, the error turns out to be the following:
Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
After some investigation, I realized that the problem was caused by the default index name template, which includes the %{[beat.version]} part; this cannot be resolved when the include_fields processor removes the beat field.
I think we should at least update the documentation for both include_fields and drop_fields to warn people about the consequences of dropping the beat field. Ideally, logging and error handling should be improved as well, so the user gets some feedback about what is happening.
Filebeat Version: 6.2.2
Elasticsearch Version: 6.3.1
OS: CentOS 6
Steps to reproduce the issue:
1. Configure Filebeat with an include_fields processor whose fields list does not contain beat.
2. Use the Elasticsearch output with the default index name template, which contains %{[beat.version]}.
3. Ship any event; the bulk insert fails with the error above.
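A minimal configuration sketch matching the report, written for Filebeat 6.2.x; the input path and the fields list are illustrative:

```yaml
filebeat.prospectors:           # "filebeat.inputs" on 6.3 and later
  - type: log
    paths:
      - /var/log/example.log    # hypothetical path

processors:
  - include_fields:
      fields: ["message"]       # "beat" is not listed, so it is dropped

output.elasticsearch:
  hosts: ["localhost:9200"]
  # No explicit index is set, so the default applies, equivalent to:
  #   index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
  # The report's hypothesis is that %{[beat.version]} cannot be
  # resolved once the "beat" field has been removed from the event.
```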
One more case where this kind of issue has happened: https://discuss.elastic.co/t/output-with-index-got-error/142873
I couldn't reproduce this problem in FB 6.2.2 or master. In both cases, the ES index name is resolved based on the value in the beat.Info structure, not on fields present in a particular event.
Hi, I ran into this issue starting with version 7.0.0.
Solved it by replacing %{[beat.version]} with %{[agent.version]} in the Filebeat configuration file.
Now it is working fine!
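For reference, a sketch of that change, assuming the index name is set explicitly in filebeat.yml (in 7.0 the beat.* event fields were renamed to agent.*):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  # Up to 6.x the template referenced the "beat" field:
  #   index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
  # From 7.0 on, the field is "agent", so the reference becomes:
  index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
```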
The error is caused by the index name ending up empty because the substitution failed. Closing, because it is a duplicate of #9871