Beats: The error message "output.go:92: ERR Failed to publish events: temporary bulk send failure" is outputted endlessly

Created on 25 Jan 2018 · 21 Comments · Source: elastic/beats

Filebeat keeps outputting this error log endlessly; please see the info below. I am fairly sure I have already stopped generating the sample log entries, but the error messages keep appearing.
The Filebeat and Elasticsearch versions are both 6.1.1, and the OS is Oracle Linux 7.3.

2018/01/25 09:48:33.292858 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:33.293722 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:33.294410 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:34.295636 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:34.296659 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:34.297242 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:35.298297 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:35.298943 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:35.299706 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:36.300555 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:36.301267 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:36.302073 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:37.302949 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:37.303916 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:37.304602 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:38.305768 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:38.306746 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:38.307354 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:39.308613 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:39.309581 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:39.310300 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:40.311312 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:40.312356 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:40.312993 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:41.314178 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:41.315023 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:41.315638 load.go:73: INFO Template already exists and will not be overwritten.
2018/01/25 09:48:42.316675 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/01/25 09:48:42.317676 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/01/25 09:48:42.318206 load.go:73: INFO Template already exists and will not be overwritten.

Labels: discuss, libbeat, question

Most helpful comment

Hi, I faced this issue starting from version 7.0.0.
I solved it by replacing %{[beat.version]} with %{[agent.version]} in the Filebeat configuration file.
Now it is working fine!

All 21 comments

+1

Can one of you share the exact steps to reproduce this? I'm kind of surprised to see the constant "connected to ES" message. Any chance you could provide the debug log output?

@ruflin Thanks for the response. But I already replaced Filebeat with Fluentd more than 10 days ago, and I don't remember the steps any more :(

@ruflin Thanks for the response. I get this error too:
2018-03-06T10:34:09.716+0800 ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure
2018-03-06T10:34:09.719+0800 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.0.0
2018-03-06T10:34:09.722+0800 INFO template/load.go:73 Template already exists and will not be overwritten.

Here are my steps: I installed Filebeat via yum and use a pipeline in filebeat.yml.
My log file: 2018-02-09T15:39:30.895+0800 INFO OGG-02756 Oracle GoldenGate Delivery for MySQL, rep1.prm: The definition for table -- is obtained from the trail file.
My pipeline: grok - "patterns" :["%{TIMESTAMP_ISO8601:operDate} %{LOGLEVEL:level} %{NOTSPACE:ogg} %{GREEDYDATA:info}"]

I also get the error below:
{"type":"mapper_parsing_exception","reason":"Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}

@my1234567 Could you try running Filebeat with debug logging enabled and share the log? Please share the config you used as well, and if you are using a custom pipeline, include that.

filebeat -e -d "*"

I ran into the exact same messages. I guess it's a good idea to take a look at the Elasticsearch logs; in my case, a bunch of IllegalArgumentExceptions for the Grok processor were in the log.

I also got the same issue

2018-04-02T08:37:17.718Z    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s    {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":210,"time":216},"total":{"ticks":2210,"time":2224,"value":2210},"user":{"ticks":2000,"time":2008}},"info":{"ephemeral_id":"1326d1c9-bada-489b-8847-3a03f54059a5","uptime":{"ms":210087}},"memstats":{"gc_next":75523504,"memory_alloc":51789992,"memory_total":223178424,"rss":14344192}},"filebeat":{"events":{"added":2048,"done":2048},"harvester":{"open_files":5,"running":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":198,"active":1850,"batches":1,"total":2048},"read":{"bytes":39784},"write":{"bytes":4505554}},"pipeline":{"clients":1,"events":{"active":4117,"published":2048,"retry":2048,"total":2048},"queue":{"acked":2048}}},"registrar":{"states":{"current":20,"update":2048},"writes":1},"system":{"load":{"1":0.02,"15":0.15,"5":0.12,"norm":{"1":0.01,"15":0.075,"5":0.06}}}}}}
2018-04-02T08:37:19.694Z    ERROR   pipeline/output.go:92   Failed to publish events: temporary bulk send failure
2018-04-02T08:37:19.696Z    INFO    elasticsearch/client.go:690 Connected to Elasticsearch version 5.1.1
2018-04-02T08:37:19.701Z    INFO    template/load.go:73 Template already exists and will not be overwritten.

Steps to reproduce:
I just changed the kube_config file and restarted the Filebeat service.

I'm facing the same problem. I'm using Filebeat 6.2.4 on Docker.
These messages keep showing up:

2018-04-23T08:24:37.891Z    ERROR   pipeline/output.go:92   Failed to publish events: temporary bulk send failure
2018-04-23T08:24:37.893Z    INFO    elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-04-23T08:24:37.894Z    INFO    template/load.go:73 Template already exists and will not be overwritten.

On the ES (also 6.2.4) side, it logs

[2018-04-23T07:17:57,018][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]

almost every second.

I think this issue has become a list of different issues / problems with similar outcomes.

@Hunsin In your case it seems a 5.x template is used for 6.x data. Not sure where this template is coming from, but it's best to have a look at the output of _template and check where the template key is still used.
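One way to narrow that down is to filter the _template output to only templates that still define the deprecated top-level template key. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 (host and port are placeholders, not from the original report):

# Should list only index templates that still use the 5.x-style "template"
# key instead of "index_patterns"; templates without it are pruned from the output.
$ curl -s 'http://localhost:9200/_template?filter_path=*.template&pretty'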

If it helps, I have the same issue when I use the Nginx module. I've been struggling with it for two days now. I separated my Filebeat DaemonSets so it doesn't impact the logs for other applications, but on the nodes where I have nginx running, I keep receiving these logs:

2018/04/24 19:20:14.471965 harvester.go:215: INFO Harvester started for file: /var/lib/docker/containers/ec6ae5241f166c3b1bf192db91d98d784c84fbb1f1593948e35a78cfb030b177/ec6ae5241f166c3b1bf192db91d98d784c84fbb1f1593948e35a78cfb030b177-json.log
2018/04/24 19:20:14.708222 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/04/24 19:20:14.709958 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/04/24 19:20:14.713811 load.go:73: INFO Template already exists and will not be overwritten.
2018/04/24 19:20:15.728418 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/04/24 19:20:15.730393 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/04/24 19:20:15.734783 load.go:73: INFO Template already exists and will not be overwritten.
2018/04/24 19:20:16.752144 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/04/24 19:20:16.754491 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/04/24 19:20:16.758855 load.go:73: INFO Template already exists and will not be overwritten.
2018/04/24 19:20:17.773793 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/04/24 19:20:17.776301 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/04/24 19:20:17.780962 load.go:73: INFO Template already exists and will not be overwritten.
2018/04/24 19:20:18.804705 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/04/24 19:20:18.808883 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/04/24 19:20:18.813167 load.go:73: INFO Template already exists and will not be overwritten.
2018/04/24 19:20:19.844690 output.go:92: ERR Failed to publish events: temporary bulk send failure
2018/04/24 19:20:19.846615 client.go:651: INFO Connected to Elasticsearch version 6.1.1
2018/04/24 19:20:19.850731 load.go:73: INFO Template already exists and will not be overwritten.
2018/04/24 19:20:20.867247 output.go:92: ERR Failed to publish events: temporary bulk send failure

@ruflin which field should I check in the GET _template API output? There are 5k lines.

{
  ".watch-history-7": {
    // ...
    "settings": {
      "index": {
        // ...
        "xpack": {
          "watcher": {
            "template": {
              "version": "7"
            }
          }
        }
      }
    },
    "mappings": {
      "doc": {
        "dynamic_templates": [ /* ... */ ]
      },
      // ...
    },
  },
  // ...
  "filebeat-6.2.4": {
    // ...
    "mappings": {
      "doc": {
        "dynamic_templates": [ /* ... */ ]
      }
    },
  },
  // ...
}

Or did I miss something in the config file?

filebeat.prospectors:
- type:     log
  enabled:  true
  paths:    /var/log/*.log
  pipeline: test-log

output.elasticsearch:
  hosts:    ["elastic:9200"]
  username: username
  password: password

setup.dashboards:
  enabled: true

setup.kibana:
  host: "kibana:5601"

@Hunsin Check whether the filebeat-6.2.4 template has either a template or an index_patterns key inside.

@nelsonfassis For your case we would also need the config and ideally the Elasticsearch logs. Can you please open a new topic on discuss with them: https://discuss.elastic.co/c/beats/filebeat

As there are too many people reporting issues in this thread without a clear way to reproduce them, or with missing data, identifying the issue becomes harder and harder. Can I please ask anyone who has the same problem to create a new topic on discuss (https://discuss.elastic.co/c/beats/) and provide the following data there:

  • Beat logs
  • Beat version
  • Beat config
  • Elasticsearch logs
  • Details on how Beats has been upgraded in the past.

If someone has clear steps on how to reproduce the issue, please post it directly here.

From GET _template/filebeat-6.2.4:

{
  "filebeat-6.2.4": {
    "order": 1,
    "index_patterns": [ "filebeat-6.2.4-*" ], // <--- here
    // ...
    "mappings": {
      "doc": {
        "_meta": { "version": "6.2.4" },
        "date_detection": false,
        "dynamic_templates": [ // <--- and here, no more
          {
            "fields": {
              "path_match": "fields.*",
              "mapping": { "type": "keyword" },
              "match_mapping_type": "string"
            }
          },
          {
            "docker.container.labels": {
              "mapping": { "type": "keyword" },
              "match_mapping_type": "string",
              "path_match": "docker.container.labels.*"
            }
          },
          {
            "strings_as_keyword": {
              "mapping": {
                "ignore_above": 1024,
                "type": "keyword"
              },
              "match_mapping_type": "string"
            }
          }
        ],
      // ...
      }
    }
  }
}

@Hunsin Ok, that looks good from the Filebeat perspective. So there must be some other template you have that doesn't use index_patterns but still uses template, which causes the log entry.

For the steps to reproduce the problem, I'll post them here.

Based on the steps above, I set up a pipeline with PUT _ingest/pipeline/my-log:

{
  "description": "my logging pattern",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [ "%{TIMESTAMP_ISO8601:at} %{MSG:body}" ], // <--- changed later
        "pattern_definitions": {
          "MSG": ".+"
        }
      },
      "date": {
        "field": "at",
        "formats": [ "ISO8601" ], // <--- changed later
        "timezone": "UTC"
      }
    }
  ]
}
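For completeness, the same request can be sent with curl; the host and the file name holding the body above are assumptions, not part of the original report:

$ curl -s -X PUT 'http://localhost:9200/_ingest/pipeline/my-log' \
  -H 'Content-Type: application/json' \
  -d @my-log-pipeline.json   # my-log-pipeline.json contains the JSON body shown above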

Then, create a log file:

$ echo "$(date +%Y-%m-%dT%H:%M:%S) hello world" >> path/to/my.log

Everything goes well.

However, if I modify grok.patterns to [ "%{DATESTAMP_OTHER:at} %{MSG:body}" ] and date.formats to something unsupported like [ "DATESTAMP_OTHER" ], and then append a new log entry:

$ echo "$(date) hello world" >> path/to/my.log

The error messages show up every second:

2018-04-30T18:04:40.447Z        ERROR   pipeline/output.go:92   Failed to publish events: temporary bulk send failure
2018-04-30T18:04:40.452Z        INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-04-30T18:04:40.455Z        INFO    template/load.go:73     Template already exists and will not be overwritten.

For anyone else who meets the same problem, what I found is: if

  • the log file's pattern doesn't match what you PUT to _ingest/pipeline/my-log, or
  • the configuration sent to the API is wrong,

then the error messages keep showing up. Trying the simulate pipeline API may help solve the problem (see the example below).
Not sure if there are other reasons that lead to the same error.
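A minimal sketch of such a simulate call, assuming Elasticsearch on localhost:9200 and the my-log pipeline from above; the sample message is a placeholder:

$ curl -s -X POST 'http://localhost:9200/_ingest/pipeline/my-log/_simulate?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
    "docs": [
      { "_source": { "message": "2018-04-30T18:04:40 hello world" } }
    ]
  }'
# A grok or date failure shows up directly in the response,
# instead of only as "temporary bulk send failure" on the Filebeat side.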

My suggestion: a more detailed log might help us debug this more easily.

@Hunsin If I get it right, the above is a configuration problem and is not directly related to the template issue? This is more a feature request for better logging in case the pipeline is invalid? I assume that if you look at the Elasticsearch logs, you get more details there.

Exactly, it seems unrelated to the template issue.
However, no log was generated by Elasticsearch while Filebeat was crying.

@Hunsin Could we take this to a separate Discuss topic, so as not to make this thread longer and more confusing?

Hi, I faced this issue starting from version 7.0.0.
I solved it by replacing %{[beat.version]} with %{[agent.version]} in the Filebeat configuration file.
Now it is working fine!

Worked for me too.
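For reference, the %{[beat.version]} to %{[agent.version]} substitution typically appears in custom index and template settings in filebeat.yml. A minimal sketch under that assumption; the host and date pattern below are placeholders, not taken from the posters' configs:

output.elasticsearch:
  hosts: ["elastic:9200"]
  # In 7.0+ the beat.version field was renamed to agent.version (ECS),
  # so an old %{[beat.version]} reference no longer resolves.
  index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

setup.template.name: "filebeat-%{[agent.version]}"
setup.template.pattern: "filebeat-%{[agent.version]}-*"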
