I'm using the logstash docker image, version 5.2.2, pulled from docker.elastic.co about an hour ago.
The default config, as far as I can tell, does not include any mention of using Elasticsearch:
$ docker run --rm docker.elastic.co/logstash/logstash:5.2.2 cat /usr/share/logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Yet, when I run logstash with the default config, I get complaints from the Elasticsearch output plugin that it can't perform health checks against a non-existent ES node at http://elasticsearch:9200:
$ docker run --rm docker.elastic.co/logstash/logstash:5.2.2
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2017-03-26T20:52:19,927][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2017-03-26T20:52:19,944][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"2c26b491-338b-436c-9dd2-4cfd22de843f", :path=>"/usr/share/logstash/data/uuid"}
[2017-03-26T20:52:20,492][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
[2017-03-26T20:52:20,493][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2017-03-26T20:52:20,733][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x383e0b13 URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2017-03-26T20:52:20,734][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x2d2d84f6 URL:http://elasticsearch:9200>]}
[2017-03-26T20:52:20,736][INFO ][logstash.pipeline ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[2017-03-26T20:52:20,738][INFO ][logstash.pipeline ] Pipeline .monitoring-logstash started
[2017-03-26T20:52:20,750][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2000}
[2017-03-26T20:52:21,158][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-03-26T20:52:21,192][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-26T20:52:21,284][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-03-26T20:52:25,737][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-03-26T20:52:25,751][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x68ed49c6 URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
Those log messages, which repeat every few seconds, make it harder to see real problems, can confuse people whose modified logstash.conf isn't getting picked up (which is what caught me), and will probably fill up disks.
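For anyone who hit the "modified logstash.conf isn't getting picked up" variant of this: the image reads its pipeline config from /usr/share/logstash/pipeline/logstash.conf (the path `cat`ed above), so a sketch of one way to override it (assuming Docker is installed; the config here just mirrors the stock one) is a bind mount:

```shell
# Write the pipeline config you want the container to use.
cat > logstash.conf <<'EOF'
input { beats { port => 5044 } }
output { stdout { codec => rubydebug } }
EOF

# Bind-mount it over the image's default config path. Logstash runs in the
# foreground, so this blocks; it is skipped here if docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker run --rm \
    -v "$PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf" \
    docker.elastic.co/logstash/logstash:5.2.2
fi
```

If the file isn't mounted at exactly that path, the image silently falls back to its baked-in default, which is easy to mistake for the config being ignored.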
@mpalmer it looks like you have x-pack installed and sending metrics to Elasticsearch.
Ideally we would have a better log message clarifying that fact. You can tell by the URL, but maybe it'd be nice to prefix all xpack log messages? WDYT @ph?
@andrewvc I will create a specific issue for that in the
@andrewvc Actually, we should add "pipeline" context to the log message, at least the id of the pipeline. @jsvd, I think you did some work or testing on that?
@andrewvc I'm not sure what x-pack is, and I don't recall ever installing it. Googling leads me to this page, which doesn't ring any bells, either. If I've got it installed, I presume it was done by default in the Docker image, which is the stock one provided by Elastic. Should I open a bug on that repo asking to not install x-pack by default?
This problem is being discussed in the logstash docker image repo, specifically in https://github.com/elastic/logstash-docker/issues/15
Thanks for the pointer to that issue, @jsvd; I missed it in my initial sweep before I reported this because (at the time) I didn't know what x-pack was.
Given that my issue, specifically, is x-pack (and docker image) specific, I'm fine with closing this bug off if nobody else wants it to track the other stuff that's been mentioned up-thread.
Thanks all. A mechanism for dealing with this just shipped in version 5.3.0 of the Docker image. You can now set an environment variable to disable X-Pack Monitoring. Like:
docker run -e xpack.monitoring.enabled=false docker.elastic.co/logstash/logstash:5.3.0
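If you'd rather not pass the flag on every `docker run`, the same setting can also live in Logstash's settings file, logstash.yml (the `-e` form works because the image maps environment variables onto logstash.yml settings), e.g. by bind-mounting or baking in a copy containing:

```
xpack.monitoring.enabled: false
```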
Closing this based on https://github.com/elastic/logstash/issues/6842#issuecomment-289924657