Edit: using version 1.5.4
Is there a way to ping Logstash for its health? I am using Logstash behind an Amazon ELB which needs to pass a health check. Currently it is set up to attempt to open a TCP connection on port 5000 (which is where Logstash is running), but it consistently fails.
I am looking for a way to return an HTTP 200 status if possible, or a plugin/command-line option that enables this.
Is this possible?
For now the best you can do is to use https://github.com/logstash-plugins/logstash-input-heartbeat, but I agree adding something like this would be nice!
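For reference, a minimal heartbeat configuration might look like the following (the interval and type values are illustrative; adjust them to your pipeline):

```
input {
  heartbeat {
    interval => 10
    type => "heartbeat"
  }
}
```

Note that this emits heartbeat events into the pipeline rather than exposing an HTTP endpoint, so on its own it does not give an ELB anything to probe.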
I guess this would be added in a future release close to 2.0, where the API stuff gets in.
I have that, but how can it be mapped so that the ELB can access it? Currently it just sends the OK message to Elasticsearch.
The ELB has to have a target defined (TCP or HTTP) which it can connect to and get a success response from (e.g. a 200 status code), but I am not sure how to implement this.
Setting the ELB to look somewhere else for the health check such as Kibana or Elasticsearch would be redundant as it gives no indication to the health of Logstash directly.
The TCP health checker looks for an open connection on that port, which I can connect to, but for some reason the ELB cannot.
+1 any idea on this one?
@riguy724 This feature will be added in a future version of Logstash. It will arrive when Logstash has a pollable, REST API.
See our roadmap for more details.
As with Filebeat, Logstash also needs a simple method for monitoring: out of the box, with no extra plugins and no heavy manual parsing.
I don't think this is a question; it could be called a feature request.
This is already possible with LS 5.0.0-alpha1: there is a monitoring API you can GET at localhost:9600.
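For anyone wiring this up, a health check against that API boils down to an HTTP GET that must come back with status 200. A minimal sketch in Python (the URL and port are whatever your Logstash instance exposes; 9600 is the default mentioned in this thread):

```python
# Sketch of the kind of check an ELB or monitoring script performs.
# The URL below is hypothetical; point it at your Logstash monitoring API.
from urllib.request import urlopen
from urllib.error import URLError

def is_healthy(url, timeout=2):
    """Return True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

# Example: is_healthy("http://localhost:9600/")  # monitoring API, LS 5.x+
```

A closed or unreachable port simply yields `False`, which is exactly the signal a health checker needs.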
Logstash 5.0.0 is not stable yet, and some plugins are not supported yet.
In my case, the http input plugin was very helpful: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html
input {
  http {
    port => 8000
    type => "elb-healthcheck"
  }
}
This makes Logstash accept requests at http://host:8000 and return "ok" with status 200, which is perfect for use with an ELB. I also added a filter to drop all events of type "elb-healthcheck".
filter {
  if [type] == "elb-healthcheck" {
    drop { }
  }
}
You can then use / on port 8000 as the health check URL for the AWS ELB.
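To see this contract end to end without a running Logstash, here is a small self-contained simulation: a throwaway HTTP server stands in for the http input's "ok" response, and a client plays the role of the ELB health checker (the port is ephemeral and the handler is purely illustrative):

```python
# Simulate the health-check round trip: server answers "ok"/200 like the
# Logstash http input; client performs the GET an ELB would perform.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), OkHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urlopen(url, timeout=2) as resp:
    status, body = resp.status, resp.read()
server.shutdown()
print(status, body)  # 200 b'ok'
```

Any client that treats a 200 response as healthy (an ELB target group, a load-balancer probe, a cron script) will accept this endpoint.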
@w4-sglim Why isn't 5.0.0 stable? Our alphas and betas are fully tested. Let us know if there is a specific issue you encountered with 5.0.0.
@suyograo If software has an "alpha" tag on it, I wouldn't call it stable and I wouldn't use it in a production environment. Also, to my understanding, things can still change in alpha software, so I would wait for a final release before using LS 5.0.0-alphaX.
@suyograo I thought the alpha tag means less stable than beta. I'm a really big fan of Logstash and willing to use the newest version, but I hesitate to use 5.0.0-alpha because the current stable version is 2.4 until the latest tag moves. Some plugins need logstash-core, which is at major version 2. I am looking forward to a 5.0.0 stable version with my favorite plugins fully supporting it.
The solution I suggested was a workaround, and it may be useful to someone who has a version dependency on 2.4.
@c33s Agreed. Of course, we do not recommend using an alpha or beta release in production! But what @suyograo meant is that each of our alpha releases is functional in our tests. None have been unusable.
@w4-sglim alpha doesn't necessarily mean _unstable_. It means "pre-release software, subject to further changes before it's actually released." This can mean that there are aspects which are unpolished, or that it will crash under unforeseen circumstances. We appreciate everyone who tests it and reports back when these things happen.
@untergeek It might be that your software is rock solid even though it's alpha, but this ticket is about monitoring Logstash.
Who normally wants monitoring? Companies and people who run software in a production environment.
Normally, I wouldn't dare to jump from version 2.4 to 5.0 alpha. The difference sounds huge.
What happened to Logstash 3.0 and 4.0? I am more willing to try a smaller step.
There aren't any. In order for all parts of the Elastic Stack to be on a unified version number, all of them are jumping to 5.0 (because Kibana was already on 4.x). So 5.0 _is_ what comes after 2.x. It is a major version release, but you shouldn't have to change your configuration file too much. Most plugins should maintain the same configuration arguments, with a few exceptions where improvements have been made.
Is there perhaps an up-to-date answer to this question: can we now ping the health of Logstash out of the box?
I found this: https://www.elastic.co/guide/en/logstash/current/monitoring.html
By default, Logstash exposes a monitoring API on port 9600 (or the first free port in the range 9600-9700); however, it is only reachable locally unless you bind a host in logstash.yml:
http.host: "mylogstash.mydomain.com"
It works just as well on all interfaces with
http.host: "0.0.0.0"
Tested with Kubernetes probes.
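For reference, a Kubernetes readiness probe against the monitoring API might look like this (the timings are illustrative, and it assumes Logstash was configured to bind the API as described above):

```yaml
# Hypothetical readiness probe hitting the Logstash monitoring API
readinessProbe:
  httpGet:
    path: /
    port: 9600
  initialDelaySeconds: 30
  periodSeconds: 10
```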
Yes, you can bind the API to public interfaces, but it is bound to localhost by default for good reasons: it performs neither authentication nor authorization, and it can be used by a malicious party for denial of service (e.g., logging is configurable via the API, and trace-level logging can flood the disk).