Logstash: Add ability to enable/disable pipelines

Created on 13 Aug 2017  ·  12 Comments  ·  Source: elastic/logstash

In Logstash 6.0, we can add or remove pipelines.
It would be great to be able to disable a pipeline (without removing it and all its configuration), so that we can re-enable it later.
This is particularly useful for pipelines that we don't want to run all the time, but only on demand for a limited period.
Obviously, this feature would be even better if it could be done using a REST API or something like that, so that it doesn't require a Logstash restart.

enhancement multiple pipelines


All 12 comments

This feature request aside, I must point out that multiple pipelines are compatible with dynamic config reloading. So if Logstash is started with -r, adding or removing an entry in pipelines.yml will add (and start) or remove (and stop) a pipeline.

So commenting/uncommenting a pipeline config in "pipelines.yml" would disable/enable a pipeline?

@fbaligand yes, commenting it out will stop the pipeline and delete it internally; uncommenting it will add and start it again.
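To make that workaround concrete, a minimal pipelines.yml sketch (the pipeline ids and paths below are placeholders, not taken from this thread):

# With Logstash started with -r, commenting out an entry stops and removes
# that pipeline on the next reload; restoring it adds and starts it again.
- pipeline.id: always-on
  path.config: "/etc/logstash/pipelines/always-on/*.conf"

# - pipeline.id: on-demand
#   path.config: "/etc/logstash/pipelines/on-demand/*.conf"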

Ok, thank you!

However, I maintain the feature request, because I'm very interested in being able to enable/disable (or start/stop) a pipeline using a scripted command.

@jsvd it would be great if this were exposed as an API endpoint to stop pipelines; otherwise it's a CM tool exercise. The use case is more apparent when you're running a large pool of Logstash hosts and want to temporarily disable a pipeline while doing other maintenance, or reduce the number of concurrent pipelines to reduce threads or pressure on ES. Making CM changes usually requires additional approval, and if you're managing CM with a VCS, it equates to a production code change.

I concede this is a convenience feature.

I would not concede this as a convenience feature: production systems (by best practice if not policy) would never a) change a configuration file on the fly or b) rename a file on a server to stop/start a service. Not to mention it's wholly unscalable. So... bump?

Sweet! I usually say to 'believe in the roadmap' and assume something like this is on the way (because why not). I'm not familiar with it, but enable/disable through Kibana, or through the 6.6 ES APIs or such, is exactly what is requested here. I was replying to the comment above, I guess, and these alternatives are what we will try in the meantime. An API or feature on the way is enough for me (and my customer)! 👌 Thanks for the quick reply @fbaligand! Is there a GH issue we can track and link here, and perhaps close this one?

Sorry, I just removed my previous comment as it is totally out of context.
I misunderstood your previous comment.

And so, no, to my knowledge there is no Kibana or Elasticsearch feature that solves this issue.
Sorry.

For us, pipelines are proving useful for isolating logs based on certain conditions, performing actions, and then outputting them in a certain way, while keeping an eye on the monitoring stats.

A pipeline in the real world would have the ability to stop the flow upstream in order to carry out some maintenance. In Elasticsearch, when performing tasks such as mapping updates, it's useful to stop ingestion long enough for the changes to be made, so new indices don't automatically get created.

While we _can_ fiddle with commenting out lines in YAML files, that feels inconsistent with the RESTful way of doing things that the Elastic stack is known for, and as mentioned above it doesn't scale.

FWIW I have the same gripe with vanilla Logstash (i.e. the main pipeline); stopping the entire service when you need to make a mapping update always feels like using a sledgehammer to crack a nut. Maybe I'm doing it wrong, but at least afaik live config reloading isn't suitable for this.

It's especially relevant now as we have a separate pipeline for migrating to ECS, and we don't want it to interfere with ingestion on the main pipeline.

+1000 for @rozling's comment!
In my case, I have some pipelines that I activate for only a few hours and then need to disable again.
I would love a rest api to enable/disable a pipeline!

I just came across this issue while trying to determine whether this was possible. The use case is running Logstash and the associated configs in a Docker container. The container could have up to 35 pipelines enabled; however, not every instance of the container needs all of the pipelines (depending on the environment and scale requirements).

If a config option such as pipeline.enable were available, it could be used together with environment variables (another needed feature, requested here) like this...

- pipeline.id: barracuda
  pipeline.enable: ${LS_BARRACUDA_ENABLE}
  pipeline.workers: ${LS_BARRACUDA_WORKERS}
  path.config: "/etc/logstash/pipelines/barracuda/*.conf"

- pipeline.id: forcepoint
  pipeline.enable: ${LS_FORCEPOINT_ENABLE}
  pipeline.workers: ${LS_FORCEPOINT_WORKERS}
  path.config: "/etc/logstash/pipelines/forcepoint/*.conf"

The docker-compose.yml file which starts the container would then be able to control which pipelines are started by defining environment variables. For example...

environment:
  LS_BARRACUDA_ENABLE: false
  LS_BARRACUDA_WORKERS: 2

  LS_FORCEPOINT_ENABLE: true
  LS_FORCEPOINT_WORKERS: 4

This would eliminate the need to produce a container for every possible combination of pipelines.
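For context, a minimal sketch of where that environment block might sit in a compose file (the image tag and volume mounts are placeholders, and pipeline.enable itself is the proposed option, not something Logstash supports today):

version: "3.7"
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.0   # placeholder tag
    volumes:
      # placeholder mounts for pipelines.yml and the per-pipeline configs
      - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./pipelines:/etc/logstash/pipelines
    environment:
      LS_BARRACUDA_ENABLE: "false"
      LS_BARRACUDA_WORKERS: "2"
      LS_FORCEPOINT_ENABLE: "true"
      LS_FORCEPOINT_WORKERS: "4"

Each environment (dev, staging, production) could then ship its own compose file or env overrides instead of a separate image per pipeline combination.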
