Logstash: [ERROR][logstash.pipeline] A plugin had an unrecoverable error. Will restart this plugin

Created on 18 Nov 2016  ·  12 Comments  ·  Source: elastic/logstash

I have logstash-5.0.1-1.noarch installed on a Red Hat server. ELK runs on one server and Filebeat is installed on another. The Filebeat config looks good and shows no errors, and it is pushing logs to the Logstash server. However, the data is not being pushed from Logstash into the Elasticsearch database. The error below appears when the Logstash pipeline starts.
Here is the configuration; please help me figure out the issue.

Below is /etc/logstash/logstash.yml:

path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
log.level: debug
path.logs: /var/log/logstash
And here is /etc/logstash/conf.d/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["elasticserver.domain.name:9200"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    user => elasticuser
    password => elasticpassword
  }
}

Error output from /var/log/logstash/logstash-plain.log

[2016-11-18T14:11:05,138][DEBUG][logstash.agent ] starting agent
[2016-11-18T14:11:05,139][DEBUG][logstash.agent ] starting pipeline {:id=>"main"}
[2016-11-18T14:11:05,142][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2016-11-18T14:11:05,155][DEBUG][io.netty.channel.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 16
[2016-11-18T14:11:05,177][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.noKeySetOptimization: false
[2016-11-18T14:11:05,177][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
[2016-11-18T14:11:05,183][DEBUG][logstash.outputs.elasticsearch] Normalizing http path {:path=>nil, :normalized=>nil}
[2016-11-18T14:11:05,192][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2016-11-18T14:11:05,204][DEBUG][io.netty.util.internal.JavassistTypeParameterMatcherGenerator] Generated: io.netty.util.internal.__matchers__.org.logstash.beats.BatchMatcher
[2016-11-18T14:11:05,227][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.processId: 11821 (auto-detected)
[2016-11-18T14:11:05,230][DEBUG][io.netty.util.NetUtil ] Loopback interface: lo (lo, 127.0.0.1)
[2016-11-18T14:11:05,231][DEBUG][io.netty.util.NetUtil ] /proc/sys/net/core/somaxconn: 128
[2016-11-18T14:11:05,232][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:b2:06:ef (auto-detected)
[2016-11-18T14:11:05,399][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://~hidden~:[email protected]:9200"]}}
[2016-11-18T14:11:05,400][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["myelkserver.domain.name:9200"]}
[2016-11-18T14:11:05,403][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2016-11-18T14:11:05,407][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-18T14:11:05,415][DEBUG][logstash.agent ] Starting puma
[2016-11-18T14:11:05,416][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2016-11-18T14:11:05,418][DEBUG][logstash.api.service ] [api-service] start
[2016-11-18T14:11:05,434][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9601}
[2016-11-18T14:11:05,435][DEBUG][logstash.api.service ] [api-service] start
[2016-11-18T14:11:05,438][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
[2016-11-18T14:11:05,536][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:05 +0100}
[2016-11-18T14:11:06,545][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:06 +0100}
[2016-11-18T14:11:07,547][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:07 +0100}
[2016-11-18T14:11:08,551][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:08 +0100}
[2016-11-18T14:11:09,553][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:09 +0100}
[2016-11-18T14:11:10,408][DEBUG][logstash.pipeline ] Pushing flush onto pipeline
[2016-11-18T14:11:10,554][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:10 +0100}
[2016-11-18T14:11:11,557][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:11 +0100}
[2016-11-18T14:11:11,695][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: , host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: Address already in use

Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
sun.nio.ch.ServerSocketAdaptor.bind(sun/nio/ch/ServerSocketAdaptor.java:74)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:125)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:554)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1258)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:511)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:496)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:980)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:250)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:363)
io.netty.util.concurrent.SingleThreadEventExecutor.safeExecute(io/netty/util/concurrent/SingleThreadEventExecutor.java:451)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:418)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:401)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:877)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(io/netty/util/concurrent/DefaultThreadFactory.java:144)
java.lang.Thread.run(java/lang/Thread.java:745)
[2016-11-18T14:11:12,560][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-18 14:11:12 +0100}
[2016-11-18T14:11:12,695][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2016-11-18T14:11:12,705][WARN ][io.netty.channel.AbstractChannel] Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x6ab3d2e5]
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:840) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:342) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:335) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:765) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:475) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:80) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:74) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:85) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:330) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:281) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:277) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:259) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at org.logstash.beats.Server.listen(Server.java:70) [logstash-input-beats-3.1.8.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_73]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_73]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_73]
at java.lang.reflect.Method.invoke(Method.java:497) ~[?:1.8.0_73]
at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:436) [?:?]
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:300) [?:?]
at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:35) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:134) [?:?]
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:60) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) [?:?]
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) [?:?]
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182) [?:?]
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:168) [?:?]
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221) [?:?]
at org.jruby.ast.RescueNode.interpret(RescueNode.java:116) [?:?]
at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96) [?:?]
at org.jruby.ast.BeginNode.interpret(BeginNode.java:83) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) [?:?]
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74) [?:?]
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182) [?:?]
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326) [?:?]
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170) [?:?]
at org.jruby.ast.FCallOneArgNode.interpret(FCallOneArgNode.java:36) [?:?]
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105) [?:?]
at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:112) [?:?]
at org.jruby.runtime.Interpreted19Block.evalBlockBody(Interpreted19Block.java:206) [?:?]
at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:194) [?:?]
at org.jruby.runtime.Interpreted19Block.call(Interpreted19Block.java:125) [?:?]
at org.jruby.runtime.Block.call(Block.java:101) [?:?]
at org.jruby.RubyProc.call(RubyProc.java:300) [?:?]
at org.jruby.RubyProc.call(RubyProc.java:230) [?:?]
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:99) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
[2016-11-18T14:11:12,708][ERROR][io.netty.util.concurrent.DefaultPromise.rejectedExecution] Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:840) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:342) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:335) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:765) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:767) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:435) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:111) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1062) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:486) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:80) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:74) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:85) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:330) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:281) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:277) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:259) [netty-all-4.1.3.Final.jar:4.1.3.Final]

Most helpful comment

Something else that can cause the issue is if you have two identical input configurations in your Logstash config

input {
    beats {
        port => 5044
    }
}

twice, it will cause the same error

All 12 comments

Looking at Error: Address already in use, are you sure there isn't more than one Logstash instance running? Check with something like ps axf | grep logstash
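To turn that check into something that can't match its own grep, a sketch like this works (the bracketed character class in the pattern keeps grep from counting itself):

```shell
# Count running Logstash processes; '[l]ogstash' prevents grep from matching itself.
count=$(ps ax | grep -c '[l]ogstash' || true)
echo "logstash processes: $count"
if [ "$count" -gt 1 ]; then
  echo "more than one instance running; the second will fail to bind port 5044"
fi
```

If the count is greater than one, a second instance is almost certainly losing the race to bind the Beats port.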

Thanks. Two instances were running. It's fixed now. I will update the ticket.

Thank you.


We should shut down gracefully or provide a better error message when the port is in use; the stack trace is not useful to our users. I've created an issue here: https://github.com/logstash-plugins/logstash-input-beats/issues/161

You are right. I was not aware that Logstash uses initctl, so I used the kill command.

Thanks for the information.

regards,
Arun


Something else that can cause the issue is if you have two identical input configurations in your Logstash config

input {
    beats {
        port => 5044
    }
}

twice, it will cause the same error
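Since Logstash concatenates every file under path.config into a single pipeline, one way to spot this is to look for repeated listener definitions across the directory. A minimal sketch, using a throwaway directory in place of /etc/logstash/conf.d:

```shell
# Simulate a conf.d directory containing an accidental duplicate input.
confdir=$(mktemp -d)
printf 'input { beats { port => 5044 } }\n' > "$confdir/10-beats.conf"
printf 'input { beats { port => 5044 } }\n' > "$confdir/20-beats.conf"  # the duplicate
# Any listener line appearing more than once means two attempts to bind the same port.
grep -rh 'port =>' "$confdir" | sort | uniq -d
# → input { beats { port => 5044 } }
rm -rf "$confdir"
```

Run against the real conf.d, any output from the uniq -d step is a duplicated listener worth investigating.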

Thanks Julien.

Two instances was running, after killing one instance it worked like charm.
Thanks for email.

Regards,
Arun


João Duarte,

Two instances were running; I used the kill command, and it is working fine now.

Thank you.

Hi, maybe a newbie question.
We are setting up an ELK pipeline and are facing an error on Logstash similar to the one posted on this page, except that "Error: event executor terminated" is the actual error. What does this mean? Any help is appreciated.

][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: , host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: event executor terminated

@suraj1538
If you're using a Linux OS, run the following command and check whether two instances of Logstash are running:
ps aux | grep logstash

If two instances are running, kill the duplicate process and restart the Logstash server.
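Before killing anything, it can also help to confirm that the port is actually held; a sketch assuming a Linux box with ss available (substitute lsof -i :5044 where it is not):

```shell
# Report whether anything is already listening on the Beats port (5044 here).
if ss -tln 2>/dev/null | grep -q ':5044'; then
  echo "port 5044 is in use"
  # To see the owning PID (needs root): sudo ss -tlnp | grep ':5044'
else
  echo "port 5044 is free"
fi
```

If the port is free but the error persists, the problem is more likely a duplicated input in the config than a stray process.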

I'm getting the same issue. Any suggestions?

Logstash will read *.conf.bak files if you use path.config.

If anyone stumbles upon this: I had a logstash.conf.save file generated by the nano editor, and it caused Logstash to read the same config twice, which produced the address-already-in-use error.

Edit: it does not seem to be related to the filename; just having an extra file in the pipelines folder was enough for Logstash to read it, probably because I'm using https://github.com/deviantony/docker-elk
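One way to keep stray backup files (.conf.bak, .conf.save and the like) out of the pipeline is to point path.config at an explicit glob rather than the bare directory; a sketch for logstash.yml, assuming a Logstash version whose path.config accepts glob patterns:

```
path.config: /etc/logstash/conf.d/*.conf
```

With the bare directory, every file is read and concatenated, which is exactly how a leftover editor backup ends up defining the same beats listener twice.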
