Hi all,
I have yet another out-of-memory error. I'm not sure whether it is the same issue as one of those that are already open, so I opened another issue.
Here is some information:
Logstash.err
/var/log/logstash# less logstash.err
Exception in thread "Ruby-0-Thread-69: /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:78" java.lang.ArrayIndexOutOfBoundsException: -1
at org.jruby.runtime.ThreadContext.popRubyClass(ThreadContext.java:702)
at org.jruby.runtime.ThreadContext.postYield(ThreadContext.java:1266)
at org.jruby.runtime.ContextAwareBlockBody.post(ContextAwareBlockBody.java:29)
at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:198)
at org.jruby.runtime.Interpreted19Block.call(Interpreted19Block.java:125)
at org.jruby.runtime.Block.call(Block.java:101)
at org.jruby.RubyProc.call(RubyProc.java:300)
at org.jruby.RubyProc.call(RubyProc.java:230)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:99)
at java.lang.Thread.run(Thread.java:745)
Error: Your application used more memory than the safety cap of 4G.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
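For reference, on a Logstash 2.x package install the cap is usually raised via LS_HEAP_SIZE rather than by passing -J-Xmx directly (the path below assumes the Debian/Ubuntu package layout, and raising the cap only buys time if the real cause is a leak):

# /etc/default/logstash (use /etc/sysconfig/logstash on RPM-based systems)
LS_HEAP_SIZE="8g"

followed by a restart of the logstash service.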
Logstash.log
> tail logstash.log
{:timestamp=>"2016-03-09T10:10:50.359000+0100", :message=>"Beats input: unhandled exception", :exception=>#<SystemCallError: Unknown error - Daten脙录bergabe unterbrochen (broken pipe)>, :backtrace=>["org/jruby/RubyIO.java:1339:in `syswrite'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:438:in `send_ack'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:421:in `ack_if_needed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:404:in `read_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:261:in `json_data_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:178:in `feed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:311:in `compressed_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:178:in `feed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:311:in `compressed_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:178:in `feed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:389:in `read_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/lumberjack/beats/server.rb:369:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/logstash/inputs/beats_support/connection_handler.rb:33:in `accept'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/logstash/inputs/beats.rb:177:in `handle_new_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/logstash/inputs/beats_support/circuit_breaker.rb:42:in `execute'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/logstash/inputs/beats.rb:177:in `handle_new_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.1.3/lib/logstash/inputs/beats.rb:133:in `run'"], :level=>:error}
{:timestamp=>"2016-03-09T10:10:50.362000+0100", :message=>"Beats Input: Remote connection closed", :peer=>"172.28.29.202:21053", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: EOFError, End of file reached>, :level=>:warn}
{:timestamp=>"2016-03-09T10:13:10.040000+0100", :message=>"execution expired", :class=>"MultiJson::ParseError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.8/lib/jrjackson/jrjackson.rb:87:in `is_time_string?'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.8/lib/jrjackson/jrjackson.rb:85:in `is_time_string?'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.8/lib/jrjackson/jrjackson.rb:34:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapters/jr_jackson.rb:11:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapter.rb:21:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json.rb:119:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/serializer/multi_json.rb:24:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:259:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:76:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2016-03-09T10:21:52.921000+0100", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
{:timestamp=>"2016-03-09T10:24:04.139000+0100", :message=>"execution expired", :class=>"MultiJson::ParseError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.8/lib/jrjackson/jrjackson.rb:87:in `is_time_string?'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.8/lib/jrjackson/jrjackson.rb:85:in `is_time_string?'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.8/lib/jrjackson/jrjackson.rb:34:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapters/jr_jackson.rb:11:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapter.rb:21:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json.rb:119:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/serializer/multi_json.rb:24:in `load'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:259:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:76:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2016-03-09T10:26:25.129000+0100", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-03-09T11:33:57.782000+0100", :message=>"Beats input: unhandled exception", :exception=>java.lang.OutOfMemoryError: Java heap space, :backtrace=>[], :level=>:error}
{:timestamp=>"2016-03-09T11:34:16.269000+0100", :message=>"Exception flushing buffer at interval!", :error=>"Java heap space", :class=>"Java::JavaLang::OutOfMemoryError", :level=>:warn}
{:timestamp=>"2016-03-09T11:33:16.326000+0100", :message=>"Beats input: unhandled exception", :exception=>java.lang.OutOfMemoryError: Java heap space: failed reallocation of scalar replaced objects, :backtrace=>[], :level=>:error}
{:timestamp=>"2016-03-09T11:35:07.205000+0100", :message=>"Connection pool shut down", :class=>"Manticore::ClientStoppedException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:76:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
_Sorry, I wasn't sure how to post that one._
Logstash.stdout
tail -n 40 logstash.stdout
[Full GC (Allocation Failure) 4160256K->4160254K(4160256K), 17,9906100 secs]
[Full GC (Allocation Failure) 4160255K->4160242K(4160256K), 18,0344218 secs]
[GC (CMS Initial Mark) 4160255K(4160256K), 0,5704511 secs]
[Full GC (Allocation Failure) 4160256K->4160237K(4160256K), 17,8113709 secs]
[Full GC (Allocation Failure) 4160255K->4160245K(4160256K), 17,9895632 secs]
[GC (CMS Initial Mark) 4160246K(4160256K), 0,5730315 secs]
[Full GC (Allocation Failure) 4160253K->4160246K(4160256K), 17,8378445 secs]
[Full GC (Allocation Failure) 4160250K->4160250K(4160256K), 17,9981010 secs]
[Full GC (Allocation Failure) 4160250K->4160250K(4160256K), 17,9900151 secs]
[GC (CMS Initial Mark) 4160255K(4160256K), 0,5718444 secs]
[Full GC (Allocation Failure) 4160255K->4160245K(4160256K), 17,8228866 secs]
[Full GC (Allocation Failure) 4160254K->4160244K(4160256K), 17,9894688 secs]
[GC (CMS Initial Mark) 4160249K(4160256K), 0,5709069 secs]
[Full GC (Allocation Failure) 4160255K->4160243K(4160256K), 17,9990528 secs]
[Full GC (Allocation Failure) 4160255K->4160249K(4160256K), 18,0026073 secs]
[GC (CMS Initial Mark) 4160249K(4160256K), 0,5705746 secs]
[Full GC (Allocation Failure) 4160256K->4160251K(4160256K), 17,8352550 secs]
[Full GC (Allocation Failure) 4160255K->4160242K(4160256K), 18,0327543 secs]
[GC (CMS Initial Mark) 4160242K(4160256K), 0,5728894 secs]
[Full GC (Allocation Failure) 4160252K->4160246K(4160256K), 17,8223284 secs]
[Full GC (Allocation Failure) 4160250K->4160250K(4160256K), 18,0158114 secs]
[Full GC (Allocation Failure) 4160250K->4160250K(4160256K), 17,9888721 secs]
[GC (CMS Initial Mark) 4160252K(4160256K), 0,5733979 secs]
[Full GC (Allocation Failure) 4160255K->4160244K(4160256K), 20,5187668 secs]
[Full GC (Allocation Failure) 4160255K->4160252K(4160256K), 18,0412422 secs]
[Full GC (Allocation Failure) 4160255K->4160255K(4160256K), 17,8693451 secs]
[Full GC (Allocation Failure) 4160255K->4160243K(4160256K), 17,9897432 secs]
[Full GC (Allocation Failure) 4160254K->4160223K(4160256K), 17,8899346 secs]
[Full GC (Allocation Failure) 4160255K->4160224K(4160256K), 17,8574930 secs]
[Full GC (Allocation Failure) 4160255K->4160230K(4160256K), 17,9998782 secs]
[Full GC (Allocation Failure) 4160255K->4160087K(4160256K), 20,2106704 secs]
[GC (CMS Initial Mark) 4160087K(4160256K), 0,5722681 secs]
[Full GC (Allocation Failure) 4160255K->4160141K(4160256K), 20,2961062 secs]
[Full GC (Allocation Failure) 4160255K->4160212K(4160256K), 20,6439829 secs]
[Full GC (Allocation Failure) 4160255K->4160163K(4160256K), 20,7753447 secs]
[Full GC (Allocation Failure) 4160255K->4156962K(4160256K), 17,8851368 secs]
[GC (CMS Initial Mark) 4156963K(4160256K), 0,5701982 secs]
[Full GC (Allocation Failure) 4160255K->4133779K(4160256K), 20,5109019 secs]
[Full GC (System.gc()) 4140231K->4130294K(4160256K), 20,0610806 secs]
[GC (CMS Initial Mark) 4130295K(4160256K), 0,5713511 secs]
Those are all the logs regarding Logstash.
Here are all the configs, concatenated together:
root@Logserver:/etc/logstash/conf.d# cat *.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    tags => ["beat"]
  }
}
input {
  file {
    path => "/var/log/fw1.log"
    tags => ["fw1"]
  }
}
input {
  beats {
    port => 5045
    ssl => false
    tags => ["beat"]
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      locale => "en"
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
filter {
  if "fw1" in [tags] {
    kv {
      field_split => "|"
      remove_field => ["message"]
    }
  }
}
output {
  if "fw1" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      sniffing => true
      manage_template => false
      index => "fw1-%{+YYYY.MM.dd}"
      document_type => "fw1-log"
    }
  }
  if "beat" in [tags] {
    if "topbeat" in [tags] {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "topbeat-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
    else if "winlogbeat" in [tags] {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "winlogbeat-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
    else {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "filebeat-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
  }
}
output {
  if "beat" in [tags] {
    if "topbeat" in [tags] {
      file {
        path => "/var/log/topbeat.log"
      }
    }
    else if "winlogbeat" in [tags] {
      file {
        path => "/var/log/winlogbeat.log"
      }
    }
    else {
      file {
        path => "/var/log/filebeat.log"
      }
    }
  }
}
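For illustration only (the sample log line below is made up), the syslog grok filter above turns a line such as

Mar  9 10:10:50 webserver01 sshd[1234]: Failed password for root from 10.0.0.5 port 22 ssh2

into roughly these fields:

syslog_timestamp => "Mar  9 10:10:50"
syslog_hostname  => "webserver01"
syslog_program   => "sshd"
syslog_pid       => "1234"
syslog_message   => "Failed password for root from 10.0.0.5 port 22 ssh2"
received_at      => copy of @timestamp
received_from    => copy of the host field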
I have a heap dump, but it is too big to upload. If you need it, I can post some screenshots from the Eclipse Memory Analyzer.
I think the bug might be in the Elasticsearch output plugin, since when I disable it, Logstash won't crash.
Is there anything else I can provide to help find the bug?
Thank you for your help.
@humpalum hope you don't mind, I edited your comment just to wrap the log files in code blocks.
I'll check it out. Which version of Logstash is this?
Thanks
It's Logstash 2.2.2.

Maybe these graphs help somehow.
At 13:00 I started Logstash again.
The Java usage appears to cover both Logstash and Elasticsearch.
I also have Logstash 2.2.2 running on Ubuntu 14.04 with Java 8 and one winlogbeat client logging, and I have the same problem. Logstash fails after a period of time with an OOM error. I am currently watching the system monitor, and the Java process for Logstash is just going up and up in memory usage, about 60 MB an hour...
I run Logstash 2.2.2 with the logstash-input-lumberjack (2.0.5) plugin and have only one source of logs so far (one vhost in Apache), and I am getting the OOM error as well. I got it before with the heap set to 1 GB, and after that OOM I increased it to 2 GB, but got another OOM after a week.
Is there anything else we can provide to help fix the bug?
As I said, my guess is that it's a problem with the Elasticsearch output.
@monsoft @jkjepson Do you guys also have an Elasticsearch output?
Could it be a problem where Elasticsearch can't index something, Logstash recognizes this, and then runs out of memory after some time?
I am experiencing the same issue on my two Logstash instances as well, both of which have the Elasticsearch output. I added the -w flag now and will gather what I can from the logs. I will see if I can match the ES logs with Logstash at the time of the crash next time it goes down.
A heap dump would be very useful here. I'm currently trying to replicate this but haven't been successful thus far.
My heap dump is 1.7 GB.
Any preference where to upload it?
Can you try uploading to https://zi2q7c.s.cld.pt? Click on "UPLOAD DE FICHEIROS" or drag and drop.
@humpalum thank you! This is extremely helpful!
Apparently there are thousands of duplicate HttpClient/Manticore objects, which points to sniffing (fetching the current node list from the cluster + updating connections) leaking objects.
I'm doing some further investigation.
Glad I can help.
Tell me if I can provide further information!
@humpalum can you post the output section of your config? Which settings are you using in the ES output?
The output section is already in my first post.
:+1:
Here is my output section:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
You have sniffing enabled in the output. Please see my issue; it looks like sniffing causes a memory leak:
https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/392
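As a temporary workaround (a sketch only, reusing the output section posted above), turning sniffing off keeps Logstash talking to the configured hosts instead of repeatedly re-fetching the node list:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}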
closing this in favor of https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/392
Oops, yes, I have sniffing enabled as well in my output configuration. Going to switch it off and see.
Many thanks for the help!
For anyone reading this, it has been fixed in plugin version 2.5.3.
To install this on your LS 2.2:
bin/plugin install --version 2.5.3 logstash-output-elasticsearch
We'll be releasing LS 2.3 soon with this fix included
Hi,
I am trying to upload files of about 13 GB into Elasticsearch using Logstash 5, but I keep getting an out-of-memory error.
I have tried increasing the LS_HEAPSIZE, but to no avail.
Can someone please help?
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /home/geri/logstash-5.1.1/logs which is now configured via log4j2.properties
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid18194.hprof ...
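For Logstash 5.x the heap is normally set in config/jvm.options rather than through LS_HEAPSIZE; a minimal sketch (the path assumes the default 5.1.1 archive layout, and the sizes are only examples) would be:

# logstash-5.1.1/config/jvm.options
-Xms4g
-Xmx4g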
@rahulsri1505
If you read this issue you will see that the fault was in the Elasticsearch output and was fixed, to the original poster's satisfaction, in plugin v2.5.3.
As you are having issues with LS 5, it is as likely as not that you are experiencing a different problem.
Please open a new issue. The 'new issue template' instructs you to post details - please give us as much content as you can; it will help us to help you.
@guyboertje
Thanks for the quick response!
I have opened a new issue, #6460, for this.
Gentlemen, I have started to see an OOM error in Logstash 6.x.
Any advice?
ory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,536][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,537][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,537][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,537][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
[2018-04-02T16:14:47,537][INFO ][org.logstash.beats.BeatsHandler] [local: 10.16.11.222:5044, remote: 10.16.11.67:42102] Handling exception: failed to allocate 83886080 byte(s) of direct memory (used: 4201761716, max: 4277534720)
@sanky186
Please try to upgrade to the latest beats input:
bin/logstash-plugin update logstash-input-beats
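To confirm the installed version afterwards (assuming a default install layout):

bin/logstash-plugin list --verbose logstash-input-beats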
@jakelandis Excellent suggestion; Logstash now runs for longer, but it still terminates with an out-of-memory exception, in spite of me assigning 6 GB of max JVM heap.
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
[2018-04-06T12:37:14,849][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 5326925084, max: 5333843968)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
@sanky186 - I would suggest reducing pipelining and dropping the batch size on the Beats client; it sounds like the Beats client may be overloading the Logstash server.
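If the client is Filebeat, a minimal sketch of those two knobs (option names are from the Filebeat 6.x output.logstash settings; the host is the Logstash address from the log above, and the values are only starting points to tune from):

output.logstash:
  hosts: ["10.16.11.222:5044"]
  bulk_max_size: 1024   # send smaller batches per request
  pipelining: 1         # keep fewer batches in flight per connection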