My client node is suddenly crashing with this error:
[2017-01-02T20:40:22,663][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:141)
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:76)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:78)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1301)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:914)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:99)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:140)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:536)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:490)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
at java.lang.Thread.run(Thread.java:745)
What triggers this? I suspect it might be due to too many concurrent connections, but the error itself tells me nothing.
There should be a caused-by exception below this. Can you please share it here?
It would look something like:
\[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2},\d{3}\]\[ERROR\]\[o\.e\.b\.ElasticsearchUncaughtExceptionHandler\] \[\] fatal error in thread \[Thread-\d+\], exiting
with another exception stack trace below it.
[2017-01-02T21:38:53,488][WARN ][o.e.m.j.JvmGcMonitorService] [elasticclient] [gc][2285] overhead, spent [3.5s] collecting in the last [3.8s]
[2017-01-02T21:38:49,758][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elasticclient] fatal error in thread [Thread-5], exiting
java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656) ~[?:?]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237) ~[?:?]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221) ~[?:?]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141) ~[?:?]
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:247) ~[?:?]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160) ~[?:?]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151) ~[?:?]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) ~[?:?]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:536) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:490) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873) ~[?:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
[2017-01-02T21:38:49,689][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elasticclient] fatal error in thread [Thread-8], exiting
java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656) ~[?:?]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237) ~[?:?]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221) ~[?:?]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141) ~[?:?]
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:247) ~[?:?]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160) ~[?:?]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151) ~[?:?]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) ~[?:?]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:536) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:490) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873) ~[?:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
This started happening after I increased Logstash pipeline.workers to 164, pipeline.output.workers to 2, and pipeline.batch.size to 512. I can't figure out which counter I should be watching. The client node is running with node.data: false and node.master: false; the heap is 5g at the moment and the server has 10g.
Node stats (taken after a restart, though):
{
"_nodes": {
"total": 1,
"successful": 1,
"failed": 0
},
"cluster_name": "elasticsearch",
"nodes": {
"Dvjln3ldSn2cwW4K9Zz1Kw": {
"timestamp": 1483426687178,
"name": "elasticclient",
"transport_address": "xxxxx:9300",
"host": "xxxxx",
"ip": "xxxxxx:9300",
"roles": [
"ingest"
],
"indices": {
"docs": {
"count": 0,
"deleted": 0
},
"store": {
"size_in_bytes": 0,
"throttle_time_in_millis": 0
},
"indexing": {
"index_total": 0,
"index_time_in_millis": 0,
"index_current": 0,
"index_failed": 0,
"delete_total": 0,
"delete_time_in_millis": 0,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 0,
"query_time_in_millis": 0,
"query_current": 0,
"fetch_total": 0,
"fetch_time_in_millis": 0,
"fetch_current": 0,
"scroll_total": 0,
"scroll_time_in_millis": 0,
"scroll_current": 0,
"suggest_total": 0,
"suggest_time_in_millis": 0,
"suggest_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 0,
"total_time_in_millis": 0,
"total_docs": 0,
"total_size_in_bytes": 0,
"total_stopped_time_in_millis": 0,
"total_throttled_time_in_millis": 0,
"total_auto_throttle_in_bytes": 0
},
"refresh": {
"total": 0,
"total_time_in_millis": 0
},
"flush": {
"total": 0,
"total_time_in_millis": 0
},
"warmer": {
"current": 0,
"total": 0,
"total_time_in_millis": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"total_count": 0,
"hit_count": 0,
"miss_count": 0,
"cache_size": 0,
"cache_count": 0,
"evictions": 0
},
"fielddata": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 0,
"memory_in_bytes": 0,
"terms_memory_in_bytes": 0,
"stored_fields_memory_in_bytes": 0,
"term_vectors_memory_in_bytes": 0,
"norms_memory_in_bytes": 0,
"points_memory_in_bytes": 0,
"doc_values_memory_in_bytes": 0,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0,
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp": -9223372036854776000,
"file_sizes": {
}
},
"translog": {
"operations": 0,
"size_in_bytes": 0
},
"request_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1483426687062,
"cpu": {
"percent": 59,
"load_average": {
"1m": 1.52,
"5m": 1.41,
"15m": 0.72
}
},
"mem": {
"total_in_bytes": 10368720896,
"free_in_bytes": 3572068352,
"used_in_bytes": 6796652544,
"free_percent": 34,
"used_percent": 66
},
"swap": {
"total_in_bytes": 4294963200,
"free_in_bytes": 4294963200,
"used_in_bytes": 0
}
},
"process": {
"timestamp": 1483426687062,
"open_file_descriptors": 610,
"max_file_descriptors": 65536,
"cpu": {
"percent": 55,
"total_in_millis": 929210
},
"mem": {
"total_virtual_in_bytes": 9253572608
}
},
"jvm": {
"timestamp": 1483426687062,
"uptime_in_millis": 453992,
"mem": {
"heap_used_in_bytes": 2061345056,
"heap_used_percent": 38,
"heap_committed_in_bytes": 5333843968,
"heap_max_in_bytes": 5333843968,
"non_heap_used_in_bytes": 83551800,
"non_heap_committed_in_bytes": 88379392,
"pools": {
"young": {
"used_in_bytes": 218310616,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 279183360,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 34865152,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 34865152,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 1808169288,
"max_in_bytes": 5019795456,
"peak_used_in_bytes": 4707353216,
"peak_max_in_bytes": 5019795456
}
}
},
"threads": {
"count": 48,
"peak_count": 49
},
"gc": {
"collectors": {
"young": {
"collection_count": 206,
"collection_time_in_millis": 50089
},
"old": {
"collection_count": 17,
"collection_time_in_millis": 3922
}
}
},
"buffer_pools": {
"direct": {
"count": 38,
"used_in_bytes": 228089864,
"total_capacity_in_bytes": 228089863
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
},
"classes": {
"current_loaded_count": 9685,
"total_loaded_count": 9689,
"total_unloaded_count": 4
}
},
"thread_pool": {
"bulk": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"fetch_shard_started": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"fetch_shard_store": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"flush": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"force_merge": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"generic": {
"threads": 4,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 4,
"completed": 486
},
"get": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"index": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"listener": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"management": {
"threads": 3,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 3,
"completed": 584
},
"refresh": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"search": {
"threads": 7,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 7,
"completed": 108
},
"snapshot": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"warmer": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
}
},
"fs": {
"timestamp": 1483426687063,
"total": {
"total_in_bytes": 90230251520,
"free_in_bytes": 83245780992,
"available_in_bytes": 78655537152,
"spins": "true"
},
"data": [
{
"path": "/var/lib/elasticsearch/nodes/0",
"mount": "/var (/dev/mapper/vg-var)",
"type": "ext4",
"total_in_bytes": 90230251520,
"free_in_bytes": 83245780992,
"available_in_bytes": 78655537152,
"spins": "true"
}
],
"io_stats": {
"devices": [
{
"device_name": "dm-4",
"operations": 298,
"read_operations": 0,
"write_operations": 298,
"read_kilobytes": 0,
"write_kilobytes": 1192
}
],
"total": {
"operations": 298,
"read_operations": 0,
"write_operations": 298,
"read_kilobytes": 0,
"write_kilobytes": 1192
}
}
},
"transport": {
"server_open": 112,
"rx_count": 142865,
"rx_size_in_bytes": 5043248958,
"tx_count": 142888,
"tx_size_in_bytes": 1598377422
},
"http": {
"current_open": 224,
"total_opened": 356
},
"breakers": {
"request": {
"limit_size_in_bytes": 3200306380,
"limit_size": "2.9gb",
"estimated_size_in_bytes": 4440736,
"estimated_size": "4.2mb",
"overhead": 1,
"tripped": 0
},
"fielddata": {
"limit_size_in_bytes": 3200306380,
"limit_size": "2.9gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"in_flight_requests": {
"limit_size_in_bytes": 5333843968,
"limit_size": "4.9gb",
"estimated_size_in_bytes": 2,
"estimated_size": "2b",
"overhead": 1,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 3733690777,
"limit_size": "3.4gb",
"estimated_size_in_bytes": 4440738,
"estimated_size": "4.2mb",
"overhead": 1,
"tripped": 0
}
},
"script": {
"compilations": 0,
"cache_evictions": 0
},
"discovery": {
"cluster_state_queue": {
"total": 0,
"pending": 0,
"committed": 0
}
},
"ingest": {
"total": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
},
"pipelines": {
}
}
}
}
}
I need to add to this. We have a single-node instance for our development needs. The heap size is 2.5 GB, and the server is crashing with the same OOM heap-space error from the network layer every 10-15 minutes.
It makes no sense; I am only running three bulk indexing processes. No searches are being executed whatsoever, and fielddata in memory is exactly 0. There is no reason for our 200 MB of data to require 2.5 GB of memory, especially when no searches are being performed and thus no fielddata is being loaded.
Something is really fishy with version 5.1.1.
Can either/both of you share a heap dump from when it OOMs? (Be careful since it might contain sensitive data. If needed, we can find a mechanism to share it privately.)
And can you also try running with -Dio.netty.recycler.maxCapacityPerThread=0 and let me know if the problem persists (grab a heap dump before changing this setting though)? Note that this could negatively impact performance.
@jasontedor can you please confirm whether the option should be io.netty.recycler.maxCapacityPerThread or io.netty.recycler.maxCapacity.default?
It should be the one I stated (-Dio.netty.recycler.maxCapacityPerThread=0), I don't think the latter does anything.
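For anyone passing this flag to a standalone transport-client JVM rather than to the Elasticsearch server itself (where it belongs in jvm.options or ES_JAVA_OPTS, as a later comment in this thread does), here is a minimal sketch of one way to do it. The property is read once when Netty's Recycler class initializes, so it has to be set before any Netty class loads; the cluster name, host, and port below are placeholders, not values from this thread.

import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class RecyclerFlagExample {
    public static void main(String[] args) throws Exception {
        // Equivalent to -Dio.netty.recycler.maxCapacityPerThread=0 on the client JVM's
        // command line; must run before any Netty class is initialized.
        System.setProperty("io.netty.recycler.maxCapacityPerThread", "0");

        Settings settings = Settings.builder()
                .put("cluster.name", "elasticsearch") // placeholder cluster name
                .build();

        // try-with-resources closes the client (and its Netty resources) when done.
        try (TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("localhost"), 9300))) {
            // ... issue requests as usual ...
        }
    }
}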
Closed by #22452
-Dio.netty.allocator.type=unpooled does not help either.
Having the same issues with 5.2.0, even though the heap size is 6g and available memory is 12GB. I pass in the following:
ES_JAVA_OPTS = -Xms6g -Xmx6g -Dio.netty.recycler.maxCapacityPerThread=0
The full log from starting the node to failing is at https://gist.github.com/JorritSalverda/eef328f1cf130f360409854394c321b6
The highest memory I've seen is 6.55GB, so I don't think it comes close to the 12GB limit at all.
As I mentioned in a previous comment, disabling the recycler is not expected to eliminate all causes of out of memory errors on the network layer. If you start up Elasticsearch with a 256m heap (and increase http.max_content_length) and send it a 512m request, you're going to have a bad time. This has nothing to do with whether or not the recycler is enabled. Disabling the recycler dealt with a specific problem, but if you're still seeing heap problems then you have to take a heap dump, see what is consuming the space, and adjust accordingly.
Okay, going to test with a bigger heap size. Is there any way to prevent a client from failing if too much data is being sent? Some way to throttle the sender?
I guess my bulk requests may be too large as well, but it's hard to know what the limits are.
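One way to keep the sender from overrunning itself or the cluster is to push everything through a BulkProcessor with a small bulk size and a low number of concurrent requests, so flushes wait instead of piling up. This is only a sketch under those assumptions: the Client you pass in is a placeholder, the limits are examples rather than recommendations, and it bounds what one client sends at a time rather than fixing the server-side OOM discussed above.

import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

public class ThrottledBulkExample {

    // Builds a BulkProcessor with conservative limits so the sending side is throttled.
    public static BulkProcessor throttledProcessor(Client client) {
        return BulkProcessor.builder(client, new BulkProcessor.Listener() {
                    @Override
                    public void beforeBulk(long executionId, BulkRequest request) {
                        // Could log request.numberOfActions() here.
                    }

                    @Override
                    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                        // Inspect response.hasFailures() and log or retry individual items.
                    }

                    @Override
                    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                        // The whole bulk failed (for example the node went away); log and back off.
                    }
                })
                .setBulkActions(1000)                                 // flush after 1000 actions
                .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))   // or after 5 MB, whichever comes first
                .setFlushInterval(TimeValue.timeValueSeconds(5))      // or after 5 seconds of inactivity
                .setConcurrentRequests(1)                             // at most one bulk in flight; further flushes wait for it
                .setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
                .build();
    }
}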
Is this problem solved now? The above method does not work!!!
I get the same error, running 5.2.0 on Ubuntu.
I have these settings for my BulkProcessor:
.setBulkActions(10000)
.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))
.setFlushInterval(TimeValue.timeValueSeconds(5))
.setConcurrentRequests(1)
.setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
I can't really figure out what is causing the following:
2017-04-24 14:27:28,064 [elasticsearch[_client_][transport_client_boss][T#1]] ERROR o.e.t.n.Netty4Utils Korrelasjonsid= fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:140)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:83)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:851)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
2017-04-24 14:27:30,019 [elasticsearch[_client_][generic][T#1]] INFO o.e.c.t.TransportClientNodesService Korrelasjonsid= failed to get node info for {#transport#-1}{Ayn44paGRqmiq27m_zwKVQ}{tsl0sofus-at-oppdrag01}{10.1.5.152:12500}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][10.1.5.152:12500][cluster:monitor/nodes/liveness] request_id [248] timed out after [47400ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:908)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017-04-24 14:27:30,020 [elasticsearch[_client_][transport_client_boss][T#1]] WARN o.e.t.n.Netty4Transport Korrelasjonsid= exception caught on transport layer [[id: 0xe86b4f6e, L:/10.1.5.152:52010 - R:tsl0sofus-at-oppdrag02/10.1.13.77:12500]], closing connection
org.elasticsearch.ElasticsearchException: java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.elasticsearch.transport.netty4.Netty4Transport.exceptionCaught(Netty4Transport.java:332)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:84)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:851)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at org.apache.lucene.util.CharsRef.toString(CharsRef.java:154)
at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:373)
at org.elasticsearch.index.Index.<init>(Index.java:63)
at org.elasticsearch.index.shard.ShardId.readFrom(ShardId.java:101)
at org.elasticsearch.index.shard.ShardId.readShardId(ShardId.java:95)
at org.elasticsearch.action.DocWriteResponse.readFrom(DocWriteResponse.java:208)
at org.elasticsearch.action.bulk.BulkItemResponse.readFrom(BulkItemResponse.java:307)
at org.elasticsearch.action.bulk.BulkItemResponse.readBulkItem(BulkItemResponse.java:296)
at org.elasticsearch.action.bulk.BulkResponse.readFrom(BulkResponse.java:128)
at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1372)
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1347)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280)
Still an issue in 5.4.0
@tomsommer Please see my previous comments on this subject.
I've tried the above options with ES 5.3.1 ... I'm still getting the same issue. When I run our test cases 6 times, every 6th time I get the same error. It's a single-node cluster with the settings below...
cluster.name: test
node.name: test-0
node.master: true
node.data: true
path.data: /tmp/elasticsearch
path.logs: /tmp/eslogs
If I increase the Java heap size it runs a few more times, but still crashes eventually...
12:49:31.022 INFO [PluginsService]: no modules loaded
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.transport.Netty3Plugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.transport.Netty4Plugin]
12:49:34.874 ERROR [Netty4Utils]: fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:140)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:83)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257)
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1301)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:914)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:99)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:140)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-26" java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:272)
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160)
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
12:49:34.938 WARN [Netty4Transport]: exception caught on transport layer [[id: 0x2e1e1733, L:/127.0.0.1:50952 - R:/127.0.0.1:9300]], closing connection
org.elasticsearch.ElasticsearchException: java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.transport.netty4.Netty4Transport.exceptionCaught(Netty4Transport.java:321) [transport-netty4-client-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:84) [transport-netty4-client-5.3.1.jar:5.3.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1301) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:914) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:99) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:140) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:272) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
... 6 common frames omitted
12:49:35.433 ERROR [Netty4Utils]: fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:140)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:83)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257)
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1301)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:914)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:99)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:140)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-27" java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:272)
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160)
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
12:49:35.434 WARN [Netty4Transport]: exception caught on transport layer [[id: 0x2e1e1733, L:/127.0.0.1:50952 - R:/127.0.0.1:9300]], closing connection
org.elasticsearch.ElasticsearchException: java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.transport.netty4.Netty4Transport.exceptionCaught(Netty4Transport.java:321) [transport-netty4-client-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:84) [transport-netty4-client-5.3.1.jar:5.3.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1301) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:914) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:99) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:140) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:272) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) ~[netty-buffer-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
... 6 common frames omitted
12:49:35.446 INFO [TransportClientNodesService]: failed to connect to node [{#transport#-1}{ZszR19SXT763_lMtxtRvcg}{127.0.0.1}{127.0.0.1:9300}], removed from nodes list
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:9300] general node connection failure
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:519) ~[elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:460) ~[elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:314) ~[elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:408) ~[elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:354) [elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:195) [elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:322) [elasticsearch-5.3.1.jar:5.3.1]
at vps.core.elastic.ElasticClient$.$anonfun$connect$1(ElasticClient.scala:225) [classes/:na]
at scala.collection.Iterator.foreach(Iterator.scala:929) ~[scala-library-2.12.1.jar:1.0.0-M1]
at scala.collection.Iterator.foreach$(Iterator.scala:929) ~[scala-library-2.12.1.jar:1.0.0-M1]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1406) ~[scala-library-2.12.1.jar:1.0.0-M1]
at scala.collection.IterableLike.foreach(IterableLike.scala:71) ~[scala-library-2.12.1.jar:1.0.0-M1]
at scala.collection.IterableLike.foreach$(IterableLike.scala:70) ~[scala-library-2.12.1.jar:1.0.0-M1]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[scala-library-2.12.1.jar:1.0.0-M1]
at vps.core.elastic.ElasticClient$.connect(ElasticClient.scala:224) [classes/:na]
at vps.core.elastic.ElasticManagement.connect(ElasticManagement.scala:35) ~[classes/:na]
at vps.core.elastic.ElasticManagement.
at vps.core.elastic.ElasticManagement$.apply(ElasticManagement.scala:307) ~[classes/:na]
at vps.core.setup.TestCasePreSetup$.runPreSetup(TestCasePreSetup.scala:11) ~[test-classes/:na]
at vps.core.setup.TestCasePreSetup.runPreSetup(TestCasePreSetup.scala) ~[test-classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
at $f272e0d5d38461cd613e$$anonfun$core$5$$anonfun$apply$3.apply(build.sbt:61) ~[na:na]
at $f272e0d5d38461cd613e$$anonfun$core$5$$anonfun$apply$3.apply(build.sbt:60) ~[na:na]
at sbt.Tests$$anonfun$sbt$Tests$$partApp$1$1$$anonfun$apply$1.apply$mcV$sp(Tests.scala:174) ~[actions-0.13.13.jar:0.13.13]
at sbt.Tests$$anonfun$sbt$Tests$$fj$1$1.apply(Tests.scala:173) ~[actions-0.13.13.jar:0.13.13]
at sbt.Tests$$anonfun$sbt$Tests$$fj$1$1.apply(Tests.scala:173) ~[actions-0.13.13.jar:0.13.13]
at sbt.std.TaskExtra$$anon$1$$anonfun$fork$1$$anonfun$apply$1.apply(TaskExtra.scala:89) ~[task-system-0.13.13.jar:0.13.13]
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44) ~[task-system-0.13.13.jar:0.13.13]
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44) ~[task-system-0.13.13.jar:0.13.13]
at sbt.std.Transform$$anon$4.work(System.scala:63) ~[task-system-0.13.13.jar:0.13.13]
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228) ~[tasks-0.13.13.jar:0.13.13]
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228) ~[tasks-0.13.13.jar:0.13.13]
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17) ~[control-0.13.13.jar:0.13.13]
at sbt.Execute.work(Execute.scala:237) ~[tasks-0.13.13.jar:0.13.13]
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228) ~[tasks-0.13.13.jar:0.13.13]
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228) ~[tasks-0.13.13.jar:0.13.13]
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159) ~[tasks-0.13.13.jar:0.13.13]
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28) ~[tasks-0.13.13.jar:0.13.13]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_91]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]
Caused by: java.lang.IllegalStateException: handshake failed
at org.elasticsearch.transport.TcpTransport.executeHandshake(TcpTransport.java:1568) ~[elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:502) ~[elasticsearch-5.3.1.jar:5.3.1]
... 46 common frames omitted
Caused by: org.elasticsearch.transport.TransportException: connection reset
at org.elasticsearch.transport.TcpTransport.onChannelClosed(TcpTransport.java:1610) ~[elasticsearch-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.netty4.Netty4Transport.access$100(Netty4Transport.java:94) ~[transport-netty4-client-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.netty4.Netty4Transport$ChannelCloseListener.operationComplete(Netty4Transport.java:401) ~[transport-netty4-client-5.3.1.jar:5.3.1]
at org.elasticsearch.transport.netty4.Netty4Transport$ChannelCloseListener.operationComplete(Netty4Transport.java:391) ~[transport-netty4-client-5.3.1.jar:5.3.1]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1058) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:686) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:664) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:607) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1276) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$1100(AbstractChannelHandlerContext.java:38) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext$13.run(AbstractChannelHandlerContext.java:614) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[netty-common-4.1.7.Final.jar:4.1.7.Final]
... 1 common frames omitted
last core/test:testOnly
[warn] javaOptions will be ignored, fork is set to false
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{ZszR19SXT763_lMtxtRvcg}{127.0.0.1}{127.0.0.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:344)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:242)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:404)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1237)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at vps.core.elastic.ElasticClient.indexExists(ElasticClient.scala:43)
at vps.core.elastic.ElasticClient.mappingExists(ElasticClient.scala:37)
at vps.core.elastic.ElasticManagement.$anonfun$schemaExists$1(ElasticManagement.scala:138)
at scala.runtime.java8.JFunction1$mcZI$sp.apply(JFunction1$mcZI$sp.java:12)
at scala.collection.Iterator.forall(Iterator.scala:941)
at scala.collection.Iterator.forall$(Iterator.scala:939)
at scala.collection.AbstractIterator.forall(Iterator.scala:1406)
at scala.collection.IterableLike.forall(IterableLike.scala:74)
at scala.collection.IterableLike.forall$(IterableLike.scala:73)
at scala.collection.AbstractIterable.forall(Iterable.scala:54)
at vps.core.elastic.ElasticManagement.schemaExists(ElasticManagement.scala:137)
at vps.core.setup.TestCasePreSetup$.runPreSetup(TestCasePreSetup.scala:12)
at vps.core.setup.TestCasePreSetup.runPreSetup(TestCasePreSetup.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at $f272e0d5d38461cd613e$$anonfun$core$5$$anonfun$apply$3.apply(build.sbt:61)
at $f272e0d5d38461cd613e$$anonfun$core$5$$anonfun$apply$3.apply(build.sbt:60)
at sbt.Tests$$anonfun$sbt$Tests$$partApp$1$1$$anonfun$apply$1.apply$mcV$sp(Tests.scala:174)
at sbt.Tests$$anonfun$sbt$Tests$$fj$1$1.apply(Tests.scala:173)
at sbt.Tests$$anonfun$sbt$Tests$$fj$1$1.apply(Tests.scala:173)
at sbt.std.TaskExtra$$anon$1$$anonfun$fork$1$$anonfun$apply$1.apply(TaskExtra.scala:89)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[error] (core/test:testOnly) java.lang.reflect.InvocationTargetException
@ninadvps Please see my previous comment on this subject.
@jasontedor I have, and I tried the suggestions with no resolution yet. Am I missing something?
Still an issue in 5.4.2:
[2017-06-24T15:45:57,221][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:81)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at org.elasticsearch.http.netty4.cors.Netty4CorsHandler.channelRead(Netty4CorsHandler.java:76)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
[2017-06-24T15:45:57,240][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)
at org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport.exceptionCaught(SecurityNetty4HttpServerTransport.java:61)
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:82)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at org.elasticsearch.http.netty4.cors.Netty4CorsHandler.channelRead(Netty4CorsHandler.java:76)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
I also see the same error as in the OP, but without any cause exception. Since we use more than one Elasticsearch client connection, we have a hell of a lot of trouble with the Elasticsearch client code: keeping one connection open but idle results in an out-of-memory error after some time, and we get various other exceptions. The client code is very unstable and, from what I've seen so far, far too big and complex for what it actually does (sending some bulk JSON strings to a REST service).
We are currently thinking about removing it entirely and submitting our own JSON strings, as we have found it basically impossible to get the client code working stably.
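For what it's worth, here is a minimal sketch of that approach using only the JDK's HttpURLConnection, so it does not depend on any Elasticsearch client library. The URL, index name, type, and documents are made up for illustration; the bulk body is newline-delimited JSON with a trailing newline, which the _bulk endpoint expects.

```scala
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

object BulkViaRest {
  // Hypothetical endpoint and payload, for illustration only.
  val bulkUrl = "http://localhost:9200/_bulk"

  // Each action line is followed by its source line; the body must end with a newline.
  val payload: String =
    """{"index":{"_index":"my-index","_type":"doc","_id":"1"}}
      |{"field1":"value1"}
      |{"index":{"_index":"my-index","_type":"doc","_id":"2"}}
      |{"field1":"value2"}
      |""".stripMargin

  def main(args: Array[String]): Unit = {
    val conn = new URL(bulkUrl).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    conn.setRequestProperty("Content-Type", "application/x-ndjson")

    val out = conn.getOutputStream
    try out.write(payload.getBytes(StandardCharsets.UTF_8))
    finally out.close()

    // A non-2xx status, or "errors": true in the response body, means some
    // actions failed; a real client would parse the response and retry those.
    println(s"HTTP ${conn.getResponseCode}")
    conn.disconnect()
  }
}
```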
Still an issue in 5.5.3
And my bulk requests are very small
Has anyone solved this problem?
Still an issue in 5.6.4... any updates?
Getting the same error in 6.4.3 on RH 7.6.
I'm investigating this with a user who is trying to apply SSL. It may be because they are using a wildcard certificate.
Is this still an issue on ES 7?
Most helpful comment
I need to add to this. We have a single-node instance for our development needs. The heap size is 2.5 GB, and the server is crashing with the same OOM heap-size error from the network layer every 10-15 minutes.
It makes no sense: I am only running three bulk indexing processes. No searches are being executed whatsoever, and fielddata in memory is exactly 0. There is no reason for our 200 MB of data to require 2.5 GB of memory, especially when no searches are being performed and thus no fielddata is being loaded.
Something is really fishy with version 5.1.1.
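If the OOM is being triggered by bulk indexing, one thing that might help is bounding how much the client buffers and how many bulk requests are in flight at once. Below is a rough sketch using the 5.x transport client's BulkProcessor; the thresholds (500 actions, 5 MB, one concurrent request, 5-second flush) are illustrative values rather than recommendations, and `client` is assumed to be your existing TransportClient.

```scala
import org.elasticsearch.action.bulk.{BulkProcessor, BulkRequest, BulkResponse}
import org.elasticsearch.client.Client
import org.elasticsearch.common.unit.{ByteSizeUnit, ByteSizeValue, TimeValue}

object BoundedBulk {
  // Builds a BulkProcessor that flushes small batches and limits concurrency,
  // so a burst of indexing cannot queue an unbounded amount of data in memory.
  def boundedBulkProcessor(client: Client): BulkProcessor =
    BulkProcessor.builder(client, new BulkProcessor.Listener {
      def beforeBulk(executionId: Long, request: BulkRequest): Unit = ()
      def afterBulk(executionId: Long, request: BulkRequest, response: BulkResponse): Unit =
        if (response.hasFailures) println(response.buildFailureMessage())
      def afterBulk(executionId: Long, request: BulkRequest, failure: Throwable): Unit =
        failure.printStackTrace()
    })
      .setBulkActions(500)                                // flush after 500 actions
      .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) // or after ~5 MB of payload
      .setConcurrentRequests(1)                           // at most one bulk in flight
      .setFlushInterval(TimeValue.timeValueSeconds(5))    // flush idle buffers
      .build()
}
```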