Elasticsearch version (bin/elasticsearch --version):
Version: 7.6.2, Build: default/tar/ef48eb35cf30adf4db14086e8aabd07ef6fb113f/2020-03-26T06:34:37.794943Z, JVM: 13.0.2
Plugins installed: [] (none)
JVM version (java -version):
openjdk version "13.0.2" 2020-01-14
OpenJDK Runtime Environment AdoptOpenJDK (build 13.0.2+8)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 13.0.2+8, mixed mode, sharing)
OS version (uname -a if on a Unix-like system):
Linux 4.14.104-95.84.amzn2.x86_64 #1 SMP Sat Mar 2 00:40:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
Expected: scroll searches against the .security index succeed as they did before. Actual: every scroll search against .security fails with "Trying to create too many scroll contexts", even though the number of open scroll contexts is far below the limit.
Steps to reproduce:
Search API with scroll fails:
curl -u elastic -XGET -H 'Content-Type: application/json' "http://10.194.39.75:10250/.security/_search?scroll=1m&pretty" -d '{"size":1000}'
Enter host password for user 'elastic':
{
"error" : {
"root_cause" : [
{
"type" : "exception",
"reason" : "Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting."
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : ".security-7",
"node" : "OZWJ69LCRf6yCrIQ4GA1Bg",
"reason" : {
"type" : "exception",
"reason" : "Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting."
}
}
]
},
"status" : 500
}
Search API without scroll works:
curl -u elastic -XGET -H 'Content-Type: application/json' "http://10.194.39.75:10250/.security/_search?pretty" -d '{"size":1000}'
Enter host password for user 'elastic':
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 43,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : ".security-7",
"_type" : "_doc",
"_id" : "application-privilege_kibana-.kibana:all",
"_score" : 1.0,
...
I also posted this issue on discuss.elastic.co:
https://discuss.elastic.co/t/trying-to-create-too-many-scroll-contexts-for-security-index/243188
This worked well before.
The current scroll context limit should be more than enough.
Because of this issue, I cannot log in to Kibana.
Increasing search.max_open_scroll_context did not help (see the example below).
Scrolling other indices works fine; only the .security index is affected.
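For reference, a minimal sketch of how I raised the limit, assuming search.max_open_scroll_context can be updated dynamically through the cluster settings API (the value 5000 is only an example):
curl -u elastic -XPUT -H 'Content-Type: application/json' "http://10.194.39.75:10250/_cluster/settings?pretty" -d '{"transient":{"search.max_open_scroll_context":5000}}'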
Provide logs (if relevant):
Kibana log:
{"type":"log","@timestamp":"2020-07-30T03:45:28Z","tags":["debug","elasticsearch","security","query"],"pid":31854,"message":"500\nGET /_security/privilege/kibana-.kibana\n"}
{"type":"log","@timestamp":"2020-07-30T03:45:28Z","tags":["error","plugins","security","authorization"],"pid":31854,"message":"Error registering Kibana Privileges with Elasticsearch for kibana-.kibana: [exception] Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting."}
Elasticsearch log:
org.elasticsearch.transport.RemoteTransportException: [es-3][10.194.37.252:10350][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.ElasticsearchException: Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting.
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:549) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:351) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.search.SearchService.lambda$executeQueryPhase$1(SearchService.java:343) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListener.lambda$map$2(ActionListener.java:146) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:58) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.2.jar:7.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
[2020-07-31T09:30:34,405][DEBUG][o.e.a.s.TransportSearchAction] [es-1] All shards failed for phase: [query]
org.elasticsearch.ElasticsearchException: Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting.
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:549) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:351) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.search.SearchService.lambda$executeQueryPhase$1(SearchService.java:343) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListener.lambda$map$2(ActionListener.java:146) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:58) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.2.jar:7.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
[2020-07-31T09:30:34,405][WARN ][r.suppressed ] [es-1] path: /.kibana_task_manager/_update_by_query, params: {ignore_unavailable=true, refresh=true, conflicts=proceed, index=.kibana_task_manager, max_docs=10}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:545) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:306) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:574) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:386) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.access$200(AbstractSearchAsyncAction.java:66) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:242) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.SearchExecutionStatsCollector.onFailure(SearchExecutionStatsCollector.java:73) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.search.SearchTransportService$ConnectionCountingHandler.handleException(SearchTransportService.java:423) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1130) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:244) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.InboundHandler.handleException(InboundHandler.java:242) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.InboundHandler.handlerResponseError(InboundHandler.java:234) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:137) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:103) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:667) [elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) [transport-netty4-client-7.6.2.jar:7.6.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1478) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1227) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:503) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) [netty-common-4.1.43.Final.jar:4.1.43.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.43.Final.jar:4.1.43.Final]
at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: org.elasticsearch.ElasticsearchException: Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting.
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:549) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:351) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.search.SearchService.lambda$executeQueryPhase$1(SearchService.java:343) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListener.lambda$map$2(ActionListener.java:146) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:58) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.2.jar:7.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:830) ~[?:?]
I don't think there's any problem here other than the one that the error message refers to:
Trying to create too many scroll contexts. Must be less than or equal to: [1024]. This limit can be set by changing the [search.max_open_scroll_context] setting.
Something on your cluster is creating a large number of scroll contexts that aren't being cleared up. There's nothing here that suggests that this is specific to the .security index, or that the cluster is working in any way other than how it is intended to work.
You may need to clear some/all of your scroll contexts.
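For example, you could check how many search contexts are open on each node and then clear all scroll contexts (a sketch using the host and port from the report above; open_contexts in the stats response is the per-node count):
curl -u elastic -XGET "http://10.194.39.75:10250/_nodes/stats/indices/search?pretty"
curl -u elastic -XDELETE "http://10.194.39.75:10250/_search/scroll/_all"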
Since this seems to be a user question and not a bug, I'm going to ask that we restrict the conversation to discuss. We use GitHub for tracking bugs and feature requests only, and this doesn't seem to fit that category.
I hope you don't mind if we close this issue, and continue the conversation on the forums instead. If that conversation uncovers some new information that demonstrates a real bug, then we can reopen this issue.
@tvernum Thanks for the reply.
I had already cleared all scroll contexts before running this.
That's why I reported it as an issue here.
I assume this issue is related to Kibana's security integration.
If you need more information, please let me know.
I have the same problem.
The error is still reported even after max_open_scroll_context is raised to a very large value.
Log:
{"statusCode":500,"error":"Internal Server Error","message":"[exception] Trying to create too many scroll contexts. Must be less than or equal to: [100000000]. This limit can be set by changing the [search.max_open_scroll_context] setting."}
This is a bug. The openScrollContexts counter is used to enforce the maximum number of scroll contexts, but the open_contexts value returned by "_nodes/data:true/stats/indices/search" never reaches the limit, even when search.max_open_scroll_context is raised to the maximum int value.
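For example, the stats check looks roughly like this (a sketch; the host and port are placeholders, and the relevant field is indices.search.open_contexts in each node's entry):
curl -XGET "http://localhost:9200/_nodes/data:true/stats/indices/search?pretty"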
@wushuaiwangyin I just upgraded the ES cluster from v7.6.2 to v7.8.1 and the Java high-level REST client from v7.4 to v7.8.1. So far, the problem has not reappeared. In my opinion, this is a bug. There were several scroll-context-related bug fixes between those releases, and one of them may have fixed this issue as well.
Where is the commit? This bug is sporadic. At present, only one of our several clusters has shown this problem.
You'd better check the git history of "server/src/main/java/org/elasticsearch/search/SearchService.java"
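For example (a sketch, assuming a local checkout of the upstream elasticsearch repository with the release tags available):
git log --oneline v7.6.2..v7.8.1 -- server/src/main/java/org/elasticsearch/search/SearchService.java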