Kibana version: 6.5.0 BC1
Elasticsearch version: 6.5.0 BC1
Server OS version: darwin_x86_64
Original install method (e.g. download page, yum, from source, etc.): from staging
Describe the bug: If you restart Kibana/Elasticsearch simultaneously, Kibana crashes with a fatal error. If you start Kibana again it comes up without any issues.
Kibana logs:
log [21:38:00.609] [error][status][plugin:[email protected]] Status changed from red to red - all shards failed: [search_phase_execution_exception] all shards failed
error [21:38:00.623] [fatal][root] [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
at respond (/Users/bhavyarajumandya/Desktop/release_6.5.0_BC1/kibana-6.5.0-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:308:15)
at checkRespForFailure (/Users/bhavyarajumandya/Desktop/release_6.5.0_BC1/kibana-6.5.0-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:267:7)
at HttpConnector.<anonymous> (/Users/bhavyarajumandya/Desktop/release_6.5.0_BC1/kibana-6.5.0-darwin-x86_64/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)
at IncomingMessage.wrapper (/Users/bhavyarajumandya/Desktop/release_6.5.0_BC1/kibana-6.5.0-darwin-x86_64/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
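For readability, this is the same query body from the failing `/.kibana/doc/_count` request, unescaped and pretty-printed (it counts index-pattern saved objects whose `migrationVersion.index-pattern` is not yet `6.5.0`, i.e. objects still pending migration):

```json
{
  "query": {
    "bool": {
      "should": [
        {
          "bool": {
            "must": [
              { "exists": { "field": "index-pattern" } },
              {
                "bool": {
                  "must_not": {
                    "term": { "migrationVersion.index-pattern": "6.5.0" }
                  }
                }
              }
            ]
          }
        }
      ]
    }
  }
}
```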
Elasticsearch logs:
[2018-11-01T17:38:00,581][WARN ][r.suppressed ] [fz4Hevp] path: /.kibana/doc/_count, params: {index=.kibana, type=doc}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:210) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:189) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.5.0.jar:6.5.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) [?:?]
at java.lang.Thread.run(Thread.java:844) [?:?]
[2018-11-01T17:38:00,604][WARN ][r.suppressed ] [fz4Hevp] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:224) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:203) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:97) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:61) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:143) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:126) [x-pack-security-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:395) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:487) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.rest.action.document.RestGetAction.lambda$prepareRequest$0(RestGetAction.java:81) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:97) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:72) [x-pack-security-6.5.0.jar:6.5.0]
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:239) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:335) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:173) [elasticsearch-6.5.0.jar:6.5.0]
at org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:545) [transport-netty4-client-6.5.0.jar:6.5.0]
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:137) [transport-netty4-client-6.5.0.jar:6.5.0]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68) [transport-netty4-client-6.5.0.jar:6.5.0]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-common-4.1.30.Final.jar:4.1.30.Final]
at java.lang.Thread.run(Thread.java:844) [?:?]
@tylersmalley / cc @LeeDr thanks!
@elastic/kibana-qa
@bhavyarm is this reproducible? I have not been able to reproduce by bouncing ES.
Can you elaborate on what you mean by restarting them simultaneously?
@tylersmalley I tested on BC3 just now and I couldn't reproduce it. I meant running bin/elasticsearch and bin/kibana in two terminal windows without waiting for Elasticsearch to come up. Thanks!
@tylersmalley it happened again while I was on the ML UI. But unfortunately I cannot reproduce it consistently.
Oh, so simultaneously starting the two instances - not restarting. Are you starting with ES data?
It doesn't happen the first time I start Elasticsearch and Kibana. It happens on subsequent starts.
The problem here is that it's possible ES is up, but the .kibana index's shards have not yet been allocated. Working on a fix to allow retries when checking the migration state.
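A minimal sketch of that idea (not the actual PR code; the `callCluster` signature, attempt count, and delay are hypothetical): instead of failing fatally on the first `_count` error, retry until the .kibana shards become available.

```ts
// Sketch only: retry the migration-state count instead of dying on the
// first 503 while the .kibana shards are still unallocated.
async function countWithRetry(
  callCluster: (path: string, body: object) => Promise<{ count: number }>,
  body: object,
  maxAttempts = 10,
  delayMs = 2500
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const resp = await callCluster('/.kibana/doc/_count', body);
      return resp.count;
    } catch (err) {
      // A 503 / search_phase_execution_exception here usually means the
      // index's shards are not yet assigned; wait and try again.
      if (attempt === maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('unreachable');
}
```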
I have opened a PR here: https://github.com/elastic/kibana/pull/25255
Hey guys, is there any news regarding this issue and its PR?
Currently using the official Docker files on version 6.6.0 in a docker-compose setup and hitting the same issue as @bhavyarm :(
@qmontal the PR was merged into 6.6.0 and should have fixed the issue. Since this one is closed, it's probably best to submit a new issue with all your details, logs, etc. You could reference this one if you think it's the same issue.
Hi @LeeDr!
Thank you for your fast reply. I was actually testing with both 6.5.2 and 6.6.0, and I noticed that even though the error I thought triggered this issue ([search_phase_execution_exception] all shards failed) still happens randomly with both versions, in 6.6.0 it no longer breaks Kibana, so I guess it is indeed fixed.
Thanks for the confirmation!
I think I just saw this in Kibana 6.7.0:
kibana_1 | FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
kibana_1 |
esd_kibana-6.7.0 exited with code 1
Restarting Kibana cleared it.
Just got this in K7
FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"graph-workspace\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.graph-workspace\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.0.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
I had a similar problem. It turns out that Kibana tried to start while the shards in the ES cluster weren't all assigned.
You can check that with:
http://localhost:9200/_cluster/health?pretty
http://localhost:9200/_cat/shards?v
Once all the shards were allocated in ES I was able to start up Kibana.
I suspect this isn't necessarily a bug in Kibana but rather just an exception thrown by Kibana as it can't query ES because the shards aren't assigned.
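If it helps, here is a rough sketch (not Kibana code; the URL, index name, and timeout are example values) of how one might wait for the .kibana shards to be assigned before starting Kibana, using the cluster health API mentioned above:

```ts
// Sketch: poll cluster health for the .kibana index and return only once
// its primary shards are allocated (status at least "yellow").
async function waitForKibanaIndex(esUrl = 'http://localhost:9200'): Promise<void> {
  for (;;) {
    // wait_for_status blocks up to the timeout until the requested status is reached.
    const resp = await fetch(
      `${esUrl}/_cluster/health/.kibana?wait_for_status=yellow&timeout=30s`
    );
    const health: { status: string; timed_out: boolean } = await resp.json();
    if (!health.timed_out && health.status !== 'red') {
      return; // primaries are assigned; Kibana's startup queries should succeed now
    }
    // Still red after the timeout; keep polling.
  }
}
```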