The recovery steps at https://www.elastic.co/guide/en/kibana/6.5/release-notes-6.5.0.html for fixing the stuck Kibana issue need to be updated to include the Kibana keystore and the exact privileges required (all is not needed). Ideally, the precise steps needed to accomplish the workaround would be documented as well.
After upgrading from an older version of Kibana while using X-Pack security, if you get a permission error when you start Kibana for the first time, do the following steps to recover:
1. Delete the .kibana_1 and .kibana_2 indices that were created
2. Create a new role that grants the all permission for the .tasks index
3. Create a new user that has the kibana_system role as well as the new role you just created
4. Update elasticsearch.username and elasticsearch.password in kibana.yml with the details from that new user

This will be fixed in a future bug fix release, at which time you can go back to using the built-in kibana user.
The updated steps (with the keystore and narrower privileges) would read: after upgrading from an older version of Kibana while using X-Pack security, if you get a permission error when you start Kibana for the first time, do the following steps to recover:
1. Delete the .kibana_1 and .kibana_2 indices that were created
2. Create a new role that grants the create_index, create, and read permissions for the .tasks index
3. Create a new user that has the kibana_system role as well as the new role you just created
4. Update elasticsearch.username and elasticsearch.password in kibana.yml with the details from that new user
5. Remove elasticsearch.username and elasticsearch.password from the keystore using the kibana-keystore tool, then add these keys back to the keystore using the new user and password as values.

This will be fixed in a future bug fix release, at which time you can go back to using the built-in kibana user.
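The steps above could be sketched roughly as follows. This is a sketch, not text from the docs: the role and user names (`kibana_tasks_fix`, `kibana_user_fix`) and the password are made-up examples, and it assumes Elasticsearch on localhost:9200 with admin credentials in `AUTH`.

```shell
#!/bin/sh
# Hypothetical walk-through of the documented recovery steps.
ES="http://localhost:9200"
AUTH="elastic:changeme"   # replace with real admin credentials

if curl -s -u "$AUTH" "$ES" >/dev/null 2>&1; then
  # 1. Delete the half-migrated indices
  curl -s -u "$AUTH" -X DELETE "$ES/.kibana_1" "$ES/.kibana_2"

  # 2. Role granting only create_index, create, and read on .tasks
  #    ("kibana_tasks_fix" is an example name)
  curl -s -u "$AUTH" -X PUT "$ES/_xpack/security/role/kibana_tasks_fix" \
    -H 'Content-Type: application/json' \
    -d '{"indices":[{"names":[".tasks"],"privileges":["create_index","create","read"]}]}'

  # 3. User with the built-in kibana_system role plus the new role
  curl -s -u "$AUTH" -X PUT "$ES/_xpack/security/user/kibana_user_fix" \
    -H 'Content-Type: application/json' \
    -d '{"password":"a-strong-password","roles":["kibana_system","kibana_tasks_fix"]}'
else
  echo "Elasticsearch not reachable at $ES"
fi

# 4./5. From the Kibana install directory, point Kibana at the new user
# via the keystore instead of plaintext values in kibana.yml:
if [ -x bin/kibana-keystore ]; then
  bin/kibana-keystore remove elasticsearch.username
  bin/kibana-keystore remove elasticsearch.password
  echo "kibana_user_fix"   | bin/kibana-keystore add elasticsearch.username --stdin
  echo "a-strong-password" | bin/kibana-keystore add elasticsearch.password --stdin
fi
```

Once the migration completes under the new user, the temporary role and user can be deleted again.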
@JalehD @CamiloSierraH @shanksy01 @tvernum
Seeing similar issues with the docker image docker.elastic.co/kibana/kibana-oss:6.5 without xpack.
{"type":"log","@timestamp":"2018-11-15T21:39:26Z","tags":["info","migrations"],"pid":20,"message":"Creating index .kibana_2."}
{"type":"log","@timestamp":"2018-11-15T21:39:27Z","tags":["info","migrations"],"pid":20,"message":"Reindexing .kibana to .kibana_1"}
{"type":"error","@timestamp":"2018-11-15T21:39:27Z","tags":["fatal","root"],"pid":20,"level":"fatal","error":{"message":"[index_not_found_exception] no such index, with { resource.type=\"index_expression\" & resource.id=\".tasks\" & index_uuid=\"_na_\" & index=\".tasks\" }","name":"Error","stack":"[index_not_found_exception] no such index, with { resource.type=\"index_expression\" & resource.id=\".tasks\" & index_uuid=\"_na_\" & index=\".tasks\" } :: {\"path\":\"/_tasks/avbYwTDkTwKkHJRqHrBlCg%3A414586\",\"query\":{},\"statusCode\":404,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"index_not_found_exception\\\",\\\"reason\\\":\\\"no such index\\\",\\\"resource.type\\\":\\\"index_expression\\\",\\\"resource.id\\\":\\\".tasks\\\",\\\"index_uuid\\\":\\\"_na_\\\",\\\"index\\\":\\\".tasks\\\"}],\\\"type\\\":\\\"resource_not_found_exception\\\",\\\"reason\\\":\\\"task [avbYwTDkTwKkHJRqHrBlCg:414586] isn't running and hasn't stored its results\\\",\\\"caused_by\\\":{\\\"type\\\":\\\"index_not_found_exception\\\",\\\"reason\\\":\\\"no such index\\\",\\\"resource.type\\\":\\\"index_expression\\\",\\\"resource.id\\\":\\\".tasks\\\",\\\"index_uuid\\\":\\\"_na_\\\",\\\"index\\\":\\\".tasks\\\"}},\\\"status\\\":404}\"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickCallback (internal/process/next_tick.js:180:9)"},"message":"[index_not_found_exception] no such index, with { resource.type=\"index_expression\" & resource.id=\".tasks\" & index_uuid=\"_na_\" & index=\".tasks\" }"}
FATAL [index_not_found_exception] no such index, with { resource.type="index_expression" & resource.id=".tasks" & index_uuid="_na_" & index=".tasks" } :: {"path":"/_tasks/avbYwTDkTwKkHJRqHrBlCg%3A414586","query":{},"statusCode":404,"response":"{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_expression\",\"resource.id\":\".tasks\",\"index_uuid\":\"_na_\",\"index\":\".tasks\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"task [avbYwTDkTwKkHJRqHrBlCg:414586] isn't running and hasn't stored its results\",\"caused_by\":{\"type\":\"index_not_found_exception\",\"reason\":\"no such index\",\"resource.type\":\"index_expression\",\"resource.id\":\".tasks\",\"index_uuid\":\"_na_\",\"index\":\".tasks\"}},\"status\":404}"}
This also seems to occur when X-Pack security isn't used. Since I didn't pay for security, I assumed it would be disabled by default (apparently not). So, in short, I had X-Pack security unintentionally enabled during the upgrade, and the above happened. I tried to disable security, but Kibana still crashes on startup. So I seem to be in an unrecoverable state... or is there a fix for this scenario too?
Logs:
log [16:07:53.302] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active
log [16:07:53.887] [info][migrations] Creating index .kibana_2.
log [16:07:54.302] [info][migrations] Reindexing .kibana to .kibana_1
error [16:07:54.994] [fatal][root] [resource_not_found_exception] task [k3s35GjtR7ySZkTMIiumgQ:958521] isn't running and hasn't stored its results :: {"path":"/_tasks/k3s35GjtR7ySZkTMIiumgQ%3A958521","query":{},"statusCode":404,"response":"{\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"task [k3s35GjtR7ySZkTMIiumgQ:958521] isn't running and hasn't stored its results\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"task [k3s35GjtR7ySZkTMIiumgQ:958521] isn't running and hasn't stored its results\"},\"status\":404}"}
at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)
at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)
at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)
at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
FATAL [resource_not_found_exception] task [k3s35GjtR7ySZkTMIiumgQ:958521] isn't running and hasn't stored its results :: {"path":"/_tasks/k3s35GjtR7ySZkTMIiumgQ%3A958521","query":{},"statusCode":404,"response":"{\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"task [k3s35GjtR7ySZkTMIiumgQ:958521] isn't running and hasn't stored its results\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"task [k3s35GjtR7ySZkTMIiumgQ:958521] isn't running and hasn't stored its results\"},\"status\":404}"}
@kribor I was able to stop Kibana, delete the .kibana_1 and .kibana_2 indices, and then roll back to a previous version of Kibana.
For the scenario I described above, with this happening in open-source Kibana as well, the root cause seems to have been a read-only .kibana index. I removed read-only from all indices and Kibana starts fine:
curl -X PUT http://localhost:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'
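Worth noting: Elasticsearch sets `index.blocks.read_only_allow_delete` itself when disk usage crosses the flood-stage watermark, so the block can reappear until disk space is freed. A sketch for inspecting which indices carry the block and clearing it with `null` (which removes the setting entirely rather than pinning it to `false`); assumes Elasticsearch on localhost:9200 without security:

```shell
ES="http://localhost:9200"

# Show which indices currently have the read-only-allow-delete block set:
curl -s "$ES/_all/_settings/index.blocks.read_only_allow_delete?pretty" \
  || echo "Elasticsearch not reachable at $ES"

# Clear the block; null removes the setting instead of overriding it:
curl -s -X PUT "$ES/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index.blocks.read_only_allow_delete": null }' \
  || echo "Elasticsearch not reachable at $ES"
```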
This also seems to occur when xpack security isn't used.
This particular problem cannot happen with security off - it's specifically caused by a mismatch between the required permissions and the actual permissions.
There may be other things that trigger the same symptoms, but they would have a different underlying cause.
If you see an error like
[resource_not_found_exception] task [random-task-identifier] isn't running and hasn't stored its results
you need to check the Elasticsearch logs and see what the actual cause is. The Kibana logs are unlikely to provide you with details on the root cause.
Thanks @tvernum!
Taking a look at the ES logs, I determined I had an overly broad index template (e.g. "index_patterns": ["*"]) that conflicted with what Kibana was attempting to do. I was able to update my template to be more specific, and then the Kibana update to 6.5 went through.
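For reference, this kind of conflict happens when a catch-all template matches the .kibana_N indices the migrator creates and overrides their settings or mappings. A hypothetical sketch of finding and narrowing such a template; the template name `logs_default` and its body are made up:

```shell
ES="http://localhost:9200"

# List installed templates and their index_patterns to spot catch-alls:
curl -s "$ES/_template?pretty" || echo "Elasticsearch not reachable at $ES"

# Re-create the overly broad template with a pattern that no longer
# matches Kibana's internal indices ("logs_default" is a made-up name):
curl -s -X PUT "$ES/_template/logs_default" \
  -H 'Content-Type: application/json' \
  -d '{ "index_patterns": ["logstash-*"], "settings": { "number_of_shards": 1 } }' \
  || echo "Elasticsearch not reachable at $ES"
```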
https://github.com/elastic/kibana/pull/25761 fixes this; the public docs have been updated.
"Delete the .kibana_1 and .kibana_2 indices that were created"
This is a potentially harmful instruction, gentlemen; it should not exist at all. By following it, you will probably lose your dashboards, visualizations, saved searches, etc.
And no, _xpack.security.enabled_ is not needed for this to happen at all.
This is what a Kibana upgrade looks like, at least from 5.8.0 to 5.8.3.
In /etc/kibana/kibana.yml
kibana.index: ".kibana"
----- The migration begins here -----
That's it!
The result is that .kibana_1 is an index and .kibana is an alias.
If you drop .kibana_1, your Kibana settings are gone.
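Before deleting any .kibana* index, it is worth confirming which name is the concrete index and which is only an alias. A quick check, assuming Elasticsearch on localhost:9200:

```shell
ES="http://localhost:9200"

# If .kibana appears here, it is an alias pointing at .kibana_1:
curl -s "$ES/_cat/aliases/.kibana?v" || echo "Elasticsearch not reachable at $ES"

# The concrete indices (and their doc counts) behind it:
curl -s "$ES/_cat/indices/.kibana*?v" || echo "Elasticsearch not reachable at $ES"
```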