Kibana: [migration v6.5] Another Kibana instance appears to be migrating the index

Created on 9 Nov 2018 · 28 comments · Source: elastic/kibana

Kibana version: 6.5.0

Elasticsearch version: 6.5.0

Server OS version:

Browser version:

Browser OS version:

Original install method (e.g. download page, yum, from source, etc.):
From source

Describe the bug:
I'm using Elasticsearch and Kibana v6.4.3 and I'm testing a migration to v6.5.0.
When I start Kibana for the first time on v6.5.0, I stop the process during the migration and end up with an empty browser page for Kibana.

Steps to reproduce:
To reproduce, I start Kibana and stop it when the logs are at this stage:

  log   [14:00:01.131] [info][migrations] Creating index .kibana_2.
  log   [14:00:01.221] [info][migrations] Reindexing .kibana to .kibana_1

At that point the response of the cat aliases request is:

.security .security-6 - - -
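
For reference, that output comes from a cat aliases request along these lines (a sketch, assuming Elasticsearch on localhost:9200 with security enabled for the elastic user, as in the delete commands further down):

# list all aliases; right after the interrupted migration only .security shows up
curl -XGET 'http://localhost:9200/_cat/aliases?v' -u elastic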

If I then try to restart the Kibana service, I get an empty page in the browser and this message in the logs:

log   [14:00:20.457] [warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana.


I delete the .kibana_2 index as mentioned in the logs, using this curl request:

curl -XDELETE 'http://localhost:9200/.kibana_2' --header "content-type: application/json" -u elastic

I restart Kibana and get this message:

[warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

I delete the .kibana_1 index as mentioned in the logs, using this curl request:

curl -XDELETE 'http://localhost:9200/.kibana_1' --header "content-type: application/json" -u elastic

Before deleting the .kibana_1 index, shouldn't we verify that the .kibana index still exists on my Elasticsearch server?
I ask this because, if I understand correctly, .kibana_1 is the copy of .kibana, and .kibana is deleted when the migration is finished. So if I delete .kibana_1 as requested and .kibana was already deleted, I may lose all the dashboards/visualizations I have stored. Am I right?
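
One way to check this before deleting anything (a sketch, assuming the same localhost setup and credentials as above):

# does a concrete .kibana index (or any .kibana_N index) still exist?
curl -XGET 'http://localhost:9200/_cat/indices/.kibana*?v' -u elastic
# is .kibana an alias, and if so, which index does it point to?
curl -XGET 'http://localhost:9200/_cat/aliases/.kibana?v' -u elastic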

I restart Kibana and this time everything works. Kibana is back in the browser, and the logs show:

[migration] finished in 688ms.

@bhavyarm
Expected behavior:

Screenshots (if relevant):

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):

Any additional context:

Saved Objects Core bug

Most helpful comment

Hello, the instructions provided by @Timmy93 are destructive and you will lose all the Dashboards and Visualizations.

The migration process is explained at this documentation page.

All 28 comments

Pinging @elastic/kibana-operations

@CamiloSierraH - thanks for the report. This issue is caused by stopping the process which is in charge of handling the migration. This "locking" is to handle having multiple Kibana instances.

With this issue I see two possible problems:

  • We should add a note to "Reindexing .kibana to .kibana_X" stating not to stop the Kibana process.
  • The index name in the messaging for subsequent retries during the re-index might not be correct.
    > log [14:00:20.457] [warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana.

There is more information available here: https://www.elastic.co/guide/en/kibana/current/upgrade-migrations.html

same issue in a docker/dev environment

Same issue - upgraded from 6.4.0 to 6.5.0 using DEB package - appears to be stuck on "Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."

Deleting .kibana_2 and restarting causes the same thing to happen; it gets stuck on the message above.

Kibana UI says "Kibana server is not ready yet" -- cannot access /status either, same message.

Same issue as @lnx01 upgrade from 6.4.x to 6.5.0

I have the same issue and was working on my test instance. Currently, I have no access to Kibana.
Have you got a quick solution to get my UI back? Is it possible to downgrade the ELK stack, or just Kibana?

@gmeriaux you need to follow these steps to get your Kibana instance back -> https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html#known-issues-6.5.0

@gmeriaux I had success with just downgrading kibana and removing the indexes .kibana_1 and .kibana_2

@CamiloSierraH, @gheffern
thanks!!!
I am having trouble upgrading in the Windows environment (6.4.3 ⇨ 6.5.0).

I deleted the .kibana_2 index after starting Kibana.

There was no problem with version 6.5.1.

I had success upgrading in the Windows environment (6.4.3 ⇨ 6.5.1).

Found a similar issue while upgrading. It turned out to be related to a closed .tasks index. Kibana was failing with an index_closed_exception; this index is not usually used by Kibana (it was closed automatically by Curator a _long_ time ago).

I noticed that Kibana should be fully stopped before deleting the indices. Although Kibana remained slow for a few minutes right after restarting (perhaps to rebuild both indices), it came up with all the data intact.

$ curl -XGET "https://localhost:9200/_cat/indices" | grep kibana
...
green open .kibana_2 kVo3hhokTP2hVUSfmPkGVA 1 0 181 0 184.2kb 184.2kb
green open .kibana_1 mHhRaCqKStq6bL1qRLxMVA 1 0 178 0 170.9kb 170.9kb
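
For anyone who hits the same index_closed_exception, reopening the closed index before restarting Kibana should let the migration read it again; a minimal sketch, assuming the closed index is .tasks as in the case above:

# reopen the closed index so the migration no longer fails with index_closed_exception
curl -XPOST 'http://localhost:9200/.tasks/_open'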

I have the error message: "Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."
After I use curl -XDELETE http://localhost:9200/.kibana_1 to delete the index and restart Kibana, I get the same message.
The versions of the ELK stack are all 6.5.4.

I've faced the same problem while upgrading from 6.4.2 to 6.5.4

I had the same problem migrating from 6.4.3 to 6.6.0.

I solved it by deleting the 3 indexes (.kibana, .kibana_1 and .kibana_2) and restarting the Kibana service.

I used the following command from a Linux bash shell:
curl -X DELETE "localhost:9200/.kibana_2" && curl -X DELETE "localhost:9200/.kibana_1" && curl -X DELETE "localhost:9200/.kibana"

Hello, the instructions provided by @Timmy93 are destructive and you will lose all the Dashboards and Visualizations.

The migration process is explained at this documentation page.

I was upgrading Kibana from 6.4.0 to 6.7.1. I had the same issue, so I deleted the .kibana_N indices. As @lucabelluccini mentioned, I lost all of my dashboards and index patterns.
I'm planning to upgrade once more to 7.0.0, but I really don't want to lose Kibana objects again. Is there any way to deal with this issue without deleting indices?
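
Not an answer given in this thread, but one common precaution is to snapshot the Kibana indices before upgrading so the saved objects can be restored if a migration goes wrong. A minimal sketch, assuming a snapshot repository has already been registered (the repository name "backup" is a placeholder):

# snapshot only the Kibana saved objects indices before starting the upgrade
curl -XPUT 'localhost:9200/_snapshot/backup/pre-upgrade?wait_for_completion=true' -H 'Content-Type: application/json' -d'
{
  "indices": ".kibana*",
  "include_global_state": false
}'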

I just found out there was another kibana instance on the same es cluster. All set after upgrading the rest! This was my bad. Plz plz plz make sure you don't have any other instance on the same cluster.

Same issue on upgrade, but now creating the index pattern is taking forever, to the point that I'm concerned it isn't working at all.

@notque - for that issue I would recommend inspecting the ES logs as it's unrelated to Kibana migrations.

I ran into this issue as well when upgrading from 6.6.2 to 6.8.0 on 1 of 3 Kibana instances using the same ES cluster.

After stopping Kibana on all 3 and deleting the .kibana_2 index, I started the updated instance and kept seeing this in the logs:

kibana[8682]: {"type":"log","@timestamp":"2019-06-18T18:34:46Z","tags":["info","migrations"],"pid":8682,"message":"Creating index .kibana_2."}
kibana[8682]: {"type":"log","@timestamp":"2019-06-18T18:34:46Z","tags":["info","migrations"],"pid":8682,"message":"Migrating .kibana_1 saved objects to .kibana_2"}
kibana[8765]: {"type":"log","@timestamp":"2019-06-18T18:34:55Z","tags":["info","migrations"],"pid":8765,"message":"Creating index .kibana_2."}
kibana[8765]: {"type":"log","@timestamp":"2019-06-18T18:34:55Z","tags":["warning","migrations"],"pid":8765,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}

Instead of deleting the .kibana_2 index again, I updated the alias for .kibana to point to .kibana_2. This ended up solving the issue for me:

curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d'
{
    "actions" : [
        { "remove" : { "index" : ".kibana_1", "alias" : ".kibana" } },
        { "add" : { "index" : ".kibana_2", "alias" : ".kibana" } }
    ]
}
'
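
After the alias update, the change can be verified (assuming the same cluster address) by checking where the .kibana alias now points; it should list .kibana_2:

curl -XGET 'localhost:9200/_cat/aliases/.kibana?v'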

Just out of curiosity: I restarted Kibana too soon after an upgrade from 7.0 to 7.2 and got stuck on "Kibana server is not ready" (and finally found the log entry that another Kibana instance appears…). Fortunately the message suggested which index I should delete.

It would be really nice if Kibana could pick up the migration by itself.

This stuck state can also happen if Elasticsearch shard allocation is disabled when Kibana is upgraded.
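
In that case, re-enabling allocation should let the new migration index get its shards assigned so the migration can finish; a sketch, assuming allocation was disabled via the usual cluster setting:

# re-enable shard allocation (it may have been set to "none" or "primaries" before the upgrade)
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'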

I had the same error (Kibana 6.8.2). Three indices existed on my site: .kibana, .kibana_1 and .kibana_2. Followed the steps below:

1. Stop the Kibana service.
2. Delete the .kibana_2 and .kibana indices:
curl -XDELETE localhost:9200/.kibana_2
curl -XDELETE localhost:9200/.kibana
3. Point the .kibana alias at the .kibana_1 index:
curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d' { "actions" : [ { "add" : { "index" : ".kibana_1", "alias" : ".kibana" } } ] }'
4. Restart the Kibana service.
Kibana loaded again successfully.

Pinging @elastic/kibana-platform (Team:Platform)

[elastic@sjc-v2-elk-l01 ~]$ curl localhost:9200
{
  "name" : "master-1",
  "cluster_name" : "sjc-v2",
  "cluster_uuid" : "g-MOWUQGQMmgOUaCP0cdYA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

LOGS

log [03:47:03.301] [info][savedobjects-service] Starting saved objects migrations
log [03:47:03.312] [info][savedobjects-service] Creating index .kibana_task_manager_1.
log [03:47:03.316] [info][savedobjects-service] Creating index .kibana_1.
Could not create APM Agent configuration: Request Timeout after 30000ms
log [03:47:33.313] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms
log [03:47:35.817] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/6jHlllmtTmGSJI3vco_KJw] already exists, with { index_uuid="6jHlllmtTmGSJI3vco_KJw" & index=".kibana_task_manager_1" }
log [03:47:35.818] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana.
log [03:47:35.828] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/xvwnY15cQaStFRV-qjMbaA] already exists, with { index_uuid="xvwnY15cQaStFRV-qjMbaA" & index=".kibana_1" }
log [03:47:35.829] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

I am getting the same error. I deleted the indices below and restarted, but I get the same error.

[elastic@sjc-v2-elk-l01 ~]$ curl localhost:9200/_cat/indices
red open .kibana_task_manager_1 6jHlllmtTmGSJI3vco_KJw 1 1
red open .apm-agent-configuration uD5uuI-nQa-qucX3wx3HIQ 1 1
red open .kibana_1 xvwnY15cQaStFRV-qjMbaA 1 1

Hello @shemukr
The indices are in a red state and the problem does not seem related to the saved objects migration.
Please reach out on http://discuss.elastic.co/ (with the output of GET _cluster/allocation/explain).
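
For reference, the allocation explain output mentioned above can be gathered like this (assuming the same local cluster; without a request body it explains the first unassigned shard it finds):

curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'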

This is currently the expected behavior when a migration fails, but it will be improved by #52202, which will allow automated retries of failed migrations without user intervention.
