Kibana: Kibana 7+ can't search saved_objects, FieldData error thrown on .kibana index

Created on 2 Jul 2019 · 20 Comments · Source: elastic/kibana

Kibana version: 7.1.1

Elasticsearch version: 7.1.1

Original install method (e.g. download page, yum, from source, etc.):
Installed from scratch with yum, generic configuration, 100% automatic.
3-node Elasticsearch cluster.

Describe the bug:

Kibana seems to have an issue with its .kibana index at creation:
when trying to access the Saved Objects page, Kibana returns a 400 Bad Request error, and Elasticsearch throws a FieldData error on the .kibana index.

I can create and find my index-patterns using the API, but Kibana isn't able to find them, as its search request gets the FieldData exception.

NOTE: This issue seems a bit random; it happened on one of the three clusters I created today (since we moved to 7+), all created the same way with scripts.

NOTE: I found a post on the Elastic forums where 6+ people seem to have the same behavior since 7+:
https://discuss.elastic.co/t/kibana-7-cant-load-index-pattern/180167

I'll create more clusters tomorrow to observe the frequency of this issue more closely.

Provide logs and/or server output (if relevant):

Elasticsearch log when I refresh the Saved Objects page:

[2019-07-02T11:08:48,327][DEBUG][o.e.a.s.TransportSearchAction] [elastic01] [.kibana][0],
node[RmpqDbnZTMmmrGTVe5sOZA], [R], s[STARTED], a[id=UOCFUQwpREy44aF76avXfw]:
 Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.kibana], 
indicesOptions=IndicesOptions[ignore_unavailable=false,

...

Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default.
Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted 
index. Note that this can however use significant memory. Alternatively use a keyword field 
instead. 

The index-pattern is present in the saved objects and a curl GET works, but Kibana can't find it as it gets hit by the FieldData error:
curl -X GET "http://localhost:5601/api/saved_objects/index-pattern/filebeat-ulf" -H 'kbn-xsrf: true'

{"id":"filebeat-ulf","type":"index-pattern","updated_at":"2019-07-02T11:07:17.553Z","version":"WzUsMV0=","attributes":{"title":"filebeat-7.1.1-ulf-*","timeFieldName":"@timestamp"},"references":[],"migrationVersion":{"index-pattern":"6.5.0"}}

Saved Objects Core bug


All 20 comments

Pinging @elastic/kibana-platform

#39288 seems to be facing the same issue.

And another forum topic:
https://discuss.elastic.co/t/not-possible-to-create-index-patterns-in-kibana/185591/2

where the user fixed it either by:

  • manually setting fielddata=true on the type field of the ".kibana" index (a sketch of this workaround follows the list), or
  • manually editing the Elasticsearch mapping for Kibana and reloading the .kibana index
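
A minimal sketch of the first workaround, assuming an unsecured cluster reachable at localhost:9200; it only masks the symptom by enabling fielddata on the wrongly text-mapped type field rather than fixing the mapping:

# Enable fielddata on the (incorrectly) text-mapped "type" field of .kibana.
# This is a stopgap: it loads fielddata into heap instead of restoring the keyword mapping.
curl -X PUT "http://localhost:9200/.kibana/_mapping" \
  -H 'Content-Type: application/json' \
  -d '{"properties": {"type": {"type": "text", "fielddata": true}}}'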

I created another cluster (the 4th); same problem again.

I tried to stop Kibana, delete the .kibana index, and start Kibana again. Here are the Elasticsearch logs:

  • Deleting index
[2019-07-03T03:02:16,659][INFO ][o.e.c.m.MetaDataDeleteIndexService] [elastic01]
[.kibana/1Z8-n6nCSza4pm2HXtWG_Q] deleting index
  • Starting Kibana
[2019-07-03T03:03:15,155][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elastic01]
 adding template [.management-beats] for index patterns [.management-beats]

[2019-07-03T03:03:15,820][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic01] 
[.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []

[2019-07-03T03:03:15,944][INFO ][o.e.c.m.MetaDataMappingService] [elastic01] 
[.kibana/x0ymkiGpRxWJA_rMJ-T3Nw] create_mapping [_doc]

[2019-07-03T03:03:15,945][INFO ][o.e.c.m.MetaDataMappingService] [elastic01] 
[.kibana/x0ymkiGpRxWJA_rMJ-T3Nw] update_mapping [_doc]

[2019-07-03T03:03:16,021][INFO ][o.e.c.r.a.AllocationService] [elastic01] 
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana][0]] ...]).
[2019-07-03T03:03:37,218][INFO ][o.e.c.m.MetaDataMappingService] [elastic01]
[.kibana/x0ymkiGpRxWJA_rMJ-T3Nw] update_mapping [_doc]

[2019-07-03T03:03:55,567][DEBUG][o.e.a.s.TransportSearchAction] [elastic01] [.kibana][0],
node[UKPhnQePR6-3EJMobt8mbw], [R], s[STARTED], a[id=oVInWbneRLicfKSIqL_uwA]: 
Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.kibana], 
indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true, 
expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, 
forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', 
preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=0, 
batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null,
getOrCreateAbsoluteStartMillis=-1, ccsMinimizeRoundtrips=true, source={"from":0,"size":20,"query":
{"bool":{"filter":[{"bool":{"should":[{"bool":{"must":[{"term":{"type":{"value":"index-
pattern","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":
{"type":{"value":"visualization","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":
{"type":{"value":"dashboard","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":
{"type":{"value":"search","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"
minimum_should_match":"1","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"seq_no_primary_ter
m":true,"_source":{"includes":["index-pattern","visualization","dashboard","search.title","index-
pattern","visualization","dashboard","search.id","namespace","type","references","migrationVersion",
"updated_at","title","id"],"excludes":[]},"sort":[{"type":
{"order":"asc","unmapped_type":"keyword"}}],"track_total_hits":2147483647}}]

org.elasticsearch.transport.RemoteTransportException: [elastic03][x.x.x.x:9300]
[indices:data/read/search[phase/query]]

Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default.
Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index.
Note that this can however use significant memory. Alternatively use a keyword field instead.

Edit:
I created another cluster (the 5th, same script from scratch, VM creation included) and no error this time :thinking: I'll try to see whether an election issue can cause this.

Edit 2:
Cluster no. 6 had the issue again (same script from scratch, VM creation included).

On node 3, I can see interesting logs:

The node had some errors on its first attempt at master election/joining, but it still succeeded and bootstrapped; then the node reported an error when creating the .kibana index alias.

I removed the node IDs / {ml.machine_memory=...., xpack.installed=true} from the logs to reduce noise and make them more readable.

[2019-07-03T03:57:29,167][INFO ][o.e.c.c.JoinHelper] [elastic03] 
failed to join {elastic01} {x.x.x.x}{x.x.x.x:9300}
with JoinRequest{sourceNode={elastic03}{y.y.y.y} {y.y.y.y:9300}, 
optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode=
{elastic03}{y.y.y.y}{y.y.y.y:9300}, targetNode={elastic01}{x.x.x.x}{x.x.x.x:9300}}]}

org.elasticsearch.transport.NodeNotConnectedException: [elastic01][x.x.x.x:9300] Node not connected
        at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151) 
        ....

[2019-07-03T03:57:29,179][INFO ][o.e.c.c.Coordinator] [elastic03] 
setting initial configuration to VotingConfiguration{ID elastic01 ,{bootstrap-
placeholder}-elastic02,ID elastic03}

[2019-07-03T03:57:29,180][INFO ][o.e.c.c.JoinHelper] [elastic03] 
failed to join {elastic01}{x.x.x.x}{x.x.x.x:9300} 
with JoinRequest{sourceNode={elastic03}{y.y.y.y}{y.y.y.y:9300}, 
optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode=
{elastic03}{y.y.y.y}{y.y.y.y:9300}, targetNode={elastic01}{x.x.x.x}{x.x.x.x:9300}}]}

org.elasticsearch.transport.NodeNotConnectedException: [elastic01][x.x.x.x:9300] Node not connected
        at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151) 
        ....

[2019-07-03T03:57:29,318][INFO ][o.e.c.s.MasterService] [elastic03] 
elected-as-master ([2] nodes joined)[{elastic03}{y.y.y.y}{y.y.y.y:9300} elect leader,
{elastic01}{x.x.x.x}{x.x.x.x:9300} elect leader,
 _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 1, reason: master node changed
{previous [], current [{elastic03}{y.y.y.y}{y.y.y.y:9300}}]}, added {{elastic01}{x.x.x.x}{x.x.x.x:9300},}

[2019-07-03T03:57:29,410][INFO ][o.e.c.c.CoordinationState] [elastic03]
 cluster UUID set to [oQs2zr6XTM6spzQSvJ079w]

[2019-07-03T03:57:29,463][INFO ][o.e.c.s.ClusterApplierService] [elastic03]
 master node changed {previous [], current [{elastic03}{y.y.y.y}{y.y.y.y:9300}]}, 
added {{elastic01}{x.x.x.x}{x.x.x.x:9300},}, term: 2, version: 1, reason: Publication{term=2, version=1}

[2019-07-03T03:57:29,538][INFO ][o.e.h.AbstractHttpServerTransport] [elastic03]
publish_address {y.y.y.y:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}, {y.y.y.y:9200}

[2019-07-03T03:57:29,539][INFO ][o.e.n.Node] [elastic03] 
started

[2019-07-03T03:57:29,559][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [elastic03]
 Failed to clear cache for realms [[]]

[2019-07-03T03:57:29,618][INFO ][o.e.g.GatewayService] [elastic03] 
recovered [0] indices into cluster_state

...

[2019-07-03T03:57:30,255][INFO ][o.e.c.s.MasterService] [elastic03] 
node-join[{elastic02}{z.z.z.z}{z.z.z.z:9300} join existing leader], term: 2, version: 8, reason: added
{{elastic02}{z.z.z.z}{z.z.z.z:9300},}

[2019-07-03T03:57:30,543][INFO ][o.e.c.s.ClusterApplierService] [elastic03] 
added {{elastic02}{z.z.z.z}{z.z.z.z:9300},}, term: 2, version: 8, reason: Publication{term=2, version=8}

[2019-07-03T03:57:30,749][INFO ][o.e.l.LicenseService] [elastic03] 
license [] mode [basic] - valid

The cluster is now bootstrapped, but .kibana throws an error:

[2019-07-03T03:57:52,002][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic03]
 [.kibana_task_manager] creating index, cause [auto(bulk api)], templates [.kibana_task_manager], shards
[1]/[1], mappings [_doc]

[2019-07-03T03:57:53,018][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic03] 
[.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]

[2019-07-03T03:57:53,279][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic03]
 [.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []

[2019-07-03T03:57:53,382][DEBUG][o.e.a.a.i.a.TransportIndicesAliasesAction] [elastic03]
 failed to perform aliases

org.elasticsearch.indices.InvalidAliasNameException: Invalid alias name [.kibana], 
an index exists with the same name as the alias
        at org.elasticsearch.cluster.metadata.AliasValidator.validateAlias(AliasValidator.java:93) 
       ...

@tbuchier thanks very much for the detailed bug report!

Just to confirm: you've got a cluster of 3 ES nodes; how many Kibana nodes are you running, or is it only a single one?

We bootstrap the cluster from a golden image that has Kibana + Elasticsearch.

So 3 Kibanas are running (we may disable one and keep 2 for HA / load balancing later on).

The Elasticsearch data folders are completely cleaned before instantiation (for a correct bootstrap),
but maybe not /var/lib/kibana, which contains the UUID, so the instances may share the same one. But that should only affect monitoring, right?

Could you post the logs of all three Kibana instances for a cluster that's in this error state?

I won't have access to the environment until Monday.
I remember that nothing was logged (as I had logging.quiet: true).
I'll post the Kibana logs on Monday.

I've found 3 other topics on the Elastic forums with users who seem to face the same issue:

All on 7+, stuck in an infinite loop of index-pattern creation via the UI, as the UI can't find the object saved in the index.

https://discuss.elastic.co/t/created-index-pattern-is-not-visible-in-kibana-7-0-1/184098/

https://discuss.elastic.co/t/i-cant-create-indexes-patterns-with-eck/184194/

https://discuss.elastic.co/t/kibana-7-0-1-wont-lad-index-pattern/187934/

It appears as if some kind of race condition leads to the .kibana index having a mapping of {"type": {"type": "text"}} instead of {"type": {"type": "keyword"}}

I've tried numerous runs of creating a 3 node ES + Kibana cluster on my local machine but haven't been able to reproduce the mapping for the "type" property being set to "text".

I can confirm that manually creating a mapping with {"type": {"type": "text"}} produces the symptoms described in this issue and the linked discuss threads, like the "Fielddata is disabled on text fields by default" error.
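
A minimal sketch of that manual reproduction, assuming an unsecured local cluster and a throwaway index name (kibana-mapping-repro is made up); sorting on a text field without fielddata raises the same exception the saved objects query hits:

# Create a scratch index whose "type" field gets the bad text mapping.
curl -X PUT "http://localhost:9200/kibana-mapping-repro" \
  -H 'Content-Type: application/json' \
  -d '{"mappings": {"properties": {"type": {"type": "text"}}}}'

# Index a document, then sort on "type" the way the saved objects client does;
# this fails with "Fielddata is disabled on text fields by default".
curl -X POST "http://localhost:9200/kibana-mapping-repro/_doc?refresh=true" \
  -H 'Content-Type: application/json' \
  -d '{"type": "index-pattern"}'

curl -X GET "http://localhost:9200/kibana-mapping-repro/_search" \
  -H 'Content-Type: application/json' \
  -d '{"sort": [{"type": {"order": "asc", "unmapped_type": "keyword"}}]}'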

Thank you so much for the detailed debugging help @tbuchier! Still reading through it, but out of curiosity, do you ping the Kibana server in a loop to figure out if it's started up in your script?

I've seen this happen once before, and the random factor implies to me that it's some kind of race condition, but what could be racing? I'm assuming that it's the migration completion racing against a request coming into the Kibana server, which (if security is enabled) attempts to load the uiSettings service, which will auto-create the config saved object before the .kibana index is actually created, causing the index to be created by auto-mapping the input and using {"type": "text"} for the type field...

This didn't use to be possible because we didn't even accept HTTP requests until migrations had completed, but with the transition to the new platform that order of operations has changed slightly and now the migrations are executed after HTTP has started, meaning that routes can be triggered before the savedObjects service is actually usable, potentially causing issues based on timing.

edit: one way we could verify this is by dumping the mapping and documents in the .kibana index when this error is encountered. If the index contains nothing but a config document then I'm pretty sure this is what's happening.
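
A quick sketch of that verification against a local, unsecured Elasticsearch:

# On an affected cluster the "type" property shows up as text instead of keyword.
curl -X GET "http://localhost:9200/.kibana/_mapping?pretty"

# If the hits contain nothing but a "config" saved object, the index was
# auto-created by a request that raced the migration.
curl -X GET "http://localhost:9200/.kibana/_search?size=100&pretty"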

I was able to reproduce this issue in a 7.1.1 environment. Cluster details:

  • Elasticsearch 7.1.1 with 6 data nodes + 3 dedicated masters + 2 coordinating only nodes
  • Kibana 7.1.1 is configured to talk to 2 coordinating only nodes (elasticsearch.hosts setting). Kibana has 4 spaces.
  • Security is enabled on the cluster with the native and LDAP authentication realms; SSL is configured for both TCP and HTTP on the Elasticsearch cluster as well as on Kibana.

We first ran into this issue when, due to a hardware failure, we had to stop all Elasticsearch nodes, delete all contents in the data directory of every Elasticsearch node, and start them back up again. Kibana was not stopped during the full cluster restart.

We were able to reproduce this issue by deleting the .kibana* indices without stopping the Kibana service.

To fix this issue we took the following steps (roughly as sketched below):

  • Shut down the Kibana service
  • Deleted the .kibana* indices; we decided to delete them as there was no data in the .kibana indices
  • Started the Kibana service
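
A minimal sketch of those steps, assuming a systemd-managed Kibana service named kibana and an unsecured Elasticsearch on localhost:9200 (adjust the service name, host, and authentication to your setup):

# Stop Kibana so nothing can race the index recreation, drop the broken indices,
# then start Kibana again so it recreates .kibana_1 with the correct mappings.
sudo systemctl stop kibana
curl -X DELETE "http://localhost:9200/.kibana*"
sudo systemctl start kibana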

Hello !!

I spawned clusters this morning until I faced the issue again (on the third one):

@rudolf
For the Kibana logs: it seems to be a race problem indeed:

kibana_1 and kibana_2 are created; on Kibana 1, I have an error about:

Invalid alias name [.kibana], an index exists with the same name as the alias

and all Kibana instances log:

Another Kibana instance appears to be migrating the index. Waiting for that migration to complete.

kibanalog.txt

@spalger
For the .kibana mapping: it seems empty indeed:

mapping_kibana.txt

mapping_kibana_1.txt

mapping_kibana_2.txt

Edit: The steps mentioned by @navneet83:

  • stopping all Kibanas
  • deleting the indices .kibana, .kibana_1, .kibana_2
  • then starting only 1 Kibana

fixed the issue.

To fix it in our script, we now enable only 1 Kibana at bootstrap, and once .kibana_1 is created successfully, the script launches the other instances (sketched below).
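
A rough sketch of that bootstrap ordering, assuming systemd-managed services, an unsecured Elasticsearch on localhost:9200, and SSH access to the other Kibana hosts (kibana02 and kibana03 are placeholder names):

# Start the first Kibana alone so it runs the saved objects migration by itself.
sudo systemctl start kibana

# Wait until the migration has created the .kibana_1 index.
until curl -sf "http://localhost:9200/_cat/indices/.kibana_1" > /dev/null; do
  sleep 5
done

# Only then start the remaining Kibana instances.
ssh kibana02 'sudo systemctl start kibana'
ssh kibana03 'sudo systemctl start kibana'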

@tbuchier I've been able to reproduce the issue and it is indeed, as spalger guessed, a race condition with the migration system. We block all operations against Elasticsearch until initialization of indices and migrations are complete. A logic bug allowed operations to proceed even if initialization and migration were still in progress. This caused some plugins to start writing to the .kibana index, and Elasticsearch would automatically create an index with incorrect mappings.

The good news is that this has been fixed and released in 7.2.0 (https://github.com/elastic/kibana/pull/37674)

Thanks for your help in debugging this and for linking all the discuss topics to this issue!

@rudolf Hi, I am facing this issue in 7.2.0 as well. Kibana repeatedly asks for the index pattern and the ES log shows a fielddata error:
"Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [process.name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.", "at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:711) ~[elasticsearch-7.2.0.jar:7.2.0]", "at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:116) ~[elasticsearch-7.2.0.jar:7.2.0]",

@ntsh999 We use GitHub for reproducible bug reports only. If you can reproduce this behaviour on 7.2, please open a new issue on GitHub and share the steps. However, if you're looking for help, please start a new topic on our discuss forums at https://discuss.elastic.co/. Please include all the logs from Elasticsearch and Kibana, as well as any other relevant information such as how you created the cluster and whether you had done any upgrades from earlier versions of the ELK stack.

For those finding this thread, here is what I've done on my cluster to make it work:

  1. delete the .kibana index
  2. recreate it with the correct mapping:
PUT /.kibana
{
    "aliases": {},
    "mappings": {
      "properties": {
        "config": {
          "properties": {
            "buildNum": {
              "type": "long"
            }
          }
        },
        "index-pattern": {
          "properties": {
            "fields": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            },
            "timeFieldName": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            },
            "title": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        },
        "type": {
          "type": "keyword",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "updated_at": {
          "type": "date"
        }
      }
    },
    "settings": {
      "index": {
        "number_of_shards": "5",
        "number_of_replicas": "1"
      }
    }
}


(be sure the number of shards and replicas matches your needs)
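
As a quick sanity check after recreating the index (assuming Elasticsearch on localhost:9200), the type field should now come back as keyword:

# "type" must be mapped as keyword, not text, for the saved objects search to work.
curl -X GET "http://localhost:9200/.kibana/_mapping/field/type?pretty"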

@allan-simon Fantastic! That worked great for me!

@allan-simon thanks as well, you saved my evening.

@allan-simon Cheers! Spent ages trying to figure this out on AWS Elasticsearch service tonight before finding your post which worked perfectly!
