Kibana: Missing mappings

Created on 2 Mar 2015 · 46 comments · Source: elastic/kibana

Since upgrading to Kibana 4 (or is it ES 1.4.4 that's the problem? not sure...), I've not been able to graph or analyse any of my fields outside of the standard "syslog" type fields.

Anything I've added through Grok is in ES, but when I mouse over the fields in Kibana 4 I get "no cached mapping for this field". When I refresh the field mappings from the Settings page, nothing changes.

Anyone else having these issues?

Labels: feedback_needed, not reproducible

All 46 comments

What version did you upgrade from? Try deleting the index pattern and recreating it. Don't worry, none of your linked objects will be deleted and all will re-bind when you recreate the pattern.
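Normally you'd do that from Settings > Indices, but the pattern is just a document in the .kibana index, so it can also be removed with the API; a rough sketch, assuming a default localhost install and the daily logstash pattern id:

```sh
# delete the saved index-pattern document (square brackets in the id are URL-encoded),
# then recreate the pattern from Settings > Indices in Kibana
curl -XDELETE 'http://localhost:9200/.kibana/index-pattern/%5Blogstash-%5DYYYY.MM.DD'
```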

I upgraded from

  • ES 1.4.1
  • Kibana 4beta3

What do I do in order to delete the index pattern?

[edit]
If you mean removing the [logstash]-dd-mm-yyyy pattern and re-creating it, that doesn't seem to have done anything.

I'll try removing the .kibana index and see if that does anything.

[edit 2]

Nope, no dice. Still the same problem. The "stock" fields seem to be there, but nothing is being cached for any of the ones LS generates.

Upgrades from the betas are not supported. Can you post your elasticsearch mapping? Are you sure the fields are being created?

I'm not sure what "stock" fields are. Kibana doesn't know anything about the logstash schema.

Hmm, the result of the /_all/_mappings/ call is about 300k lines, so a bit overblown for posting here.

The fields _are_ being created, because Kibana 3 is still absolutely fine.
In addition, Kibana 4 can _see_ them, but it's not caching the fields.

See below:

json { "_index": "logstash-2015.03.03", "_type": "nginx", "_id": "AUvgVWg_NKFI3n7i6FUN", "_score": null, "_source": { "message": "195.171.18.227 - - [03/Mar/2015:15:51:08 +0000] \"GET /common/scripts/menu.js HTTP/1.1\" 200 148 \"https://go.ourwebsite.com/\" \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)\"", "@version": "1", "@timestamp": "2015-03-03T15:51:08.000Z", "file": "/var/log/nginx/website-ssl.access", "host": "en-lb", "offset": "317740801", "type": "nginx", "tags": [ "website", "http" ], "client_ip": "195.171.18.227", "http_user": "-", "http_action": "GET", "http_request": "/common/scripts/menu.js?rnd=1425397868259", "http_version": "1.1", "request_duration": 0, "http_gzip_ratio": 0, "body_bytes_sent": 0, "http_referer": "https://go.ourwebsite.com/", "http_user_agent": "\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)\"", "http_status_code": "200", "timestamp": "03/Mar/2015:15:51:08 +0000", "clientprefix": "en", "geoip": { "ip": "195.171.18.227", "country_code2": "GB", "country_code3": "GBR", "country_name": "United Kingdom", "continent_code": "EU", "latitude": 54, "longitude": -2, "timezone": "Europe/London", "location": [ -2, 54 ] } }, "fields": { "@timestamp": [ 1425397868000 ] }, "highlight": { "file": [ "/var/log/@kibana-highlighted-field@nginx@/kibana-highlighted-field@/website-ssl.access" ], "type": [ "@kibana-highlighted-field@nginx@/kibana-highlighted-field@" ], "type.raw": [ "@kibana-highlighted-field@nginx@/kibana-highlighted-field@" ] }, "sort": [ 1425397868000 ] }```

Additionally, here's what I see in Kibana4:

tooltip_093

We don't need the mapping from _all, just one of the affected indices
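For example (assuming a default localhost install), the mapping for a single daily index from the sample document above:

```sh
curl -s 'http://localhost:9200/logstash-2015.03.03/_mapping?pretty'
```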

It affects all indices that Kibana tries to access. Nothing has changed in the index structure and the templates are managed by Logstash itself.

Interestingly, this only occurs when I use the "Use event times to create index names" flag when setting the index for Kibana.

However, if I _don't_ use that flag, the fields are detected BUT Kibana slows to a crawl and times out on practically every query.

Is there anywhere I can look for more detailed logging?

I understand it is affecting all of the indices. It's possible there is a bug in the mapping parser, so I'd like to attempt to reproduce the situation; a sample of the mapping of one of the indices would be really useful here.

Seeing the same issue here. Parts of our mapping (over 100KB) are dynamic, but the core of it looks like this:

Mappings
http://pastebin.com/PTKuiDde

To add to my previous comment: I also took a look at the list of fields found under Settings -> Indices, and there are quite a few missing that do show up in the mapping.

Here's a sample of an affected index:

http://pastebin.com/VhmJJURw

@opennomad I can see the issue with your mapping: you start your root objects with an underscore. Underscores have a special meaning in Elasticsearch, which it does not strictly enforce. Kibana, however, needs to know where to find fields, so it does enforce this. You'll need to rename those roots and omit the leading underscore.
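To illustrate with a hypothetical sketch (the index, type, and field names below are made up, not taken from the pastebin), the root would need to be recreated without the underscore, e.g. `meta` instead of `_meta`:

```sh
# hypothetical: define the root object as "meta" rather than "_meta"; existing data
# would have to be reindexed, since fields can't be renamed in place in Elasticsearch
curl -XPUT 'http://localhost:9200/myindex/_mapping/mytype' -d '{
  "mytype": {
    "properties": {
      "meta": {
        "properties": {
          "request_id": { "type": "string" }
        }
      }
    }
  }
}'
```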

@poolski can you also supply the cached mapping? It is available via GET /.kibana/index-pattern/some-index
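e.g. something along these lines, assuming a default localhost install:

```sh
# "some-index" is the index pattern's id as shown in Settings, e.g. [logstash-]YYYY.MM.DD
# (the square brackets need URL-encoding on the command line)
curl -s 'http://localhost:9200/.kibana/index-pattern/%5Blogstash-%5DYYYY.MM.DD?pretty'
```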

@rashidkpc I can't seem to get any index-specific cached mappings from the .kibana index. Here are the mappings I pulled out using Kopf:

json { "config": { "properties": { "defaultIndex": { "type": "string" }, "discover:sampleSize": { "type": "string" }, "buildNum": { "type": "long" }, "histogram:maxBars": { "type": "string" }, "histogram:barTarget": { "type": "string" } } }, "index-pattern": { "properties": { "title": { "type": "string" }, "customFormats": { "type": "string" }, "intervalName": { "type": "string" }, "timeFieldName": { "type": "string" }, "fields": { "type": "string" } } }, "dashboard": { "properties": { "kibanaSavedObjectMeta": { "properties": { "searchSourceJSON": { "type": "string" } } }, "title": { "type": "string" }, "hits": { "type": "integer" }, "description": { "type": "string" }, "panelsJSON": { "type": "string" } } }, "visualization": { "properties": { "kibanaSavedObjectMeta": { "properties": { "searchSourceJSON": { "type": "string" } } }, "title": { "type": "string" }, "description": { "type": "string" }, "visState": { "type": "string" }, "savedSearchId": { "type": "string" } } }, "search": { "properties": { "kibanaSavedObjectMeta": { "properties": { "searchSourceJSON": { "type": "string" } } }, "title": { "type": "string" }, "hits": { "type": "integer" }, "description": { "type": "string" }, "columns": { "type": "string" } } } }```

I am experiencing this exact same situation.

What can I provide to help with the investigation?

Here too. I'm seeing a bunch of fields that aren't in the mapping, and refreshing the field index doesn't do anything. I recently upgraded to 4.0.1 from 4.0.0-beta3, but I had never tried to refresh mapping updates before, so I'm not sure whether it worked in beta3.

I've also hit this problem after upgrading to 4.0.1, where my Kibana is struggling to index anything. I haven't tried rolling back; I upgraded from 4 RC1. I have deleted my .kibana index and tried to recreate it, but it fails to index at all, so now I have no index.

edit: I've rolled back to Kibana 4 RC1 and still have the problem, so it must be the Elasticsearch upgrade that caused this. That's odd, though, as I can browse the data items with elasticsearch-head.

@dariusjs in our case, we tried reconfiguring to a new .kibana index (.kibana_test); it was created properly, and for a brief moment all the JSON-derived fields were back. After that, they vanished again.

We also tried deleting the logstash.YYYY.MM.DD index and recreating it and had the same thing happen.

I'm happy to provide any information that can help!

@rashidkpc - is there any information (Kibana config, Elasticsearch type mapping, etc.) I could send your way that would help with this issue?

Note: in our specific instance of this, we've solved it. In case anyone else runs into it: logs from hosts with incorrect timestamps set in the future play very badly with how Kibana determines field popularity.
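If it helps, one way to spot the offending documents (a rough sketch, assuming the usual logstash-* indices and @timestamp field) is to ask Elasticsearch for anything dated in the future:

```sh
# return a handful of documents whose @timestamp is later than "now"
curl -s 'http://localhost:9200/logstash-*/_search?pretty' -d '{
  "query": { "range": { "@timestamp": { "gt": "now" } } },
  "size": 5
}'
```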

taoetek, in my case though, after deleting the .kibana index I still get no new indexing happening; I just have an empty index.

After troubleshooting, there is something I cannot explain that was causing my setup to break when it had been working fine for over a year. The indices were being created OK, but my 4-node Elasticsearch cluster was acting up. My first node was basically refusing to join the cluster and kept timing out, while the other nodes were showing cluster health green with 4 nodes online. I disabled Elasticsearch on the primary node and things just suddenly work; reinstalling Elasticsearch, installing a fresh config, and zapping /var/lib/elasticsearch didn't help. I would say I don't have the issue reported in the original report here.

@poolski try GET /.kibana/index-pattern/_search?pretty

@rashidkpc, here you go:

{
  "took": 405,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": ".kibana",
        "_type": "index-pattern",
        "_id": "[logstash-]YYYY.MM.DD",
        "_score": 1,
        "_source": {
          "title": "[logstash-]YYYY.MM.DD",
          "timeFieldName": "@timestamp",
          "intervalName": "days",
          "customFormats": "{}",
          "fields": "
[{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"logsource","count":0,"scripted":false},
{"type":"string","indexed":false,"analyzed":false,"name":"_source","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"type","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"envtype","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"@version","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"timestamp","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":false,"name":"_type","count":0,"scripted":false},
{"type":"string","indexed":false,"analyzed":false,"name":"_id","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"file","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"offset","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"instance","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"tags","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"host","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"syslog_severity","count":0,"scripted":false},
{"type":"number","indexed":true,"analyzed":false,"doc_values":false,"name":"syslog_severity_code","count":0,"scripted":false},
{"type":"string","indexed":false,"analyzed":false,"name":"_index","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"clientprefix","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"pid","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"message","count":0,"scripted":false},
{"type":"date","indexed":true,"analyzed":false,"doc_values":false,"name":"@timestamp","count":0,"scripted":false},
{"type":"number","indexed":true,"analyzed":false,"doc_values":false,"name":"syslog_facility_code","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"syslog_facility","count":0,"scripted":false},
{"type":"string","indexed":true,"analyzed":true,"doc_values":false,"name":"program","count":0,"scripted":false}]"
        }
      }
    ]
  }
}

Interesting, I still can't reproduce this or find a reason it would happen.

@kyrill, I also had the same response from this specific Elasticsearch node I was dealing with. Do you have another one in your cluster that you can connect Kibana to?

I'm having this issue as well. I refreshed mappings for other index patterns that hadn't been refreshed in a while and saw a big drop in field count. After the initial drop in count it looks like it's consistent, i.e. fields aren't appearing and disappearing. Is there any good way of debugging the field mapping code in Kibana to see why it's skipping fields?

I figured this out, at least for my case. Kibana only seems to check field mappings for the 5 "latest" indices in a time-based index pattern. I had some syslog clients with severely skewed clocks, and had 8 indices in the future which contained much older data and mappings. I deleted those indices, refreshed, and I got more fields.
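Roughly the steps, in case it helps anyone else (the index name in the DELETE is illustrative, not one of my actual indices):

```sh
# list matching indices and look for names dated in the future
curl -s 'http://localhost:9200/_cat/indices/logstash-*?v'

# delete an offending future-dated index, then refresh the field list in Settings > Indices
curl -XDELETE 'http://localhost:9200/logstash-2015.06.01'
```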

@pall-valmundsson thanks! That fixed it in my case, too.

Similar problem: I can't get Kibana 4.0.1 to recognize long mappings that it recognized in 4 RC1.

@pall-valmundsson This fixed my problem as well. I was accidentally overwriting the timestamp field with a future time. Once I deleted the indices that had a future timestamp, my values began to map correctly.

@pall-valmundsson @ahahtyler - that doesn't explain why kibana4-beta4 didn't have this issue. I can't imagine that the workings of the timestamp parser have been altered significantly in the release version.

@poolski I don't know the codebase (or JavaScript, for that part :) very well, but a quick search suggests that https://github.com/elastic/kibana/commit/f051742a3f5e9613e918f6f1fb0ccd43a4671f93 might be the commit that introduced the issue; line 53 caught my attention. It's in beta3 onwards. Have you explicitly verified that beta4 (which doesn't seem to be tagged) does not have the issue?

I've had a closer look at our infrastructure and it does indeed look like something flipped out and generated a whole bunch of indices that were far ahead in the future which didn't have any mappings.

It might be worth documenting this somewhere as it's not quite a bug, but it'll catch people unawares.

@rashidkpc, I don't know if you've been following this issue since you marked it 'not reproducible' but we seem to have figured out the issue and I think it should be addressed in some form, either by documentation or a change in functionality (or even both).

I am facing the same issue as well. Any idea what change is required? If it's a configuration change, I will try it out.

I faced the same problem.

@pall-valmundsson just re-read your Apr 2 narrowing of the problem. Is the issue the value of indexname.settings.index.creation_date? Or is it in the timestamp values in the field selected for Time-field name when configuring an index pattern in Settings? Or is it an issue with the number of indices I'm maintaining in Kibana (~8)?

My use case calls for timestamps in the future (i.e. tracking data around scheduled events) so if it is the selected field problem, a change in functionality or a config option in the vein of allow_events_in_the_future = true would be more desirable.

However, this is currently happening on new indices without time values. Here is the problem in the smallest form I could reproduce:

  • the index imports in Settings and recognizes the numerical field mapping
  • in Discover, I see 1/1 records, but they have a warning flag which instructs to "Refresh your mapping from the Settings > Indices page"
  • in Settings > Indices I can "Reload field list" (which seems to have no effect) but I don't see anything labeled "Refresh"
    image

"Or is it in the timestamp values in the field selected for Time-field name when configuring an index pattern in Settings?"

Yes. I think that's my issue at least.

Since _index, _id, _source and _type should be in all documents, I'm guessing you're experiencing some other issue.

I faced the same issue with LS 1.4.2. Refreshing the particular index did not help. However, when I refreshed the main (parent) index pattern, i.e. logstash-*, it worked fine! The "no cached mapping" error went away.

I was also facing the same issue.
I have now upgraded to Kibana 4.1.* and everything is working for me.

I had this issue with 4.0.* builds but just upgraded to 4.1 and all issues were automagically resolved. Not that I like that I spent several days debugging, but I'm just glad I can see all my cached index variables now.

Since we've had 2 reports that this is fixed in 4.1 I'm going to close this. Cheers

Hi @rashidkpc!
I run Kibana 4.1.1 and just hit this issue.

The issue happened when I set a fixed mapping for 2 fields, sender_address and recipient_address, to string plus conversion to lowercase.

I noticed my saved filters were not working, so I decided to rebuild the indices. At first the mappings were missing, so I cleared my browser's cache and restarted Kibana. Now the docs where the changed fields were present are not visible to Kibana, but the mappings are still there:

http://pastebin.com/pGMEPDa2
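For context, the change was along these lines (a rough sketch from memory, not the exact mapping in the pastebin; analyzer and index names are illustrative):

```sh
# recreate the index with a custom analyzer that keeps the whole address as one
# lowercased token, and apply it to the two string fields in question
curl -XPUT 'http://localhost:9200/mail-index' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_keyword": { "type": "custom", "tokenizer": "keyword", "filter": ["lowercase"] }
      }
    }
  },
  "mappings": {
    "message": {
      "properties": {
        "sender_address":    { "type": "string", "analyzer": "lowercase_keyword" },
        "recipient_address": { "type": "string", "analyzer": "lowercase_keyword" }
      }
    }
  }
}'
```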

I'm using 4.1.1 and this issue still seems present. These two pictures should illustrate it.

Please re-open. This has not been fixed. The 'Kibana only seems to check field mappings for the 5 "latest" indices in a time-based index pattern' issue still seems to be there. And in my case there is no clock skew; rather, I was adding older events into time ranges further back than 5 days.

Refresh mappings using the index pattern logstash-* and all fields (199) are found.

image

Refresh mappings using the daily index pattern, i.e. [logstash-]YYYY.MM.DD, and it won't find the new fields.

image

In my case, the new document type 'apache-access' was for a date range up until 2015/08/12, with no more apache docs dated after that. I attempted to refresh the fields on 2015/08/17, and the [logstash-]YYYY.MM.DD index pattern failed to look far back enough to see the new fields (for older events).

Okay, I just saw that under Advanced Settings you can change the default and extend how far back it looks. This fixed my problem:

image
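For anyone searching later: if memory serves, the setting in that screenshot is indexPattern:fieldMapping:lookBack (how many recent indices of a time-based pattern Kibana samples for field mappings, default 5). It can be raised under Settings > Advanced, or, as a sketch, via the API, assuming Kibana 4.1.x keeps its advanced settings in a config document keyed by version:

```sh
# bump the look-back window to 30 indices; the setting name and config-document id
# are best-effort recollections, so verify them against your own .kibana index first
curl -XPOST 'http://localhost:9200/.kibana/config/4.1.1/_update' -d '{
  "doc": { "indexPattern:fieldMapping:lookBack": 30 }
}'
```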
