ℹ️ An explanation of why this does not work, and of what does and does not work with 64-bit numbers in Kibana, can be found in the following issue: https://github.com/elastic/kibana/issues/40183
The following screenshot shows some events in a standard table panel. Notice the values in the "datumlop" column.

Using the query generated by the inspect tool, I get the following JSON data from Elasticsearch:
{
  "took" : 18,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 258,
    "max_score" : null,
    "hits" : [ {
      "_index" : "logstash-2014.06.02",
      "_type" : "amoeba-event",
      "_id" : "xdTn3af8Rg-SE0zL3dRERQ",
      "_score" : null,
      "_source" : {"event":"LevFrUppdaterad","bestNr":"3_4_in_fnr2","datumlop":20140430000191426,"objekttyp":"31A","trtyp":"INSERT","objektId":{"id":"c0f9fba5-1c91-4369-89ad-e323d87575b1"},"@version":"1","@timestamp":"2014-06-02T17:08:00.194Z","host":"myhost","path":"/app/amoeba/events.log","type":"amoeba-event"},
      "sort" : [ 1401728880194, 1401728880194 ]
    }, {
      "_index" : "logstash-2014.06.02",
      "_type" : "amoeba-event",
      "_id" : "SZA-pLboQgWb1Tag3zupeA",
      "_score" : null,
      "_source" : {"event":"LevFrUppdaterad","bestNr":"3_4_in_fnr2","datumlop":20140430000191424,"objekttyp":"31A","trtyp":"INSERT","objektId":{"id":"63860192-3a34-4132-aa84-23d969381dea"},"@version":"1","@timestamp":"2014-06-02T17:08:00.148Z","host":"myhost","path":"/app/amoeba/events.log","type":"amoeba-event"},
      "sort" : [ 1401728880148, 1401728880148 ]
    }, {
      "_index" : "logstash-2014.06.02",
      "_type" : "amoeba-event",
      "_id" : "b9v5c0XjTBuoNH-dF7IUjg",
      "_score" : null,
      "_source" : {"event":"LevFrUppdaterad","bestNr":"3_4_in_fnr2","datumlop":20140430000191422,"objekttyp":"31A","trtyp":"INSERT","objektId":{"id":"c73ed30b-7a1b-4352-bc09-824bccb105b3"},"@version":"1","@timestamp":"2014-06-02T17:08:00.103Z","host":"myhost","path":"/app/amoeba/events.log","type":"amoeba-event"},
      "sort" : [ 1401728880103, 1401728880103 ]
    }, {
      "_index" : "logstash-2014.06.02",
      "_type" : "amoeba-event",
      "_id" : "VFABxKF9T0yfEt5xyIaSWw",
      "_score" : null,
      "_source" : {"event":"LevFrUppdaterad","bestNr":"3_4_in_fnr2","datumlop":20140430000191420,"objekttyp":"31A","trtyp":"INSERT","objektId":{"id":"8626b062-6cbd-4e8d-85a0-cb32b6bc26f9"},"@version":"1","@timestamp":"2014-06-02T17:08:00.056Z","host":"myhost","path":"/app/amoeba/events.log","type":"amoeba-event"},
      "sort" : [ 1401728880056, 1401728880056 ]
    }, {
      "_index" : "logstash-2014.06.02",
      "_type" : "amoeba-event",
      "_id" : "lnudOdJ6QOmR6fTB-YExZA",
      "_score" : null,
      "_source" : {"event":"LevFrUppdaterad","bestNr":"3_4_in_fnr2","datumlop":20140430000191418,"objekttyp":"31A","trtyp":"INSERT","objektId":{"id":"172fb4ff-2709-444d-a0a4-daf54e3541da"},"@version":"1","@timestamp":"2014-06-02T17:08:00.009Z","host":"myhost","path":"/app/amoeba/events.log","type":"amoeba-event"},
      "sort" : [ 1401728880009, 1401728880009 ]
    } ]
  }
}
As you can see, the values in the field "datumlop" do not correspond to what is presented in Kibana.
I'm using Kibana 3.1.0, downloaded from elasticsearch.org, in Firefox 26.0 running on Linux.
This is due to the datumlop field being a 64-bit integer. Unfortunately, JavaScript can't handle numbers this large: its numbers are IEEE 754 doubles, which can only represent integers exactly up to 2^53 - 1. We're working on a solution, but we're not quite there yet. I'm going to change the title of this issue to reflect this fact.
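As a minimal illustration (using a datumlop value from the response above), parsing such a long in JavaScript silently rounds it to the nearest representable double:

// Number.MAX_SAFE_INTEGER is 2^53 - 1 = 9007199254740991; integer
// literals beyond it are rounded when parsed into a double.
const n = JSON.parse('{"datumlop": 20140430000191426}').datumlop;
console.log(n);                       // 20140430000191424 -- the last digits change
console.log(Number.isSafeInteger(n)); // false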

I see. What would be the best workaround until a solution is in place?
Maybe we could begin logging datumlop as strings instead? Will there be a problem if the old events have integer datumlops and the new events have strings?
Storing datumlop as a string would help. Simply start sending the values to Elasticsearch in quotes, and when midnight rolls over Logstash will create a new index with them mapped as strings. The downside is that the field will be sorted as a string as well (see the sketch below).
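To make that sorting caveat concrete, a quick JavaScript sketch with hypothetical values:

// Lexicographic (string) order differs from numeric order as soon as
// the values have different digit counts:
["9", "10", "100"].sort();          // ["10", "100", "9"]
[9, 10, 100].sort((a, b) => a - b); // [9, 10, 100]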
This issue likely still exists in Kibana 4 for the same reasons it existed in Kibana 3.
I'm being bitten by the same issue on 5.0.0-alpha3. I assume there are no good solutions for this except storing these big long values as strings...
+1 for this issue, kibana-5.4.3
+1 for this issue.
+1 for this issue
Given that Kibana's console has become the de facto tool for interacting with Elasticsearch and creating bug recreations, we have received quite a number of bug reports that looked like data corruption (not the best kind of bug) but actually boiled down to this issue. A fix, or a way to more easily identify numbers that haven't been displayed with full precision, would be greatly appreciated.
> a way to more easily identify numbers that haven't been displayed with full precision
For the record, I was thinking about something like Number.isInteger(x) && !Number.isSafeInteger(x) to indicate that a number has likely lost some precision. It won't cover all cases, but I think it would cover the majority of them, i.e. longs that use more than 53 bits.
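For illustration, a minimal JavaScript sketch of that heuristic (the function name is hypothetical):

// Hypothetical helper implementing the heuristic above: an integer
// outside the safe range (|x| > 2^53 - 1) is not guaranteed to
// round-trip through an IEEE 754 double, so it may have lost precision.
function mayHaveLostPrecision(x) {
  return Number.isInteger(x) && !Number.isSafeInteger(x);
}

mayHaveLostPrecision(20140430000191426); // true  -- candidate for a UI warning
mayHaveLostPrecision(1401728880194);     // false -- fits comfortably in 53 bits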
+1 for this issue, kibana-6.3.0
As a workaround, you can use the String field formatter for that number and make sure it is quoted in the _source:
PUT test-nano
{
  "mappings": {
    "doc": {
      "properties": {
        "nano": {
          "type": "long"
        }
      }
    }
  }
}

PUT test-nano/doc/1
{
  "nano": "1532006837514634921"
}
+1 for this issue, Kibana-6.6.0
Not having full-precision timestamps in Kibana makes it display log lines in an undefined order: if multiple lines share the same millisecond, Kibana does not respect the ordering of those messages when sorting by timestamp. This makes Kibana a poor system for displaying logs.
A detailed explanation of why 64-bit numbers cause problems, along with a list of what works to what extent and what does not, can be found in https://github.com/elastic/kibana/issues/40183.
+1 for this issue, kibana-6.0.0