One of the indices in my Elasticsearch installation contains documents of about one megabyte each. Kibana tries to load all of them at once (500 by default), which in my case amounts to about 500 megabytes of network traffic.
Is it possible to select which fields are loaded, so that huge fields can be excluded?
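For reference, this is roughly the kind of field selection I have in mind, expressed as a plain Elasticsearch source filtering request (the index and field names here are just examples, not from my actual mapping):

```js
// Rough sketch: exclude a huge field at query time with source filtering.
// "myindex" and "huge_field" are placeholder names.
const body = {
  size: 500,                              // Discover's default sample size
  query: { match_all: {} },
  _source: { excludes: ['huge_field'] }   // don't return this field with each hit
};

fetch('http://localhost:9200/myindex/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body)
})
  .then(resp => resp.json())
  .then(resp => console.log(resp.hits.hits)); // _source no longer contains huge_field
```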
We've talked about adding a field-level setting saying "ignore this field in Kibana", which would prevent the field from being loaded by Discover or being usable in Visualize. Does that sound like it might work?
Hi guys, we have developed a quick patch for this issue that allows specifying the source filtering patterns (and then shows checkboxes for the fields which are excluded or included). It looks like this:
(screenshot: the index pattern fields list with include/exclude checkboxes)
We could provide it if you like (though we really did a quick thing, and editing patterns by hand may not be everyone's cup of tea).
@jccq I like the idea of exposing source filtering at the index pattern level.
This is different from disabling a single field, as an excluded field would still be available for aggregating and such. Maybe that's all we need.
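As an illustration, a request along these lines would still aggregate on a field even after it has been filtered out of `_source` (the names are hypothetical, and the field is assumed to be mapped as an aggregatable type):

```js
// Sketch: "big_field" is excluded from _source, so it is not returned with the
// hits, but it is still indexed and therefore still usable in aggregations.
const body = {
  size: 500,
  _source: { excludes: ['big_field'] },                // hits come back without big_field
  aggs: {
    big_field_terms: { terms: { field: 'big_field' } } // but aggregating on it still works
  }
};
```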
It would probably be better to allow the "saved search"/Discover table to load only specific fields.
@jccq where can I get the source filtering patch?
@tulipmind please see the patch at https://github.com/scampi/kibana/commit/b2b9ef8e5e930b7ae89412841d308158a57f3bc5; it was tested against the 4.1 branch.
This adds a third tab to the index view in Settings, as well as a column that indicates whether each field is retrieved or not. You set the source filtering configuration in the new tab; there are some examples there (basically the JSON object at the end of the page https://www.elastic.co/guide/en/elasticsearch/reference/master/search-request-source-filtering.html).
When you update the configuration, the retrieved column on the first tab is updated automatically.
You can test it by adding a visualization based on a saved search over the index you just configured. If you inspect the network traffic, you will see that the _msearch query includes the source filtering clause you just added.
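Concretely, the _msearch payload ends up looking roughly like this (the index pattern and the excluded field are just examples):

```js
// Rough sketch of the _msearch payload once the index pattern carries a
// source filtering configuration.
const header = { index: ['myindex-*'] };
const search = {
  size: 500,
  query: { match_all: {} },
  _source: { excludes: ['attachment.content'] } // the clause injected by the patch
};

// _msearch expects newline-delimited JSON: header line, body line, trailing newline.
const payload = JSON.stringify(header) + '\n' + JSON.stringify(search) + '\n';
```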
I reproduced this with documents of around 500 KB retrieved in the Discover page. Running an Elasticsearch instance and the Kibana instance on the same local host, the page hangs. I have attached an example document (example.txt).
Here is the resource usage from the developer console:
(screenshot: resource usage from the developer console)
It looks like a lot of time is spent on the JavaScript heap, probably related to loading the document into memory.
Another option could be to create a filter similar to the short dots filter (https://github.com/elastic/kibana/blob/master/src/ui/public/filters/short_dots.js#L17) that truncates long messages.
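A minimal sketch of what such a truncating filter could look like (the 350-character cutoff is an arbitrary example, not a Kibana default):

```js
// Sketch of a truncation helper in the spirit of short_dots.js.
function truncateValue(value, maxLength = 350) {
  if (typeof value !== 'string' || value.length <= maxLength) {
    return value;
  }
  // Keep the start of the string and signal that it was cut off.
  return value.slice(0, maxLength) + '...';
}
```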
This is the CPU usage, and time spent:
(screenshot: CPU usage and time spent, from the developer console)
May I ask if there was any progress on this issue? It seems like there was agreement that something needed to be done, but I am not sure whether the problem was addressed or not.
@javanna it was not addressed, and we really haven't reached consensus on how to address it.
@javanna @spalger
A while ago I implemented a new transform function for large strings. It truncates the string and adds an ellipsis when it is shown in the table view: https://github.com/elastic/kibana/pull/5280.
You can still see the full JSON document if you go to the JSON tab. I have been using this in some use cases already and it's working fine. But this is just a workaround for when you know which field is very large; the problem will still persist if the document itself is huge (a large number of fields). Still, it is worth trying as a simple workaround.
There's this: https://github.com/elastic/kibana/pull/7402