Hi All,
I use nxlog to collect Windows performance stats and store them in Elasticsearch and have a dashboard with 14 visualizations to show all different stats over time such as ASP.NET requests, processor time, available memory, disk read/write, etc.
There are two sources of perfmon data: A includes 5 servers and B includes 18 servers. The perfmon data is stored in two ES clusters on servers with plenty of CPU, RAM, and SSD. The same Kibana dashboard (copied via export and import) is used for A and B on separate Kibana instances.
My issue occurs when I use Google Chrome to view the dashboards:
With the perfmon dashboard for A (5 servers), the Chrome tab uses around 200MB of memory.
With the perfmon dashboard for B (18 servers), the Chrome tab uses up to 1GB of memory and is usually very slow, to the point of being unusable.
When I try to open the perfmon dashboard for B in Firefox, Firefox hangs with an unresponsive-script warning:
Script: http://server:port/bundles/kibana.bundle.js?v=9689:3698
Error: [$rootScope:inprog] $digest already in progress
http://errors.angularjs.org/1.4.7/$rootScope/inprog?p0=%24digest (http://server:port/bundles/commons.bundle.js?v=9689:27679)
Version: 4.4.0
Build: 9689
Error: Error: [$rootScope:inprog] $digest already in progress
http://errors.angularjs.org/1.4.7/$rootScope/inprog?p0=%24digest (http://server:port/bundles/commons.bundle.js?v=9689:27679)
__WEBPACK_AMD_DEFINE_RESULT__</window.onerror@http://server:port/bundles/commons.bundle.js?v=9689:63832:25
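For context, the `[$rootScope:inprog]` error above comes from AngularJS's re-entrancy guard: `$digest` (or `$apply`) was triggered while a digest cycle was already running. The snippet below is a minimal toy model of that guard to show how the error arises; it is illustrative only and is not Angular's actual implementation.

```javascript
// Toy model of AngularJS's $digest re-entrancy guard (illustrative only;
// the real logic lives in Angular's $rootScope).
class Scope {
  constructor() { this.$$phase = null; this.watchers = []; }
  $watch(fn) { this.watchers.push(fn); }
  $digest() {
    // Angular refuses to start a digest while one is already in progress.
    if (this.$$phase) {
      throw new Error(`[$rootScope:inprog] ${this.$$phase} already in progress`);
    }
    this.$$phase = '$digest';
    try { this.watchers.forEach(w => w()); }
    finally { this.$$phase = null; }
  }
}

const scope = new Scope();
// A watcher that (incorrectly) kicks off another digest mid-cycle --
// the same class of bug behind the error reported above.
scope.$watch(() => scope.$digest());

let message = null;
try { scope.$digest(); } catch (e) { message = e.message; }
console.log(message); // "[$rootScope:inprog] $digest already in progress"
```

Seeing this error by itself usually points at application or plugin code calling `$apply` from inside a digest, rather than at the memory problem directly.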
The perfmon index is pretty small; each weekly index is about 1 GB. The number of documents retrieved from ES is about 12K for the last hour.
I also have much bigger daily indexes, ranging from 10 to 100 GB, and their Kibana dashboards work fine on the same Kibana instance where I have trouble viewing the perfmon dashboard.
I wonder if anyone knows possible causes, or ways to track down the cause of this issue.
ES: 2.2.0
Kibana: 4.4.0
All visualizations have similar settings
I have about 13 visualizations in this dashboard
Looks like this also happens with Kibana 4.5.0 and ES 2.3.0, though not as badly as with Kibana 4.4.0.
hi @anhlqn, could you post this to our discussion forum (https://discuss.elastic.co/c/kibana)? We like to reserve Github issues for confirmed bugs and feature requests.
Yeah, I originally posted in forum https://discuss.elastic.co/t/web-browser-high-memory-usage-for-kibana-dashboard/51992, but @tylersmalley asked me to create an issue here.
Ah sorry you're getting bounced around. In that case let's just handle it here. If you look at the size of the aggregation responses from Elasticsearch in your browser's devtools, how do they compare between the perfmon dashboard and the other dashboards that work fine? Also what date ranges are you using? How big is each individual document? While the other indices may have more data, the date range, the doc size, and the types of aggregations you're using can have a big impact on how much data is loaded.
@Bargs and @epixa I think I've included all necessary info in my first post. A few more details
The dashboard is set to look at the last 1 hour of data, which contains very few documents, with stats as below.
The dashboard includes 14 visualizations which use the same configuration as in my first post (Date Histogram > Split Lines > Sub Aggregation: Term).
If I click the up arrow at the bottom of each visualization to hide the graphed lines, Chrome's memory usage drops back to normal, so I think it's the line-drawing part that consumes lots of memory and causes the slowness to the point of being unusable. I'm not sure whether it's a server-side or client-side problem (not a dev :( ). I have other dashboards with the same number of visualizations and the same configuration, but the number of terms (lines) to be visualized is under 10 for each. As soon as more than 10 terms/lines show up, things start to slow down and lag.
Are there any particularly large fields included in any of the visualizations?
No, they are just number fields
Ok, thanks for all the extra info @anhlqn. I think this one is going to take some time to investigate. Unfortunately I don't have many spare cycles at the moment, so I'll leave it in the queue so anyone can pick it up when they have some time.
Legit...I see this as well with the beta:
What other info can I provide? Thanks.
The problem reported here still exists in Kibana 4.6.1.
It looks like a memory leak caused by jQuery's data_priv cache, which stores event handlers for DOM elements. When DOM elements are deleted "behind jQuery's back", those objects stay in the cache forever.
All "circle" elements in line_chart.js contain tooltip event handlers, created in:
https://github.com/elastic/kibana/blob/v5.0.0/src/ui/public/vislib/visualizations/line_chart.js#L173
After tooltip.destroy, the events are cleared by a jQuery "off" call (https://github.com/elastic/kibana/blob/v5.0.0/src/ui/public/binder/binder.js#L19), but the object stays in the cache, and the cache grows without bound. A line chart with many lines and "circle" elements therefore leaks memory very rapidly.
As an intermediate measure, the memory leak can be alleviated by unchecking "Show tooltip" in the visualization's view options. Other elements use much less memory.
A possible fix for the memory leak is to add a call to jQuery's "cleanData" in the destroy method: https://github.com/elastic/kibana/blob/v5.0.0/src/ui/public/vislib/visualizations/_chart.js#L82
In newer jQuery versions, however, element event/user data storage moved from the cache to a property on the element itself (https://github.com/jquery/jquery/commit/7b2f017b69651878890b228193902150ea8700d9), which should also solve problems like this.
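The leak pattern described above can be sketched without jQuery itself: a shared cache keyed per element, entries that survive direct DOM removal, and a cleanData-style sweep that releases them. Everything here (`cache`, `onEvent`, `cleanData`) is a self-contained stand-in that only mimics the idea behind jQuery's data_priv and jQuery.cleanData, not their real internals.

```javascript
// Minimal illustration of the leak: a jQuery-2.x-style private data cache
// that stores per-element event handlers outside the elements themselves.
const cache = new Map();

function onEvent(el, type, handler) {
  // Store the handler in the shared cache, like jQuery's data_priv.
  if (!cache.has(el)) cache.set(el, { events: {} });
  const events = cache.get(el).events;
  (events[type] = events[type] || []).push(handler);
}

function removeBehindJQuerysBack(el, parentChildren) {
  // D3 (or raw DOM code) removes the element directly;
  // the cache entry is never told and survives.
  const i = parentChildren.indexOf(el);
  if (i !== -1) parentChildren.splice(i, 1);
}

function cleanData(elements) {
  // The proposed fix: explicitly drop cached data for removed elements,
  // the way jQuery.cleanData does for jQuery's own removals.
  for (const el of elements) cache.delete(el);
}

// Simulate one render/teardown cycle of a chart with many "circle" elements.
const children = [];
for (let i = 0; i < 1000; i++) {
  const circle = { tag: 'circle', id: i };
  children.push(circle);
  onEvent(circle, 'mouseover', () => {});
}
const removed = children.slice();
removed.forEach(el => removeBehindJQuerysBack(el, children));
console.log(cache.size); // 1000 -- handlers still cached after DOM removal
cleanData(removed);
console.log(cache.size); // 0 -- cache released, nothing left to leak
```

Repeat the render/teardown cycle on every auto-refresh without the `cleanData` sweep and the cache grows by one entry per "circle" element each time, which matches the steadily climbing tab memory described in this thread.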
@metametaclass Thanks for the info!
We are using the latest Kibana version, 5.1.2, and when we run the dashboard overnight it crashes the Chrome tab (latest Chrome), leaving just a white screen. I will try to get more info and post it here.
+1 here
Dashboard auto-refresh in Kibana 5.1.2 crashes Chrome v55.2883.87.
The dashboard is complicated, and I don't know how to debug it...
This is still an open issue. We still hit OOM in our browser. After profiling it in Chrome, I agree with @metametaclass.
Are there any plans to fix this issue in an upcoming version?
@AnkurThakur what version of Kibana are you using?
@thomasneirynck We are using version 5.2.2 of the stack here. It seems this issue has been around for a long time.
Seems to be a bit better but it still happens on 5.3.2:
+1 for a solution within Kibana, we are seeing this issue as well, Kibana 5.5.0
We have several monitors on the NOC wall where Kibana dashboards run all day. They will crash if we don't set up an iframe with an auto-refresh of 2-4 hours, depending on the dashboard. I have users who have seen this issue on their laptops, where it has even caused machine reboots. Thanks!
The tooltip leak is still an unresolved issue, but we may have found another memory leak which will hopefully help resolve this once fixed: https://github.com/elastic/kibana/issues/13458
In order to keep these issues focused, I split out the tooltip one: https://github.com/elastic/kibana/issues/14058. The other issue, where a dashboard on auto-refresh causes OOM errors after a while, should be fixed by #13458.
Closing this, as it appears the tooltip leak can't be reproduced, and the error-handler leak on auto-refreshed dashboards is now fixed.
Please feel free to reopen this ticket, #14058, or open a new one if you are still seeing issues.