Kibana: Uncaught TypeError: Cannot read property 'call' of undefined

Created on 13 Jun 2016 · 21 comments · Source: elastic/kibana

Kibana version: 5.0.0-alpha3

OS version: Mac OS X

Original install method (e.g. download page, yum, from source, etc.): manual installation

Description of the problem including expected versus actual behavior:

I'm restarting my Elastic stack (it was working last week):

  • Start elasticsearch
  • Start kibana
  • Open http://0.0.0.0:5601/app/kibana

And get:

(screenshot: Kibana error dialog in Google Chrome, taken today at 08:42:53)

Errors in browser console (if relevant):

Uncaught TypeError: Cannot read property 'call' of undefined (http://0.0.0.0:5601/bundles/commons.bundle.js?v=12439:1)
Version: 5.0.0-alpha3
Build: 12439
Error: Uncaught TypeError: Cannot read property 'call' of undefined (http://0.0.0.0:5601/bundles/commons.bundle.js?v=12439:1)
    at window.onerror (http://0.0.0.0:5601/bundles/commons.bundle.js?v=12439:62:21502)

Provide logs and/or server output (if relevant):

Kibana logs:

$ bin/kibana
  log   [08:33:23.523] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [08:33:23.543] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [08:33:24.977] [info][status][plugin:timelion] Status changed from uninitialized to green - Ready
  log   [08:33:24.989] [info][status][plugin:console] Status changed from uninitialized to green - Ready
  log   [08:33:24.992] [info][status][plugin:kbn_doc_views] Status changed from uninitialized to green - Ready
  log   [08:33:24.997] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [08:33:25.002] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [08:33:25.005] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [08:33:25.009] [info][status][plugin:spy_modes] Status changed from uninitialized to green - Ready
  log   [08:33:25.015] [info][status][plugin:status_page] Status changed from uninitialized to green - Ready
  log   [08:33:25.020] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [08:33:25.023] [info][listening] Server running at http://0.0.0.0:5601
  log   [08:33:25.091] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready

Elasticsearch logs:

$ bin/elasticsearch
[2016-06-13 08:33:11,160][INFO ][node                     ] [U-Man] version[5.0.0-alpha3], pid[24304], build[cad959b/2016-05-26T08:25:57.564Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_60/25.60-b23]
[2016-06-13 08:33:11,163][INFO ][node                     ] [U-Man] initializing ...
[2016-06-13 08:33:11,863][INFO ][plugins                  ] [U-Man] modules [percolator, lang-mustache, lang-painless, ingest-grok, reindex, lang-expression, lang-groovy], plugins []
[2016-06-13 08:33:11,912][INFO ][env                      ] [U-Man] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [50.5gb], net total_space [464.7gb], spins? [unknown], types [hfs]
[2016-06-13 08:33:11,912][INFO ][env                      ] [U-Man] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-06-13 08:33:14,901][INFO ][node                     ] [U-Man] initialized
[2016-06-13 08:33:14,901][INFO ][node                     ] [U-Man] starting ...
[2016-06-13 08:33:14,993][INFO ][transport                ] [U-Man] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[2016-06-13 08:33:14,997][WARN ][bootstrap                ] [U-Man] initial heap size [268435456] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2016-06-13 08:33:14,997][WARN ][bootstrap                ] [U-Man] please set [discovery.zen.minimum_master_nodes] to a majority of the number of master eligible nodes in your cluster
[2016-06-13 08:33:18,047][INFO ][cluster.service          ] [U-Man] new_master {U-Man}{4_dPB8CNS7ysx0SggRcV4Q}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-13 08:33:18,074][INFO ][http                     ] [U-Man] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[2016-06-13 08:33:18,074][INFO ][node                     ] [U-Man] started
[2016-06-13 08:33:18,418][INFO ][gateway                  ] [U-Man] recovered [3] indices into cluster_state
[2016-06-13 08:33:18,898][INFO ][cluster.routing.allocation] [U-Man] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).

All 21 comments

Opening Kibana in an incognito window works fine.
So it's probably something in the cache...

That being said, I tried a hard reload with CMD+SHIFT+R, but it did not fix the issue.

Clearing the browser history manually fixed it. Not sure what happened, though.

Feel free to close this if we can't really fix it...
Maybe we could add some text next to "clear your session", something like "or clear your browser cache".

Hmm... this is really strange. We even cache bust based on the current version of Kibana, so I can't even blame an upgrade for that.

Did you install or remove any plugins?

Also, the JS bundles are sent with cache-control: no-cache, so they shouldn't be the issue. It must be something else that incognito discards, then; a cookie or localStorage, perhaps? Was there a longer stack trace for any of those errors?
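For anyone debugging this, a quick way to verify what the bundle responses actually carry is to inspect the headers. A minimal sketch, assuming Node 18+ (for the global fetch) and a local Kibana at http://0.0.0.0:5601; the bundle path comes from the error above:

```ts
// check-headers.ts: inspect the caching headers on a Kibana bundle.
// Assumes Node 18+ (global fetch); run with e.g. `npx tsx check-headers.ts`.
const url = "http://0.0.0.0:5601/bundles/commons.bundle.js";

const res = await fetch(url, { method: "HEAD" });
console.log("status:       ", res.status);
console.log("cache-control:", res.headers.get("cache-control"));
console.log("etag:         ", res.headers.get("etag"));
```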

That could well be the reason.

I'm switching all the time from one demo to another:

  • one Kibana server running locally with no plugins
  • the other with X-Pack installed

I found that if I open 127.0.0.1 instead of 0.0.0.0, I don't see this issue.

Something is probably stored in the browser associated with the 0.0.0.0 origin. When I open the instance without any plugins, it fails.
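If that is what's happening, the per-origin state can be cleared without wiping the whole browser history. A hedged, illustrative snippet to run in the DevTools console on the affected origin (e.g. http://0.0.0.0:5601):

```js
// Run in the browser's DevTools console on the affected origin.
// Clears localStorage, sessionStorage, and non-HttpOnly cookies, then
// reloads: roughly the clean slate an incognito window starts with.
localStorage.clear();
sessionStorage.clear();
for (const cookie of document.cookie.split(";")) {
  const name = cookie.split("=")[0].trim();
  document.cookie = name + "=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
}
location.reload();
```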

Assuming your demos are using the same version of Kibana, then that's almost certainly what is causing this issue. We cache bust based on the build number of Kibana, but we don't take into account which plugins the user has installed.

In the long term, our moving away from the optimizer in releases will make this a non-issue. In the short term, Kibana should be updated to create a hash of the bundles and use that for cache busting instead of the build number.
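To make the short-term idea concrete, here is a minimal sketch (not Kibana's actual code; the path and helper name are hypothetical) of deriving the cache-bust token from the bundle contents, so that installing or removing a plugin, which changes the optimized bundle, also changes the URL:

```ts
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hypothetical sketch: derive the cache-bust token from the bytes that
// are actually served, so any change to a bundle (for example a plugin
// being added or removed) produces a new URL and busts stale caches.
function bundleCacheToken(bundlePath: string): string {
  const contents = readFileSync(bundlePath);
  return createHash("sha1").update(contents).digest("hex").slice(0, 12);
}

// Yields e.g. "/bundles/commons.bundle.js?v=3f2a9c81d4e0" instead of "?v=12439".
const token = bundleCacheToken("optimize/bundles/commons.bundle.js");
console.log(`/bundles/commons.bundle.js?v=${token}`);
```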

> Assuming your demos are using the same version of Kibana

Yes they are. That's definitely the issue. Thanks!

@epixa @dadoonet I don't see how that could be the cause considering the JS bundles never get cached in the browser.

You have more faith in browsers than I do.

I found this can happen if you have, say, Kibana v1.5 talking to Elasticsearch 1.4.

I don't think I've hit this one recently. Maybe we can close it, then?

@dadoonet Works for me. Thanks for updating this.

This has come up a few more times since. It's usually related to an error in the bundling process. I'm going to reopen this to see if I can gather more details and options for fixing it.

We have experienced the same issue with Kibana 6.5.3.

Same here; we still experience these issues with Kibana 6.4. Is there any fix for this?

Same on Kibana 7.2.

Also _still_ experiencing this issue in 2020 across multiple ES versions; we've been seeing this error from 7.2 all the way to 7.6, and we can't consistently reproduce it. Running multiple Kibana instances is crucial for HA in our environment.

I have this issue as well. Any way we can help debug?

Same on 7.8.0

Edit: We run multiple Kibana instances on the same domain in subdirectories.
mon.example.com/logging1/ --> Kibana 7.8.0
mon.example.com/logging2/ --> Kibana 7.6.1
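For context, running several instances under one domain usually means fronting each with a reverse proxy and setting Kibana's base path so asset URLs resolve under the subdirectory. A hedged sketch of the relevant kibana.yml lines (paths are the commenter's; keys are from the 7.x config):

```yaml
# kibana.yml for the instance served at mon.example.com/logging1/
server.basePath: "/logging1"
server.rewriteBasePath: true
```

Note that both instances still share a single browser origin (mon.example.com), so cookies and any localStorage written by one Kibana are visible to the other, which fits the stale-state theory above.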

Now I'm experiencing the same issue on 7.8.0

+1. Getting the same on 7.8.0.
