Kibana: ERR_TOO_MANY_REDIRECTS

Created on 27 Nov 2016  ·  43 Comments  ·  Source: elastic/kibana

Kibana version: 5

Elasticsearch version: 5

Server OS version: Debian

Browser version: wget

Browser OS version:

Original install method (e.g. download page, yum, from source, etc.):

Description of the problem including expected versus actual behavior:

Hi, if I try to connect to Kibana at localhost:5601, with wget or from a browser, I get the same problem:

Provide logs and/or server output (if relevant):
matarrese@instance-2-elasticsearch:~$ wget http://localhost:5601/
--2016-11-27 21:21:43-- http://localhost:5601/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5601... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5601... connected.
HTTP request sent, awaiting response... 302 Found
Location: /login?next=%2F [following]
--2016-11-27 21:21:43-- http://localhost:5601/login?next=%2F
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found
Location: / [following]
--2016-11-27 21:21:43-- http://localhost:5601/
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found
Location: /login?next=%2F [following]
--2016-11-27 21:21:43-- http://localhost:5601/login?next=%2F
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found
Location: / [following]
[... the / ↔ /login?next=%2F redirect cycle repeats ...]
20 redirections exceeded.
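wget gives up after 20 redirects by default, which is exactly what the transcript shows. The bounce between / and /login?next=%2F can be modeled with a small sketch (illustrative only; the route table below is made up for the example, it is not Kibana code):

```javascript
// Hypothetical model of the redirect loop seen in the wget transcript above.
// Each entry maps a path to the Location header the server answers with.
const redirects = {
  "/": "/login?next=%2F",  // security plugin: not authenticated -> go to login
  "/login?next=%2F": "/",  // broken auth state: login bounces straight back
};

// Follow redirects until a page resolves or the client gives up,
// mirroring wget's default limit of 20 redirections.
function follow(path, maxRedirects = 20) {
  let hops = 0;
  while (redirects[path] !== undefined) {
    if (hops >= maxRedirects) {
      return { ok: false, error: `${maxRedirects} redirections exceeded.` };
    }
    path = redirects[path];
    hops += 1;
  }
  return { ok: true, path, hops };
}

console.log(follow("/").error); // "20 redirections exceeded."
```

Any cycle in the route table triggers the same client-side abort, which is why the browser reports ERR_TOO_MANY_REDIRECTS for the same loop.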

Security bug

Most helpful comment

Hello,
my issue is fixed.
The issue appeared after an x-pack installation: I changed the default password for the elasticsearch user (using the Kibana x-pack module), and after that I got ERR_TOO_MANY_REDIRECTS.
You need to update the kibana.yml config file (elasticsearch.username and elasticsearch.password) to match the new credentials.

Why is no error message shown when the credentials are incorrect?

All 43 comments

The problem went away after uninstalling x-pack.
Am I missing any configuration?

It looks like x-pack was not properly installed on Elasticsearch either.
After installing it there, everything works fine.

Thanks for submitting. Yes, you are right, X-Pack needs to be installed on both Kibana and ES.

Hi,
I cannot solve this: exactly the same issue, but my setup is:

nginx-->kibana(over docker swarm)-->elasticsearch (over docker swarm)

here is my vhost:

server {
    server_name logs.mydomain.com;
    listen 80;

    access_log /logs/logging-kibana.access.log;
    error_log /logs/logging-kibana.error.log;

    location / {
        proxy_pass http://loggingkibana:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I've already tried removing and installing x-pack on both containers, but since I am not restarting them, I am not sure the config is being applied correctly.

Btw, those are the docker images that I am using:

docker.elastic.co/kibana/kibana:5.1.1
elasticsearch

The same thing happens when installing the latest version on Debian from the Elastic repos.

This is the Kibana log (the same request/response cycle repeats):

GET / HTTP/1.1
Host: 63.251.150.176:5601
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/55.0.2883.87 Chrome/55.0.2883.87 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8,es-AR;q=0.6,es;q=0.4

HTTP/1.1 302 Found
location: /login?next=%2F
kbn-name: kibana
kbn-version: 5.1.2
kbn-xpack-sig: d25e14f475188898cbf3d8c390e464cc
cache-control: no-cache
content-length: 0
vary: accept-encoding
Date: Mon, 23 Jan 2017 16:48:10 GMT
Connection: keep-alive

GET /login?next=%2F HTTP/1.1
(same request headers as above)

HTTP/1.1 302 Found
location: /
(same response headers as above)

Having the same issue. I have x-pack installed on both Kibana and ES. If I remove the x-pack it works fine.

I found this thread while having this issue and then I remembered the fix (for my system, anyway). I am running all of this in Docker containers so, again, this may be completely irrelevant, but I needed to increase
max_map_count on my Docker host:
sudo sysctl -w vm.max_map_count=262144
Hope this helps someone.

@matarrese or anyone else
1. Is Elasticsearch actually up and running when this happens?
2. Is x-pack also installed there?
3. Can you start Kibana with the --verbose flag and post the log here while issuing that request?

Was running into something similar. It turned out our problem was that the configuration file was read from /etc/kibana/kibana.yml, but we had put our config file at /opt/kibana/config/kibana.yml.

Thanks @bbenning for the hint. I think I got it solved on our server. Kibana is running in Docker behind an nginx reverse proxy in this scenario:

  • First login with the default credentials (elastic/changeme) works.
  • Change the password and log out.
  • The login screen hangs in a redirect loop.

My "solution" was to mount the config file

volumes:
      - "./kibana.yml:/usr/share/kibana/config/kibana.yml"

and to comment out the default user credentials:

#elasticsearch.username: elastic
#elasticsearch.password: changeme

And the redirect loop was gone.

@Nebel54 can you please share your nginx config and your kibana.yml if that is ok with you? Thanks!
cc @LeeDr ^

Hello,
I have the same problem after x-pack installation.

My setup is:
apache_proxy > kibana > elasticsearch

Apache and Kibana are on the same server, but Elasticsearch is on another server.
Kibana and Elasticsearch are both on 5.4.1.

After the x-pack installation, I got an ERR_TOO_MANY_REDIRECTS error message.

Here, you can find my apache config:

<VirtualHost *:5602>
    ErrorLog ${APACHE_LOG_DIR}/kibana-error.log
    CustomLog ${APACHE_LOG_DIR}/kibana-access.log combined

    ProxyPreserveHost On
    ProxyRequests Off
    RequestHeader unset Authorization

    ProxyPass / http://127.0.0.1:5601/
    ProxyPassReverse / http://127.0.0.1:5601/

</VirtualHost>

My /etc/kibana/kibana.yml:

# (all other settings are the stock kibana.yml defaults, commented out;
# the non-default and relevant lines are shown below)

elasticsearch.url: "http://second_server:9200"

kibana.index: ".kibana-4"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the
# Kibana index at startup.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

Do you know where the problem is?

Hello,
my issue is fixed.
The issue appeared after an x-pack installation: I changed the default password for the elasticsearch user (using the Kibana x-pack module), and after that I got ERR_TOO_MANY_REDIRECTS.
You need to update the kibana.yml config file (elasticsearch.username and elasticsearch.password) to match the new credentials.

Why is no error message shown when the credentials are incorrect?

This has been resolved as of 5.6.0

Just hit this with 6.1.1.

Too many redirects occurred trying to open “https://kibana.local:5601”. This might occur if you open a page that is redirected to open another page which then is redirected to open the original page.
It was broken in both Chrome 63.0.3239.84 (Official Build, 64-bit) and Safari 11.0.1 (13604.3.5).

Incognito worked.

Had to remove the cookie / SID from Chrome. Haven't fixed Safari, because it appears I can't clear cookies for just the specific domain.

And on 6.2.2:
[screenshot: ERR_TOO_MANY_REDIRECTS error page in the browser]

I hit this issue nearly once a week now; it would be nice if this could get some attention. Deleting the SID cookie clears it up immediately. I feel bad for any user who has to discover this on their own.

Same, this issue is very common for us, and clearing the SID cookie solves the issue. Not sure if it's relevant, but we use both a SAML realm hooked up to Okta (for engineers) and a native realm (for services) to authenticate.

For the SAML realm, we don't have Single Logout configured, and I wonder if that's maybe causing it, but I haven't tried it yet. That might be the next thing I try.

I'm seeing the same thing here with SAML (Okta) setup with x-pack - every week or two, I'll need to delete the cookie for the kibana instance and then it works again. Anyone aware of a more permanent fix?

Any plans on fixing this issue? It happens 100% of the time when I sign in to Kibana via Okta, and it's frustrating to have to clear the cookies and open a new tab.

[out-of-office auto-reply, translated from German:]
Dear customers, friends, and spammers,
we are currently on vacation.
Starting Monday, 7 Jan 2019, we will be back for you as usual.

We wish you a good start into the new year!

Best regards,
Oliver Köhler

@elastic/kibana-security

Anyone experiencing this with SAML, if you could share your kibana logs with logging.verbose: true when this occurs, it'll help us diagnose the issue further.

Steps to reproduce:

  1. Sign into Kibana with Okta
  2. Close the tab
  3. Sign into Kibana with Okta

It seems like the Kibana authentication logic cannot handle the case when there is already an existing session for a user.

Hey @mgartner, would you be able to set logging.verbose: true in your kibana.yml and provide us logs from when this is occurring?

@kobelb we're experiencing this with SAML running on v6.3.1 (I work with @mgartner). I'll work on getting some logs with logging.verbose: true for you by early next week. I also tried debugging this issue a few months ago, and here are some things I found.

On the Elasticsearch side of things: the documents used to store sessions in the .security index are marked as inactive and later deleted by the ExpiredTokenRemover. When a request from Kibana reaches ES after the document has been removed, ES is unable to find the document associated with the sid cookie and ends up erroring out (internal server error) on this line. Kibana therefore receives an HTTP 500 response, and because it's a failed HTTP 500 request (and not HTTP 401), it doesn't clear the cookie here.

The issue above results in Kibana not clearing an invalid sid cookie. Because you still have a sid cookie (although it's invalid), any Kibana redirect to /login will redirect back to / because a sid cookie is present. Authentication at / then fails, so you're taken back to /login, where the cycle starts again.
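A minimal sketch of that decision (hypothetical names and structure, not Kibana's actual code): the cookie is only dropped on a 401, so an unexpected 500 leaves the stale sid in place and the loop begins.

```javascript
// Hypothetical sketch of the cookie-handling behavior described above.
// The session cookie is cleared only when Elasticsearch says the
// credentials are bad (401); an unexpected 500 leaves the stale sid alone.
function handleAuthFailure(statusCode, session) {
  if (statusCode === 401) {
    session.sid = null;              // invalid credentials: drop the session
    return { redirectTo: "/login", staleCookie: false };
  }
  // A 500 (e.g. "token document is missing") falls through here: the stale
  // sid survives, so /login bounces back to / and the redirect loop begins.
  return { redirectTo: "/login", staleCookie: session.sid !== null };
}

const session = { sid: "expired-token" };
console.log(handleAuthFailure(500, session).staleCookie); // true
console.log(session.sid); // "expired-token" -- the cookie was never cleared
```

This also matches the observed workaround: manually deleting the sid cookie puts the user in the "no session" state that the 401 path would otherwise have produced.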

@kobelb what's a good email to send you the logs?

@dmlittle can you upload them as a gist?

@kobelb here are the Kibana logs from when this issue occurred. I've included 2 minutes before and after the issue happened.

https://gist.githubusercontent.com/dmlittle/e07fe1afcc3c583ae31d2a89c080490d/raw/720a96597974fed78dc445560560ecaed9278363/logs.txt

@kobelb any updates?

@dmlittle would you mind confirming that if the user clears their cookies and tries to log in again, they're able to log in successfully?

I'm seeing the following error in your Kibana logs:

[illegal_state_exception] token document is missing and must be present

This is occurring because we're trying to use the access token stored in the user's session (persisted using a cookie) to authenticate the user, and we're getting back an unexpected error, so we aren't proceeding to use the SAML payload provided by the IdP to authenticate the end-user.

@kobelb yes, clearing the sid cookie allows you to log in again (since you're no longer redirected due to the presence of the sid cookie).

Thanks for the confirmation @dmlittle, I'll work on resolving the issue that you're seeing.

@dmlittle there's already an issue tracking the underlying problem you're seeing: https://github.com/elastic/kibana/issues/22905

Prior to Kibana 6.6, it can manifest as this ERR_TOO_MANY_REDIRECTS issue, but in 6.6+ it will display an internal server error message to the user.

@kobelb I just hit this with 7.0.0-rc2-37e4e7a4. Single kibana going to single elasticsearch, native realm only. I do set the xpack.security.encryptionKey to the same value when moving between versions. I had 6.7.0 @ https://kibana.local, then removed that and started a fresh 7.0.0-rc2 (using the same settings). Upon loading, I encountered the ERR_TOO_MANY_REDIRECTS, and had to clear my SID cookie.

@jpcarey if you could share your Kibana logs with logging.verbose: true and a HAR, it'll help us diagnose further.

If you run into this, check your server.basePath settings.
As an example, I found that /etc/kibana/kibana.yml with

server.basePath: /foo
server.rewriteBasePath: true

led to

curl -L -k https://localhost:5601
<404 Not Found>

but …

curl -L -k https://localhost:5601/foo
< Kibana :D >

server.basePath is necessary if you are reaching Kibana behind a proxy path.

I had nginx redirecting /foo to https://kibana-server.com:5601/app/kibana
which was working fine with the following nginx config back in Kibana 6:

location /foo {
    rewrite ^/foo/(.*)$ /$1 break;
    proxy_pass https://kibana-server.com:5601/app/kibana;
}

server.rewriteBasePath defaults to true in 7 (which, in fairness, is well documented and phased in slowly over two major versions!) so that setup "broke". So now I do:

location /foo {
    proxy_pass https://kibana-server.com:5601;
}

in nginx

So basically, Kibana is now doing the rewriting for nginx, which simplifies the nginx config quite a bit.

ERR_TOO_MANY_REDIRECTS happened when I tried to set server.rewriteBasePath: false (like it used to be in 6), which is why I mention all this. I think it also happened when I had rewrite set to true while the old nginx config was still rewriting as well.
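The interplay described above can be sketched as a pure function (my simplification of the documented basePath behavior, not Kibana source):

```javascript
// Simplified model of how server.basePath and server.rewriteBasePath
// interact (an illustration of the behavior discussed above, not Kibana code).
function resolveRequest(path, { basePath = "", rewriteBasePath = false } = {}) {
  if (!basePath) return { handled: true, internalPath: path };
  if (rewriteBasePath) {
    // Kibana itself strips the prefix, so requests must arrive with it.
    if (path === basePath || path.startsWith(basePath + "/")) {
      const stripped = path.slice(basePath.length) || "/";
      return { handled: true, internalPath: stripped };
    }
    return { handled: false, status: 404 }; // e.g. curl https://localhost:5601/
  }
  // rewriteBasePath: false -- the proxy is expected to strip the prefix
  // before forwarding, so Kibana only ever sees the bare path.
  return { handled: true, internalPath: path };
}

console.log(resolveRequest("/", { basePath: "/foo", rewriteBasePath: true }).status);          // 404
console.log(resolveRequest("/foo", { basePath: "/foo", rewriteBasePath: true }).internalPath); // "/"
```

Under this model, double-stripping (both nginx rewriting and rewriteBasePath: true) or no stripping at all (rewriteBasePath: false with no proxy rewrite) leaves the prefix handling inconsistent, which is consistent with the 404s and redirect loops reported above.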

This issue hasn't had any activity in quite some time, so I'm looking for feedback: has anyone encountered this in a _recent_ version of Kibana, say anything since 7.5.0? We've made a number of improvements to session handling in the 7.x timeframe, so it's possible this is no longer a problem.

@legrego we're still on 7.3.1 but the issue seems to be resolved on this version

I'm going to consider this resolved since we've had no recent reports or feedback since we've improved the authentication flow. If anyone is still experiencing this, please do not hesitate to open a new issue with your specific error scenario. Thanks all for your patience and help as we worked through this!

I'm suddenly experiencing this issue with 7.4.2 :(

Restarting fixes it, but the error comes back after a while.


wget http://localhost:5601/
--2020-07-29 11:55:10--  http://localhost:5601/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5601... connected.
HTTP request sent, awaiting response... 302 Found
Location: /app/kibana [following]
--2020-07-29 11:55:10--  http://localhost:5601/app/kibana
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found
Location: / [following]
--2020-07-29 11:55:11--  http://localhost:5601/
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found
Location: /app/kibana [following]
--2020-07-29 11:55:11--  http://localhost:5601/app/kibana
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found
Location: / [following]
--2020-07-29 11:55:11--  http://localhost:5601/
Reusing existing connection to localhost:5601.
HTTP request sent, awaiting response... 302 Found

Is there a configuration fix for this?

# (stock kibana.yml comments and commented-out defaults omitted;
# the non-default settings are shown below)

server.port: 5601
server.host: "0.0.0.0"

#server.basePath: "/kibana"
server.rewriteBasePath: false

elasticsearch.requestTimeout: 120000
elasticsearch.requestHeadersWhitelist: [ authorization, "X-Forwarded-User", "x-se-fire-department-all" ]

# For X-Pack users: you may only leave monitoring on.
# Don't add this if X-Pack is not installed at all
xpack.security.enabled: false

#elasticsearch.logQueries: true
#readonlyrest_kbn.whitelistedPaths: [".*/api/status$"]

kibana.defaultAppId: 'dashboard'
server.defaultRoute: '/app/kibana#/dashboards?_g=()'