Kibana: CSV reports generated through Watcher use the wrong CSV separator in 50% of executions whenever a second Kibana instance is running

Created on 21 Mar 2019 · 5 comments · Source: elastic/kibana

Kibana version:
6.6.2
Elasticsearch version:
6.6.2
Server OS version:
Centos
Browser version:
Chrome Version 73.0.3683.75 (Official Build) (64-bit)
Browser OS version:
Mac OS 10.14.3
Original install method (e.g. download page, yum, from source, etc.):
docker or download page
Describe the bug:
Running a report through Watcher uses the wrong CSV separator in 50% of executions when two Kibana instances are connected to the Elasticsearch cluster with different kibana.index settings.

Steps to reproduce:
testwatchercsvseparator.zip

1- On a machine with Docker installed, execute "./runAll.sh" to start an Elasticsearch cluster with a trial license, kibana0 and kibana1 using different kibana.index settings, and a fake SMTP server. Only kibana0 is used here; it is mapped to port 5601 on localhost/127.0.0.1, while kibana1 is mapped to 5602 and won't be used.
2- Go to http://localhost:5601 and connect with elastic:changeme
3- Go to Management / Advanced Settings, change the CSV separator from "," to ";" and enable "Store URLs in session storage"
4- Add an index pattern ".monitoring-es" with time field "timestamp"
5- Go to Discover and add field "cluster_name", save search as "test"
6- Click Share / CSV Report / Copy POST URL (Edit: saving the report as CSV without using Watcher also reproduces the wrong separator in 50% of executions)
7- Go to Management / Watcher / Create advanced watch and paste URL with this watcher definition:

{
  "trigger" : {
    "schedule": {
      "interval": "1m"
    }
  },
  "actions" : {
    "email_admin" : { 
      "email": {
        "to": "'Recipient Name <[email protected]>'",
        "subject": "testing emailing pdf",
        "attachments" : {
          "report.pdf" : {
            "reporting" : {
              "url": "http://kibana0:5601/<<-replace URL keeping kibana0 and not localhost>", 
              "retries":1, 
              "interval":"30s", 
              "auth":{ 
                "basic":{
                  "username":"elastic",
                  "password":"changeme"
                }
              }
            }
          }
        }
      }
    }
  }
}

8- Go to Management / Reporting after 5 minutes and observe:
The 1st, 3rd, ... reports have CSV separator ";"
The 2nd, 4th, ... reports have CSV separator "," (unexpected)

9- Run "docker-compose stop kibana1" -> now all subsequent report executions have CSV separator ";"

10- Run "docker-compose start kibana1" -> now half the reports executed through Watcher have the wrong CSV separator

Expected behavior:
All Watcher CSV reports should use the same CSV separator, regardless of another Kibana instance running on a separate kibana.index.
Any additional context:
Side note: if you comment out "XPACK_REPORTING_ENCRYPTIONKEY" in docker-compose and run "docker-compose up -d kibana0 kibana1", most report executions (not all) now fail with
"Error: Failed to decrypt report job data. Please ensure that xpack.reporting.encryptionKey is set and re-generate this report." even though only kibana0 is used for all report executions.

Reporting bug

All 5 comments

I have this issue, and would greatly appreciate the fix!

By the way, another way to reproduce the issue, without using Watcher:

  • go to Discover app
  • open a saved search
  • click on Share > CSV Reports > Generate CSV
    => Half of the time, you get the wrong CSV format ("," instead of ";" as the column separator).
    To be sure of the result, I run 10 downloads sequentially and check that all of them use ";" as the column separator.
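That manual check can be scripted. A small sketch (not part of the original report) that inspects the header row of each downloaded CSV with Python's standard csv.Sniffer, using the two field names from the repro's saved search:

```python
import csv

def detect_separator(header_line: str) -> str:
    """Detect whether a CSV report header uses ';' or ',' as its delimiter."""
    return csv.Sniffer().sniff(header_line, delimiters=";,").delimiter

# Header rows as they appear in a correct and in a wrong report:
print(detect_separator("cluster_name;timestamp"))  # ';' -> correct report
print(detect_separator("cluster_name,timestamp"))  # ',' -> wrong report
```

Running this over the first line of each of the 10 downloaded files makes the 50% pattern easy to tally.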

@jguay
Another detail: if my second Kibana instance has the same kibana.index setting as the first one (high-availability use case), the issue does not reproduce.

This problem can be explained by looking at a related problem that can arise.

If there are 2 instances of Kibana in the cluster, each one using a different kibana.index, each instance sees a different set of saved objects and advanced settings. It is dangerous for both instances of Kibana to use the same xpack.reporting.index setting: it allows an instance to pick up jobs that were queued by the other instance, hence the 50% rate.

The typical problem here is usually worse: if a Kibana instance picks up a reporting job queued by another instance, the job metadata will reference saved objects that only the other instance knows about. This causes Reporting to open a page with a "saved object not found" error.

On the other hand, if the ID values happen to match across the 2 different kibana.index indices in ES (which would be the case if the saved objects were copied from one index to the other), you won't get the "saved object not found" error, but the advanced settings could still differ.

The documentation for Reporting makes it pretty clear not to set up this dangerous configuration: https://www.elastic.co/guide/en/kibana/current/_reporting_indices_for_multiple_kibana_workspaces.html
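Per that documentation, the safe configuration pairs each distinct kibana.index with its own xpack.reporting.index, so neither instance can pick up the other's queued jobs. A sketch with illustrative placeholder values:

```yaml
# kibana.yml for the kibana0 instance (illustrative values)
kibana.index: ".kibana0"
xpack.reporting.index: ".reporting-kibana0"
```

```yaml
# kibana.yml for the kibana1 instance
kibana.index: ".kibana1"
xpack.reporting.index: ".reporting-kibana1"
```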

Thanks for the explanation and the documentation link

