Kibana: Painless script error in elasticsearch log related to xpack.default_admin_email?

Created on 2 May 2018 · 14 comments · Source: elastic/kibana

Kibana version: 6.3.0-SNAPSHOT

Elasticsearch version: 6.3.0-SNAPSHOT

Server OS version: Windows 2012 Server

Browser version: IE 11

Browser OS version: Windows 10

Original install method (e.g. download page, yum, from source, etc.): zip files

Description of the problem including expected versus actual behavior:
This might be totally harmless? Maybe it's user error on my configuration?

I searched my Kibana and Elasticsearch logs for errors and found this in elasticsearch.log:

[2018-05-02T17:33:27,223][ERROR][o.e.x.w.t.s.ExecutableScriptTransform] [XCD5-3x] failed to execute [script] transform for [XQLeIk9fQV-Xj4bFglIZlw_elasticsearch_cluster_status_5e55d2df-60d8-4ed3-8cd2-d666147dae88-2018-05-02T17:33:27.194Z]
org.elasticsearch.script.ScriptException: runtime error
    at org.elasticsearch.painless.PainlessScript.convertToScriptException(PainlessScript.java:94) ~[?:?]
    at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:1070) ~[?:?]
    at org.elasticsearch.painless.ScriptImpl.run(ScriptImpl.java:105) ~[?:?]

I don't know the exact cause or impact of this error.
In my Kibana Watches page I have one watch firing, X-Pack Monitoring: Cluster Status, which appears to be the source of the error above (elasticsearch_cluster_status).

In my Elasticsearch config I have:

network.host: 0.0.0.0
xpack.license.self_generated.type: trial
discovery.zen.minimum_master_nodes: 0
xpack.notification.email.account:
  gmail_account:
    profile: gmail
    smtp:
      auth: true
      starttls.enable: true
      host: <my host here>
      port: 587
      user: <user removed from here>
      password: <secret password here>
xpack.security.transport.ssl.enabled: true

xpack.ssl.certificate: elasticsearch.crt
xpack.ssl.key: elasticsearch.key
xpack.ssl.certificate_authorities: ["ca.crt"]
xpack.security.http.ssl.enabled: true
path.logs: C:\windowsInstalls\6.3.0\elasticsearch\logs\

My kibana.yml contains:

elasticsearch.url: https://elastic:changeit@localhost:9200
logging.verbose: true
server.ssl.enabled: true
elasticsearch.username: kibana
elasticsearch.password: mypasswordhere
xpack.reporting.encryptionKey: ThisIsReportingEncryptionKey1234
xpack.security.encryptionKey: ThisIsSecurityEncryptionKey12345
xpack.reporting.queue.timeout: 60000
xpack.reporting.capture.browser.type: chromium
server.ssl.certificate: ./config/kibana.crt
server.ssl.key: ./config/kibana.key
elasticsearch.ssl.certificateAuthorities: ["./config/ca.crt"]
server.host: 10.0.2.15

I did run an automated test that created a watch which successfully emailed me a PDF visualization report using the xpack.notification.email.account settings, so I know those work.

Steps to reproduce:

  1. set up single node Elasticsearch and Kibana default distributions
  2. Enable Monitoring via the Kibana UI
  3. check elasticsearch.log

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):
See elasticsearch.log error above
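
The log line above truncates the failing script, so it can help to pull the full failure record from the watcher history. This is only a sketch, assuming the default .watcher-history-* indices and the watch ID that appears in the log line above:

GET .watcher-history-*/_search
{
  "size": 1,
  "sort": [ { "trigger_event.triggered_time": { "order": "desc" } } ],
  "query": {
    "term": { "watch_id": "XQLeIk9fQV-Xj4bFglIZlw_elasticsearch_cluster_status" }
  }
}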

Watcher Monitoring bug

Most helpful comment

Thanks @kierenj. It looks like there are 2 watches in there and both were created in 6.2. So next, let's delete the watch causing the error and have X-Pack Monitoring recreate it. Here are the steps:

  1. Delete the offending watch:

    DELETE _xpack/watcher/watch/bg72CiWuSk6ntKxAZWimmw_elasticsearch_cluster_status
    
  2. Create a temporary local exporter to disable Cluster Alerts:

    PUT _cluster/settings
    {
     "transient": {
       "xpack.monitoring.exporters.my_temp_local": {
         "type": "local",
         "cluster_alerts.management.enabled": false
       }
     }
    }
    
  3. Delete the temporary local exporter. This will re-enable Cluster Alerts and should recreate the watch:

    PUT _cluster/settings
    {
     "transient": {
       "xpack.monitoring.exporters.my_temp_local.*": null
     }
    }
    
  4. Run the same query as before again to check if the watch was recreated:

    GET .watches/_search?q=metadata.name:cluster&filter_path=hits.hits._id,hits.hits._source.metadata
    
  5. Check that the error goes away from the ES logs.
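
If you don't want to wait for the next scheduled run, you can also trigger the recreated watch once and inspect the result inline. A sketch, assuming the recreated watch keeps the same <cluster_uuid>_elasticsearch_cluster_status ID as before:

    POST _xpack/watcher/watch/bg72CiWuSk6ntKxAZWimmw_elasticsearch_cluster_status/_execute

In the response, watch_record.result.transform.status should now be "success" rather than "failure".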

All 14 comments

I'm also getting the same error. Below is the watch that was executed; the error occurs on the "check" input, where hits is zero. That input has a filter on now-2m, which is probably why there is no matching entry. Irrespective of this, the Painless script could be made more defensive, as the logs keep filling up with this error.
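
For reference, here is a minimal sketch of the kind of guard the transform script could add before indexing into the hits array. It is a hypothetical rewrite of the failing lines, not the shipped watch script:

def checkHits = ctx.payload.check.hits;
// Only read hits[0] when the "check" search actually returned a document.
def state = checkHits.total > 0
    ? checkHits.hits[0]._source.cluster_state.status
    : null;
if (state == null) {
    // Nothing matched the now-2m window; leave the payload untouched.
    return ctx.payload;
}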

The settings are very similar to the original bug description. The relevant content from the watch history is below for easier reference.

{
"_index": ".watcher-history-7-2018.05.11",
"_type": "doc",
"_id": "KS6N5xFvSry7vbpLABnqMg_elasticsearch_cluster_status_ec3ea4a5-aa76-4354-81bf-39543cb3c3df-2018-05-11T00:00:54.780Z",
"_score": 1,
"_source": {
"watch_id": "KS6N5xFvSry7vbpLABnqMg_elasticsearch_cluster_status",
"node": "WvUx9MxFQvi2aTLD4VNPig",
"state": "executed",
"status": {
"state": {
"active": true,
"timestamp": "2018-04-11T19:43:36.731Z"
},
"last_checked": "2018-05-11T00:00:54.780Z",
"last_met_condition": "2018-05-11T00:00:54.780Z",
"actions": {
"send_email_to_admin": {
"ack": {
"timestamp": "2018-04-11T19:43:36.731Z",
"state": "awaits_successful_execution"
}
},
"add_to_alerts_index": {
"ack": {
"timestamp": "2018-04-11T19:45:37.898Z",
"state": "ackable"
},
"last_execution": {
"timestamp": "2018-04-11T19:46:38.423Z",
"successful": true
},
"last_successful_execution": {
"timestamp": "2018-04-11T19:46:38.423Z",
"successful": true
}
}
},
"execution_state": "executed",
"version": -1
},
"trigger_event": {
"type": "schedule",
"triggered_time": "2018-05-11T00:00:54.780Z",
"schedule": {
"scheduled_time": "2018-05-11T00:00:54.419Z"
}
},
"input": {
"chain": {
"inputs": [
{
"check": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
".monitoring-es-*"
],
"types": [],
"body": {
"size": 1,
"sort": [
{
"timestamp": {
"order": "desc"
}
}
],
"_source": [
"cluster_state.status"
],
"query": {
"bool": {
"filter": [
{
"term": {
"cluster_uuid": "{{ctx.metadata.xpack.cluster_uuid}}"
}
},
{
"bool": {
"should": [
{
"term": {
"_type": "cluster_state"
}
},
{
"term": {
"type": "cluster_stats"
}
}
]
}
},
{
"range": {
"timestamp": {
"gte": "now-2m"
}
}
}
]
}
}
}
}
}
}
},
{
"alert": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
".monitoring-alerts-6"
],
"types": [],
"body": {
"size": 1,
"terminate_after": 1,
"query": {
"bool": {
"filter": {
"term": {
"_id": "{{ctx.watch_id}}"
}
}
}
},
"sort": [
{
"timestamp": {
"order": "desc"
}
}
]
}
}
}
}
},
{
"kibana_settings": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
".monitoring-kibana-6-*"
],
"types": [],
"body": {
"size": 1,
"query": {
"bool": {
"filter": {
"term": {
"type": "kibana_settings"
}
}
}
},
"sort": [
{
"timestamp": {
"order": "desc"
}
}
]
}
}
}
}
}
]
}
},
"condition": {
"script": {
"source": "ctx.vars.fails_check = ctx.payload.check.hits.total != 0 && ctx.payload.check.hits.hits[0]._source.cluster_state.status != 'green';ctx.vars.not_resolved = ctx.payload.alert.hits.total == 1 && ctx.payload.alert.hits.hits[0]._source.resolved_timestamp == null;return ctx.vars.fails_check || ctx.vars.not_resolved",
"lang": "painless"
}
},
"metadata": {
"name": "X-Pack Monitoring: Cluster Status (KS6N5xFvSry7vbpLABnqMg)",
"xpack": {
"severity": 2100,
"cluster_uuid": "KS6N5xFvSry7vbpLABnqMg",
"version_created": 6020099,
"watch": "elasticsearch_cluster_status",
"link": "elasticsearch/indices",
"alert_index": ".monitoring-alerts-6",
"type": "monitoring"
}
},
"result": {
"execution_time": "2018-05-11T00:00:54.780Z",
"execution_duration": 35,
"input": {
"type": "chain",
"status": "success",
"payload": {
"alert": {
"_shards": {
"total": 1,
"failed": 0,
"successful": 1,
"skipped": 0
},
"hits": {
"hits": [
{
"_index": ".monitoring-alerts-6",
"_type": "doc",
"_source": {
"metadata": {
"severity": 2100,
"cluster_uuid": "KS6N5xFvSry7vbpLABnqMg",
"version_created": 6020099,
"watch": "elasticsearch_cluster_status",
"link": "elasticsearch/indices",
"alert_index": ".monitoring-alerts-6",
"type": "monitoring"
},
"update_timestamp": "2018-04-11T19:46:38.423Z",
"prefix": "Elasticsearch cluster status is red.",
"message": "Allocate missing primary shards and replica shards.",
"timestamp": "2018-04-11T19:45:37.898Z"
},
"_id": "KS6N5xFvSry7vbpLABnqMg_elasticsearch_cluster_status",
"sort": [
1523475937898
],
"_score": null
}
],
"total": 1,
"max_score": null
},
"took": 0,
"terminated_early": true,
"timed_out": false
},
"kibana_settings": {
"_shards": {
"total": 8,
"failed": 0,
"successful": 8,
"skipped": 0
},
"hits": {
"hits": [
{
"_index": ".monitoring-kibana-6-2018.05.07",
"_type": "doc",
"_source": {
"interval_ms": 10000,
"cluster_uuid": "DNhXvIvsQEOKT43c54fGsA",
"source_node": {
"transport_address": "<>:9300",
"ip": "<>",
"host": "<>",
"name": "XXXX",
"uuid": "Ishpz4TJRFmQnZohT29YIA",
"timestamp": "2018-05-07T09:25:41.118Z"
},
"kibana_settings": {
"kibana": {
"transport_address": "<>:5601",
"name": "XXXX",
"host": "XXXX",
"index": ".kibana",
"uuid": "58064ade-a869-4f43-8125-7b312bb9550c",
"version": "6.2.1",
"snapshot": false,
"status": "green"
},
"xpack": {
"default_admin_email": null
}
},
"type": "kibana_settings",
"timestamp": "2018-05-07T09:25:41.118Z"
},
"_id": "HLjrOWMBZk33cZyovpOE",
"sort": [
1525685141118
],
"_score": null
}
],
"total": 2,
"max_score": null
},
"took": 7,
"timed_out": false
},
"check": {
"_shards": {
"total": 8,
"failed": 0,
"successful": 8,
"skipped": 0
},
"hits": {
"hits": [],
"total": 0,
"max_score": null
},
"took": 17,
"timed_out": false
}
},
"chain": {
"check": {
"type": "search",
"status": "success",
"payload": {
"_shards": {
"total": 8,
"failed": 0,
"successful": 8,
"skipped": 0
},
"hits": {
"hits": [],
"total": 0,
"max_score": null
},
"took": 17,
"timed_out": false
},
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
".monitoring-es-*"
],
"types": [],
"body": {
"size": 1,
"sort": [
{
"timestamp": {
"order": "desc"
}
}
],
"_source": [
"cluster_state.status"
],
"query": {
"bool": {
"filter": [
{
"term": {
"cluster_uuid": "KS6N5xFvSry7vbpLABnqMg"
}
},
{
"bool": {
"should": [
{
"term": {
"_type": "cluster_state"
}
},
{
"term": {
"type": "cluster_stats"
}
}
]
}
},
{
"range": {
"timestamp": {
"gte": "now-2m"
}
}
}
]
}
}
}
}
}
},
"alert": {
"type": "search",
"status": "success",
"payload": {
"_shards": {
"total": 1,
"failed": 0,
"successful": 1,
"skipped": 0
},
"hits": {
"hits": [
{
"_index": ".monitoring-alerts-6",
"_type": "doc",
"_source": {
"metadata": {
"severity": 2100,
"cluster_uuid": "KS6N5xFvSry7vbpLABnqMg",
"version_created": 6020099,
"watch": "elasticsearch_cluster_status",
"link": "elasticsearch/indices",
"alert_index": ".monitoring-alerts-6",
"type": "monitoring"
},
"update_timestamp": "2018-04-11T19:46:38.423Z",
"prefix": "Elasticsearch cluster status is red.",
"message": "Allocate missing primary shards and replica shards.",
"timestamp": "2018-04-11T19:45:37.898Z"
},
"_id": "KS6N5xFvSry7vbpLABnqMg_elasticsearch_cluster_status",
"sort": [
1523475937898
],
"_score": null
}
],
"total": 1,
"max_score": null
},
"took": 0,
"terminated_early": true,
"timed_out": false
},
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
".monitoring-alerts-6"
],
"types": [],
"body": {
"size": 1,
"terminate_after": 1,
"query": {
"bool": {
"filter": {
"term": {
"_id": "KS6N5xFvSry7vbpLABnqMg_elasticsearch_cluster_status"
}
}
}
},
"sort": [
{
"timestamp": {
"order": "desc"
}
}
]
}
}
}
},
"kibana_settings": {
"type": "search",
"status": "success",
"payload": {
"_shards": {
"total": 8,
"failed": 0,
"successful": 8,
"skipped": 0
},
"hits": {
"hits": [
{
"_index": ".monitoring-kibana-6-2018.05.07",
"_type": "doc",
"_source": {
"interval_ms": 10000,
"cluster_uuid": "DNhXvIvsQEOKT43c54fGsA",
"source_node": {
"transport_address": "<>:9300",
"ip": "<>",
"host": "<>",
"name": "XXXX",
"uuid": "Ishpz4TJRFmQnZohT29YIA",
"timestamp": "2018-05-07T09:25:41.118Z"
},
"kibana_settings": {
"kibana": {
"transport_address": "<>:5601",
"name": "XXXX",
"host": "XXXX",
"index": ".kibana",
"uuid": "58064ade-a869-4f43-8125-7b312bb9550c",
"version": "6.2.1",
"snapshot": false,
"status": "green"
},
"xpack": {
"default_admin_email": null
}
},
"type": "kibana_settings",
"timestamp": "2018-05-07T09:25:41.118Z"
},
"_id": "HLjrOWMBZk33cZyovpOE",
"sort": [
1525685141118
],
"_score": null
}
],
"total": 2,
"max_score": null
},
"took": 7,
"timed_out": false
},
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
".monitoring-kibana-6-*"
],
"types": [],
"body": {
"size": 1,
"query": {
"bool": {
"filter": {
"term": {
"type": "kibana_settings"
}
}
}
},
"sort": [
{
"timestamp": {
"order": "desc"
}
}
]
}
}
}
}
}
},
"condition": {
"type": "script",
"status": "success",
"met": true
},
"transform": {
"type": "script",
"status": "failure",
"reason": "runtime error",
"error": {
"root_cause": [
{
"type": "script_exception",
"reason": "runtime error",
"script_stack": [
"java.util.ArrayList.rangeCheck(ArrayList.java:657)",
"java.util.ArrayList.get(ArrayList.java:433)",
"state = ctx.payload.check.hits.hits[0]._source.cluster_state.status;",
" ^---- HERE"
],
"script": "ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolved = !ctx.vars.fails_check && ctx.vars.not_resolved;def state = ctx.payload.check.hits.hits[0]._source.cluster_state.status;if (ctx.vars.not_resolved){ctx.payload = ctx.payload.alert.hits.hits[0]._source;if (ctx.vars.fails_check == false) {ctx.payload.resolved_timestamp = ctx.execution_time;}} else {ctx.payload = ['timestamp': ctx.execution_time, 'metadata': ctx.metadata.xpack];}if (ctx.vars.fails_check) {ctx.payload.prefix = 'Elasticsearch cluster status is ' + state + '.';if (state == 'red') {ctx.payload.message = 'Allocate missing primary shards and replica shards.';ctx.payload.metadata.severity = 2100;} else {ctx.payload.message = 'Allocate missing replica shards.';ctx.payload.metadata.severity = 1100;}}ctx.vars.state = state.toUpperCase();ctx.payload.update_timestamp = ctx.execution_time;return ctx.payload;",
"lang": "painless"
}
],
"type": "script_exception",
"reason": "runtime error",
"script_stack": [
"java.util.ArrayList.rangeCheck(ArrayList.java:657)",
"java.util.ArrayList.get(ArrayList.java:433)",
"state = ctx.payload.check.hits.hits[0]._source.cluster_state.status;",
" ^---- HERE"
],
"script": "ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolved = !ctx.vars.fails_check && ctx.vars.not_resolved;def state = ctx.payload.check.hits.hits[0]._source.cluster_state.status;if (ctx.vars.not_resolved){ctx.payload = ctx.payload.alert.hits.hits[0]._source;if (ctx.vars.fails_check == false) {ctx.payload.resolved_timestamp = ctx.execution_time;}} else {ctx.payload = ['timestamp': ctx.execution_time, 'metadata': ctx.metadata.xpack];}if (ctx.vars.fails_check) {ctx.payload.prefix = 'Elasticsearch cluster status is ' + state + '.';if (state == 'red') {ctx.payload.message = 'Allocate missing primary shards and replica shards.';ctx.payload.metadata.severity = 2100;} else {ctx.payload.message = 'Allocate missing replica shards.';ctx.payload.metadata.severity = 1100;}}ctx.vars.state = state.toUpperCase();ctx.payload.update_timestamp = ctx.execution_time;return ctx.payload;",
"lang": "painless",
"caused_by": {
"type": "index_out_of_bounds_exception",
"reason": "Index: 0, Size: 0"
}
}
},
"actions": []
},
"messages": [
"failed to execute watch transform"
]
}
},

The same behaviour:

[2018-05-17T14:37:09,588][ERROR][o.e.x.w.t.s.ExecutableScriptTransform] [...] failed to execute [script] transform for [FBAo67kXTp20D-0bjJ4tjQ_elasticsearch_cluster_status_31be0f54-d2cd-445d-9fa9-c77fca8b8bd2-2018-05-17T14:37:09.581Z]
org.elasticsearch.script.ScriptException: runtime error
        at org.elasticsearch.painless.PainlessScript.convertToScriptException(PainlessScript.java:101) ~[?:?]
        at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:1070) ~[?:?]
        at org.elasticsearch.painless.ScriptImpl.run(ScriptImpl.java:105) ~[?:?]
        at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.doExecute(ExecutableScriptTransform.java:69) ~[x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.execute(ExecutableScriptTransform.java:53) ~[x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.execute(ExecutableScriptTransform.java:38) ~[x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.xpack.watcher.execution.ExecutionService.executeInner(ExecutionService.java:476) ~[x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.xpack.watcher.execution.ExecutionService.execute(ExecutionService.java:317) ~[x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.xpack.watcher.execution.ExecutionService.lambda$executeAsync$6(ExecutionService.java:421) ~[x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.xpack.watcher.execution.ExecutionService$WatchExecutionTask.run(ExecutionService.java:575) [x-pack-watcher-6.2.4.jar:6.2.4]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_171]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_171]
        at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(Unknown Source) ~[?:1.8.0_171]
        at java.util.ArrayList.get(Unknown Source) ~[?:1.8.0_171]
        at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:347) ~[?:?]
        ... 12 more

I can confirm that the .monitoring-kibana* indices don't contain the kibana_settings field. So I think the following line will fail:

ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0)

What is the reason?
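
Note that the script_stack in the watch history above points at the state = ctx.payload.check.hits.hits[0]... line, so the out-of-bounds access comes from the empty "check" result rather than the kibana_settings one. One way to confirm this is to run the same search the watch's "check" input runs. This is a simplified sketch based on the watch body above (adjust cluster_uuid and the index pattern for your cluster):

GET .monitoring-es-*/_search
{
  "size": 1,
  "sort": [ { "timestamp": { "order": "desc" } } ],
  "_source": [ "cluster_state.status" ],
  "query": {
    "bool": {
      "filter": [
        { "term": { "cluster_uuid": "KS6N5xFvSry7vbpLABnqMg" } },
        { "term": { "type": "cluster_stats" } },
        { "range": { "timestamp": { "gte": "now-2m" } } }
      ]
    }
  }
}

If this returns zero hits, the transform will keep hitting the same IndexOutOfBoundsException whenever the watch condition is still met (here, because of the unresolved alert in .monitoring-alerts-6).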

I'm running the latest 6.3.0 BC6 build now, still on Windows 2012 Server, and I no longer see the error in my elasticsearch.log. Maybe something was fixed?

I'm glad to hear that, but I'm in production so I can't install that version.

Thanks.

Hey there!

I'm also facing a similar issue; my error is as follows:
Elasticsearch version: 6.2.4

[2018-06-25T17:16:09,614][ERROR][o.e.x.w.t.s.ExecutableScriptTransform] [BYX6EyQ] failed to execute [script] transform for [ByB3kg32RSygLKwIp-KCSw_elasticsearch_cluster_status_e5481799-f6b6-4bc1-a909-c8fd85514544-2018-06-25T11:46:09.567Z]
org.elasticsearch.script.ScriptException: runtime error
    at org.elasticsearch.painless.PainlessScript.convertToScriptException(PainlessScript.java:101) ~[?:?]
    at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:1070) ~[?:?]
    at org.elasticsearch.painless.ScriptImpl.run(ScriptImpl.java:105) ~[?:?]
    at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.doExecute(ExecutableScriptTransform.java:69) ~[x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.execute(ExecutableScriptTransform.java:53) ~[x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.execute(ExecutableScriptTransform.java:38) ~[x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.watcher.execution.ExecutionService.executeInner(ExecutionService.java:476) ~[x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.watcher.execution.ExecutionService.execute(ExecutionService.java:317) ~[x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.watcher.execution.ExecutionService.lambda$executeAsync$6(ExecutionService.java:421) ~[x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.watcher.execution.ExecutionService$WatchExecutionTask.run(ExecutionService.java:575) [x-pack-watcher-6.2.4.jar:6.2.4]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    at java.util.ArrayList.rangeCheck(ArrayList.java:657) ~[?:1.8.0_171]
    at java.util.ArrayList.get(ArrayList.java:433) ~[?:1.8.0_171]
    at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:347) ~[?:?]
    ... 12 more

Kindly comment if any of you have overcome this!

I have the same issue on 6.4.2. I've not set up any Kibana alerts, watches, or anything else; just a fairly vanilla cluster on cloud.elastic.co. Any remedies are very, very welcome.

Pinging @elastic/kibana-app

Pinging @elastic/stack-monitoring

@kierenj Could you please provide the output of running the following query against your Elasticsearch cluster?

GET .watches/_search?q=metadata.name:cluster&filter_path=hits.hits._id,hits.hits._source.metadata

@ycombinator https://gist.github.com/kierenj/fa49ab781425ead16a105363c4cc38b1 - hopefully nothing sensitive in there!

Thanks @kierenj. It looks like there are 2 watches in there and both were created in 6.2. So next, let's delete the watch causing the error and have X-Pack Monitoring recreate it. Here are the steps:

  1. Delete the offending watch:

    DELETE _xpack/watcher/watch/bg72CiWuSk6ntKxAZWimmw_elasticsearch_cluster_status
    
  2. Create a temporary local exporter to disable Cluster Alerts:

    PUT _cluster/settings
    {
     "transient": {
       "xpack.monitoring.exporters.my_temp_local": {
         "type": "local",
         "cluster_alerts.management.enabled": false
       }
     }
    }
    
  3. Delete the temporary local exporter. This will re-enable Cluster Alerts and should recreate the watch:

    PUT _cluster/settings
    {
     "transient": {
       "xpack.monitoring.exporters.my_temp_local.*": null
     }
    }
    
  4. Run the same query as before again to check if the watch was recreated:

    GET .watches/_search?q=metadata.name:cluster&filter_path=hits.hits._id,hits.hits._source.metadata
    
  5. Check that the error goes away from the ES logs.

Looks good, thanks @ycombinator

Given the comment by @ycombinator above, it seems that this has been fixed. Therefore, I am closing this issue. Of course, if the fix does not fully address this, we are more than happy to re-open this. Thanks, all.
