Hi, I tried to do a rolling upgrade from 6.1.2 to 6.2.2 and got the following message printed several times:
[2018-03-05T00:51:25,599][INFO ][o.e.x.m.e.l.LocalExporter] waiting for elected master node [{master}{AwR3s0L-QIOVSCPYWMuhIw}{KvG8KoaiStmxqWvX0pcOuQ}{xxxxxx.130}{xxxxxx.130:9300}{ml.machine_memory=3221225472, ml.max_open_jobs=20, ml.enabled=true}] to setup local exporter [default_local] (does it have x-pack installed?)
I'm running X-Pack on the basic license.
My cluster consists of 3 master, 3 coordinating, 2 ingest, and 3 data nodes. I followed the documentation for doing a rolling upgrade and started with the master nodes first.
I shut down one 6.1.2 master and added a 6.2.2 master, doing this one by one. The message did not stop until all the 6.1.2 master nodes were shut off.
Btw, the new master nodes were created in separate, brand-new Docker containers, so I did not replace them on top of the old nodes.
Maybe this process can be clarified a bit in the docs? Here is my original post: https://discuss.elastic.co/t/rolling-upgrade-sequence-of-node-types/120311/4
@pickypg this seems to come from monitoring. Any ideas on whether or how we should improve the documentation? I have the impression the current logging already points in the right direction, so I'm not sure whether the log or the docs can be improved. Assigning to you for lack of a proper area label...
IMHO the log is a bit misleading, because it tells you that X-Pack is not installed when it is.
Maybe there's a way to disable monitoring just for the upgrade?
Also, is there a preferred order of upgrade when a multi-node cluster is involved?
I would also like to add that the nodes did not join until all masters were 6.2.2 and the last 6.1.2 master was turned off.
Pinging @elastic/es-core-infra
Same for 6.3.0.
And the same for an upgrade from 6.2.x to 6.4.x.
Ever since upgrading from version 6.1.2 to 6.4.2 I have had a similar experience, except that these messages were not printed just a few times during the upgrade, but every now and then ever after. I noticed that these messages appeared especially while taking a snapshot, but also at other times, for instance when a node left or joined the cluster.
Searching the Elasticsearch source code, I came up with the workaround below:
$ grep xpack /etc/elasticsearch/elasticsearch.yml
xpack.monitoring.exporters.my_local.type: local
xpack.monitoring.exporters.my_local.use_ingest: false
$
This feels a bit like cheating and I'm not sure about all the consequences, especially since https://www.elastic.co/guide/en/elasticsearch/reference/master/monitoring-settings.html#local-exporter-settings states the following about the use_ingest option:
"If disabled, then it means that it will not use pipelines, which means that a future release cannot automatically upgrade bulk requests to future-proof them."
But since I do not use X-Pack monitoring, I feel this is an acceptable workaround to stop the pollution of my log files, until somebody with a deeper understanding of what's going on solves the problem. Does anybody have an idea what is really going on?
waiting for elected master node ... to setup local exporter [default_local] (does it have x-pack installed?)
Does anybody have an idea what is really going on?
This message is shown on nodes other than the elected master each time (except the first) they receive a cluster state update from the elected master in which monitoring is not completely set up (i.e. a template or an ingest pipeline is missing).
One explanation for this is that the elected master is not configured to set up monitoring, perhaps because it does not have X-Pack installed (hence the does it have x-pack installed?). However, when a cluster is first forming, the master has quite a few things to do and may not get around to setting up monitoring for some time.
I think we should not emit this message until the node has been waiting an unreasonably long time for the master to set these things up.
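(For example, one way to see whether the elected master has finished this setup is to look at the templates and ingest pipelines the local exporter creates. The requests below are only a sketch: they assume the default .monitoring-* template names, an xpack_monitoring_* pipeline-name prefix, and a node reachable on localhost:9200, all of which may differ in your setup.)
$ # assumes default local-exporter resource names and a node on localhost:9200
$ curl -s 'http://localhost:9200/_cat/templates/.monitoring-*?v'
$ curl -s 'http://localhost:9200/_ingest/pipeline/xpack_monitoring_*?pretty'
If these come back empty while monitoring is enabled, the master has not (yet) created the monitoring resources, which matches the situation the log message is warning about.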
So you are suggesting my elected master never gets around to setting up monitoring, because I keep receiving these messages? Is there an easy way to find out whether monitoring has been successfully set up on my elected master? Perhaps by increasing the log level, or hopefully by simply running an API call?
@snarlistic you say
I do not use X-Pack monitoring
Have you disabled monitoring by setting xpack.monitoring.enabled: false on every node? I think that'll suppress these messages.
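(For reference, a minimal sketch of what that would look like in each node's elasticsearch.yml, assuming the same config path as in the grep output above; as far as I know this is a static setting, so each node needs a restart for it to take effect.)
# /etc/elasticsearch/elasticsearch.yml -- path assumed, same as in the earlier comment
xpack.monitoring.enabled: false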
No, I have not, and this sounds like a somewhat better/cleaner solution than the one I came up with myself. Thank you!
Closing this in favour of #40898