Elasticsearch: Heap_size error when network.host is set

Created on 19 May 2016 · 13 comments · Source: elastic/elasticsearch

Hello everyone,

Elasticsearch version:

  "version" : {
    "number" : "5.0.0-alpha2"

JVM version:
openjdk version "1.8.0_91"

OS version:

Distributor ID: Ubuntu
Description:    Ubuntu 14.04.3 LTS
Release:    14.04
Codename:   trusty

Description of the problem including expected versus actual behavior:

I'm trying to set up a cluster of 2 Elasticsearch nodes.
I'm running into a bug when I try to set the network.host setting in elasticsearch.yml.
When I leave it commented out there is no problem and Elasticsearch starts normally, but I believe I need to set it in the config file of each node in order to make the nodes visible to each other.

The problem is that the log message concerns HEAP_SIZE (a parameter I tried to modify in /etc/default/elasticsearch), whereas the only thing I changed is the network.host setting.

Steps to reproduce:

  1. Given 2 VMs with the same configuration (Java / Elasticsearch); each machine can ping the other. The problem happens when I try to start Elasticsearch on either node (same error on both).
  2. /etc/elasticsearch/elasticsearch.yml on machine 1:
cluster.name: graylog
node.name: elasticsearch1
#network.host: ["IP.OF.NODE.1", "127.0.0.1"]
discovery.zen.ping.unicast.hosts: ["IP.OF.NODE.2"]
discovery.zen.minimum_master_nodes: 1
  3. Elasticsearch starts just fine. Now if we uncomment network.host:
cluster.name: graylog
node.name: elasticsearch1
network.host: ["IP.OF.NODE.1", "127.0.0.1"]
discovery.zen.ping.unicast.hosts: ["IP.OF.NODE.2"]
discovery.zen.minimum_master_nodes: 1
  4. Now Elasticsearch refuses to start properly, with the following error message in the logs.

Provide logs (if relevant):

[2016-05-19 14:58:16,436][INFO ][node                     ] [elasticsearch1] version[5.0.0-alpha2], pid[5762], build[e3126df/2016-04-26T12:08:58.960Z]
[2016-05-19 14:58:16,437][INFO ][node                     ] [elasticsearch1] initializing ...
[2016-05-19 14:58:16,819][INFO ][plugins                  ] [elasticsearch1] modules [lang-mustache, lang-painless, ingest-grok, reindex, lang-expression, lang-groovy], plugins []
[2016-05-19 14:58:16,836][INFO ][env                      ] [elasticsearch1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [21.9gb], net total_space [27.4gb], spins? [possibly], types [ext4]
[2016-05-19 14:58:16,837][INFO ][env                      ] [elasticsearch1] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-05-19 14:58:18,701][INFO ][node                     ] [elasticsearch1] initialized
[2016-05-19 14:58:18,702][INFO ][node                     ] [elasticsearch1] starting ...
[2016-05-19 14:58:18,808][INFO ][transport                ] [elasticsearch1] publish_address {IP.OF.NODE.1:9300}, bound_addresses {127.0.0.1:9300}, {IP.OF.NODE.1:9300}
[2016-05-19 14:58:18,812][ERROR][bootstrap                ] [elasticsearch1] Exception
java.lang.RuntimeException: bootstrap checks failed
initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:93)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:66)
at org.elasticsearch.bootstrap.Bootstrap$5.validateNodeBeforeAcceptingRequests(Bootstrap.java:191)
at org.elasticsearch.node.Node.start(Node.java:323)
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:206)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:269)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:111)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:106)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)
at org.elasticsearch.cli.Command.main(Command.java:53)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:74)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
Suppressed: java.lang.IllegalStateException: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:94)
... 11 more
[2016-05-19 14:58:18,814][INFO ][node                     ] [elasticsearch1] stopping ...
[2016-05-19 14:58:18,827][INFO ][node                     ] [elasticsearch1] stopped
[2016-05-19 14:58:18,827][INFO ][node                     ] [elasticsearch1] closing ...
[2016-05-19 14:58:18,842][INFO ][node                     ] [elasticsearch1] closed

Any help or enlightenment would be appreciated :)

Thanks !

All 13 comments

Starting in Elasticsearch 5.0.0, there are bootstrap checks to ensure that commonly missed settings that would affect a production installation of Elasticsearch are configured. These bootstrap checks are enforced if the node binds to a non-loopback interface, or if the node publishes to a non-loopback interface. Otherwise, the bootstrap checks only appear as warnings in the logs (check the logs from when network.host was still commented out and you will see warn-level log messages).

initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap

To configure the heap size, you need to modify the jvm.options file. This is a new configuration file. There are two lines, -Xms and -Xmx, that set the minimum and maximum heap size. These values should be equal, and must be equal if you trip the bootstrap checks.
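
As a minimal sketch, the relevant lines in jvm.options end up looking something like this (the 1g value here is just an example; pick whatever heap size fits your machine, as long as both values match):

# example heap settings in jvm.options -- minimum and maximum must be identical
-Xms1g
-Xmx1g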

There is documentation on setting the heap size that expands on my previous comment.

Okay thank you sir for the fast answer !

edit: Setting -Xms to the same value as -Xmx in /etc/elasticsearch/jvm.options solved the problem :) Thanks
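
If you want to double-check that the new values were picked up, the nodes info API reports the heap the JVM actually started with (sketch below; adjust host and port to your setup):

curl -s 'localhost:9200/_nodes/jvm?pretty' | grep heap
# heap_init_in_bytes and heap_max_in_bytes should now show the same value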

Okay thank you sir for the fast answer !

You're very welcome.

I'm having the same issue on elasticsearch 5.0.0-alpha4

Setting anything other than _lo_, 127.0.0.1, or _local_ results in an initial heap size error.

This is only an issue if the error isn't telling you to set -Xms and -Xmx to the same value. @jasontedor already explained what is up in https://github.com/elastic/elasticsearch/issues/18462#issuecomment-220323219

@nik9000 my jvm.options were set to be the same.

I found a solution though -- I had to do the following to bind anything other than localhost:

run sysctl -w vm.max_map_count=262144

why would setting my vm.max_map_count allow me to set network.host?

It is another one of the preflight checks. There are a half dozen of them. Things like "can I write to the data directory" and "am I likely to run out of memory mapped files". If the error message told you the right thing to do then I don't think this is a bug. If it was talking about heap and you had to set max_map_count, then we have a problem.
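
As an aside, sysctl -w only applies until the next reboot; if vm.max_map_count is what a system needs, you would typically also make it persistent in /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/), roughly like this:

# make the change survive reboots (example using /etc/sysctl.conf, run as root)
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p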

@nik9000 yeah I think we have a problem. The elasticsearch service starts just fine with the line network.host commented out, or set to anything that binds to localhost (_lo_, 127.0.0.1, or _local_). If you try to set network.host to something other than localhost, it won't start at all, throwing an initial heap size error until I set vm.max_map_count to 262144.

FYI, using CentOS 7 minimal build 1511

it won't start at all, throwing an initial heap size error until I set vm.max_map_count to 262144.

Note that failing to start when the initial heap size is not equal to the maximum heap size and you're bound to an external interface is intentional. The same is true for vm.max_map_count. However, what should not happen is that you get a log message about the initial heap size when it is in fact equal to the max heap size and the actual problem is vm.max_map_count.

Can you provide corroborating log messages and relevant configuration and command line parameters?
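
When gathering that, the values these two checks look at can be read directly before starting the node, along these lines:

# current kernel limit on memory-mapped areas
sysctl vm.max_map_count
# heap flags the service will start with
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options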

Sure, I'll build a new VM and step through the process again step by step and post relevant logs and info.

Sure, I'll build a new VM and step through the process again step by step and post relevant logs and info.

Thanks.

Also, note that it's possible for both the heap size configuration and vm.max_map_count to be a problem, in which case you will get an error message for both, as in the following log:

[2016-07-15 13:30:35,878][WARN ][bootstrap                ] [Locust] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupError: java.lang.RuntimeException: bootstrap checks failed
initial heap size [268435456] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents mlockall from locking the entire heap
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]