**Elasticsearch version** (`bin/elasticsearch --version`): 5.3.2 - 5.6.3
Plugins installed: [None]
**JVM version** (`java -version`): 1.8.0_77-b03
**OS version** (`uname -a` if on a Unix-like system): Linux SVR14982HW1288 2.6.32-642.6.2.el6.x86_64 #1 SMP Wed Oct 26 06:52:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
**Description of the problem including expected versus actual behavior:**
A production cluster running version 5.3.2 experienced very high heap usage after bulk updating some documents. The search load was light and the cluster was almost idle, yet old-generation GC could not reclaim memory even after the bulk requests had stopped. After some investigation, Log4j appears to be the culprit: it holds a strong reference to a `BulkShardRequest` object whenever an exception is thrown during bulk request execution.
The problem seems quite similar to #23798, but I can reproduce it on the latest stable version, 5.6.3, so the root cause could be different.
**Steps to reproduce:**
Update a batch of documents with the bulk API, but deliberately trigger some exceptions, for example by including a couple of requests with a non-existent doc ID or a wrong field type. In the ES logs, DEBUG messages appear reporting a document-missing or mapper-parsing exception. Heap usage increases significantly, in proportion to the size of a single bulk request. Dumping the heap and analyzing it with MAT shows that the `BulkShardRequest` object is referenced by Log4j's `ParameterizedMessage`.
The only way I can reclaim the memory is to issue a small bulk request that triggers another exception, in which case the `ParameterizedMessage` object references the new request, which has a small memory footprint.
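A minimal reproduction sketch of the bulk request described above. The index, type, and doc IDs are hypothetical, and it assumes a local ES 5.x node listening on port 9200; the point is simply that one `update` action targets a document that does not exist, which makes that bulk item fail and produces the DEBUG "document missing" log message:

```shell
# Bulk body in ndjson form: one valid update, one update against a
# doc ID that does not exist (triggers DocumentMissingException).
BULK_BODY='{"update":{"_index":"test","_type":"doc","_id":"1"}}
{"doc":{"field":"value"}}
{"update":{"_index":"test","_type":"doc","_id":"does-not-exist"}}
{"doc":{"field":"value"}}
'
printf '%s' "$BULK_BODY"

# Against a running node (commented out; requires a live cluster):
# curl -s -H 'Content-Type: application/x-ndjson' \
#      -XPOST 'localhost:9200/_bulk' --data-binary "$BULK_BODY"
```

Repeating this with a large bulk body should make the retained heap grow by roughly the size of that body.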
Below are sample heap dump stats from our production cluster:
This looks related to Log4j's `ReusableLogEventFactory`, which caches a `MutableLogEvent` in a `ThreadLocal`:
```java
public class ReusableLogEventFactory implements LogEventFactory {
    private static final ThreadNameCachingStrategy THREAD_NAME_CACHING_STRATEGY = ThreadNameCachingStrategy.create();
    private static final Clock CLOCK = ClockFactory.getClock();
    private static ThreadLocal<MutableLogEvent> mutableLogEventThreadLocal = new ThreadLocal<>();
    private final ContextDataInjector injector = ContextDataInjectorFactory.createInjector();
    // ...
}
```
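To illustrate the retention mechanism, here is a minimal sketch (not Log4j source; all names are made up for illustration) of why a reusable, `ThreadLocal`-cached event keeps the last log message's parameters alive: the event object is reused but never cleared, so its parameter array still points at the last logged payload, standing in here for a large `BulkShardRequest`:

```java
public class ThreadLocalRetention {
    // Reusable "event" cached per thread, analogous to MutableLogEvent.
    static final class ReusableEvent {
        Object[] parameters; // retained between log calls
        void setParameters(Object... params) { this.parameters = params; }
    }

    static final ThreadLocal<ReusableEvent> EVENT =
            ThreadLocal.withInitial(ReusableEvent::new);

    // Stand-in for a log call: stores the parameters in the
    // per-thread event and never nulls them out afterwards.
    static void log(Object... params) {
        EVENT.get().setParameters(params);
    }

    public static void main(String[] args) {
        byte[] hugeRequest = new byte[16 * 1024 * 1024]; // stand-in payload
        log("bulk item failed", hugeRequest);
        hugeRequest = null; // the caller drops its reference...
        // ...but the ThreadLocal event still reaches the payload, so GC
        // cannot reclaim those 16 MB for as long as the thread is alive.
        System.out.println(EVENT.get().parameters[1] != null);
    }
}
```

This matches the observed behavior: the memory is only released when a later log call overwrites the parameters with a smaller payload.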
I tried adding `-Dlog4j2.enable.threadlocals=false` to ES's `jvm.options`; since then, bulk exceptions no longer retain large amounts of memory. So I think this is a way to work around the problem.
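For reference, the workaround as it would appear in the Elasticsearch config (the file path is the standard 5.x layout):

```
# config/jvm.options
# Disable Log4j 2's per-thread reusable objects so failed bulk
# requests are not retained by the logging layer.
-Dlog4j2.enable.threadlocals=false
```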
This Log4j reference, Garbage-free Steady State Logging, explains the possibility of memory leaks:
> Garbage-free logging in Log4j 2.6 is partially implemented by reusing objects in ThreadLocal fields, and partially by reusing buffers when converting text to bytes.
>
> ThreadLocal fields holding non-JDK classes can cause memory leaks in web applications when the application server's thread pool continues to reference these fields after the web application is undeployed. To avoid causing memory leaks, Log4j will not use these ThreadLocals when it detects that it is used in a web application (when the javax.servlet.Servlet class is in the classpath, or when system property log4j2.is.webapp is set to "true").