Is this a request for help?: no
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
$ helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.12+IKS", GitCommit:"e8eb9852773d389302279383bcf8ce750466327e", GitTreeState:"clean", BuildDate:"2019-01-09T06:27:27Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Which chart:
stable/sonarqube
What happened:
In values.yaml, enabling persistence with the values below causes SonarQube to exit during startup, resulting in a CrashLoopBackOff.
What you expected to happen:
SonarQube starts normally
How to reproduce it (as minimally and precisely as possible):
helm install stable/sonarqube -f values.yaml
values.yaml:
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 10Gi
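In case it helps with triage, a minimal sketch of the first checks against the failing pod (assuming the release is named sonar and is installed into the sonarqube namespace, as in the helm list output further down; the pod name is a placeholder):
$ kubectl get pods -n sonarqube                          # confirm the CrashLoopBackOff
$ kubectl describe pod <sonarqube-pod> -n sonarqube      # events reveal mount or scheduling problems, if any
$ kubectl logs <sonarqube-pod> -n sonarqube --previous   # logs from the previously crashed container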
Anything else we need to know:
Looks very similar to #10045. I also tried the following:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm search sonarqube -l
NAME VERSION DESCRIPTION
stable/sonarqube 0.13.4 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.13.3 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.13.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.13.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.13.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.12.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.12.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.12.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.11.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.10.3 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.10.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.10.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.9.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.9.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.8.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.8.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.7.4 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.7.3 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.7.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.7.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.7.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.6.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.6 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.5 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.4 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.3 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.5.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.4.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.4.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.4.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.9 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.8 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.7 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.6 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.5 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.4 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.3 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.2 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.3.1 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.2.0 Sonarqube is an open sourced code quality scann...
stable/sonarqube 0.1.0 Sonarqube is an open sourced code quality scann...
$ helm list sonar
NAME REVISION UPDATED STATUS CHART NAMESPACE
sonar 15 Sat Feb 9 17:18:21 2019 DEPLOYED sonarqube-0.13.4 sonarqube
I see the following in the debug logs:
$ kubectl logs sonar-sonarqube-6ff4554ff4-sm8gs
2019.02.09 18:06:00 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2019.02.09 18:06:00 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:35241
2019.02.09 18:06:00 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch -Epath.conf=/opt/sonarqube/temp/conf/es
2019.02.09 18:06:00 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2019.02.09 18:06:01 INFO app[][o.e.p.PluginsService] no modules loaded
2019.02.09 18:06:01 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [force_merge], size [1], queue size [unbounded]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_started], core [1], max [32], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [listener], size [8], queue size [unbounded]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [index], size [16], queue size [200]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [refresh], core [1], max [8], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [generic], core [4], max [128], keep alive [30s]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [warmer], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [search], size [25], queue size [1k]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [flush], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [fetch_shard_store], core [1], max [32], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [management], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [get], size [16], queue size [1k]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [bulk], size [16], queue size [200]
2019.02.09 18:06:01 DEBUG app[][o.e.t.ThreadPool] created thread pool: name [snapshot], core [1], max [5], keep alive [5m]
2019.02.09 18:06:01 DEBUG app[][o.e.c.n.IfConfig] configuration:
lo
inet 127.0.0.1 netmask:255.0.0.0 scope:host
UP LOOPBACK mtu:65536 index:1
eth0
inet 172.30.93.146 netmask:255.255.255.255 broadcast:0.0.0.0 scope:site
hardware 0A:8A:8C:1F:C2:3B
UP MULTICAST mtu:1480 index:4
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Module execution: 70ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] TypeListeners creation: 2ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Scopes creation: 2ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Converters creation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding creation: 5ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Private environment creation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Injector construction: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding initialization: 4ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding indexing: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Collecting injection requests: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Binding validation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Static validation: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Instance member validation: 1ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Provider verification: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Static member injection: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Instance injection: 0ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.i.i.Stopwatch] Preloading singletons: 1ms
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] node_sampler_interval[5s]
2019.02.09 18:06:02 DEBUG app[][i.n.c.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 32
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: false
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] Java version: 8
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] sun.misc.Unsafe.copyMemory: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Buffer.address: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] direct buffer constructor: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.Bits.unaligned: available, true
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.<init>(long, int): available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] sun.misc.Unsafe: available
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: false
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: 15010365440 bytes
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.CleanerJava6] java.nio.ByteBuffer.cleaner(): available
2019.02.09 18:06:02 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.noKeySetOptimization: false
2019.02.09 18:06:02 DEBUG app[][i.n.c.n.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
2019.02.09 18:06:02 DEBUG app[][i.n.u.i.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] adding address [{#transport#-1}{KPBTCvJaSZKWXud28q259Q}{127.0.0.1}{127.0.0.1:35241}]
2019.02.09 18:06:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.processId: 9 (auto-detected)
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv4Stack: false
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] -Djava.net.preferIPv6Addresses: false
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] Loopback interface: lo (lo, 127.0.0.1)
2019.02.09 18:06:02 DEBUG app[][i.netty.util.NetUtil] /proc/sys/net/core/somaxconn: 128
2019.02.09 18:06:02 DEBUG app[][i.n.c.DefaultChannelId] -Dio.netty.machineId: 0a:8a:8c:ff:fe:1f:c2:3b (auto-detected)
2019.02.09 18:06:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
2019.02.09 18:06:02 DEBUG app[][i.n.u.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 32
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 32
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
2019.02.09 18:06:02 DEBUG app[][i.n.b.PooledByteBufAllocator] -Dio.netty.allocator.useCacheForAllThreads: true
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.allocator.type: pooled
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 65536
2019.02.09 18:06:02 DEBUG app[][i.n.b.ByteBufUtil] -Dio.netty.maxThreadLocalCharBufferSize: 16384
2019.02.09 18:06:02 DEBUG app[][o.e.c.t.TransportClientNodesService] failed to connect to node [{#transport#-1}{KPBTCvJaSZKWXud28q259Q}{127.0.0.1}{127.0.0.1:35241}], ignoring...
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:35241] connect_timeout[30s]
at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:362)
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:570)
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117)
at org.elasticsearch.transport.TransportService.openConnection(TransportService.java:351)
at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:407)
at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:357)
at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:319)
at org.sonar.application.es.EsConnectorImpl$MinimalTransportClient.<init>(EsConnectorImpl.java:109)
at org.sonar.application.es.EsConnectorImpl.buildTransportClient(EsConnectorImpl.java:90)
at org.sonar.application.es.EsConnectorImpl.getTransportClient(EsConnectorImpl.java:75)
at org.sonar.application.es.EsConnectorImpl.getClusterHealthStatus(EsConnectorImpl.java:62)
at org.sonar.application.process.EsProcessMonitor.checkStatus(EsProcessMonitor.java:90)
at org.sonar.application.process.EsProcessMonitor.checkOperational(EsProcessMonitor.java:75)
at org.sonar.application.process.EsProcessMonitor.isOperational(EsProcessMonitor.java:60)
at org.sonar.application.process.SQProcess.refreshState(SQProcess.java:161)
at org.sonar.application.process.SQProcess$EventWatcher.run(SQProcess.java:220)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:35241
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 10 common frames omitted
2019.02.09 18:06:02 DEBUG app[][o.s.a.e.EsConnectorImpl] Connected to Elasticsearch node: [127.0.0.1:35241]
2019.02.09 18:06:02 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2019.02.09 18:06:02 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2019.02.09 18:06:02 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
I've tried various ports for Elasticsearch and always see the same thing. I've even set sonar.search.port=0 in sonar.properties so that a free port is chosen, but the issue persists. Most of what I find about these error messages concerns not running as root, but I'm using the Helm chart and Docker image, and as far as I can tell everything is running as the sonarqube user:
$ kubectl exec -it sonar-sonarqube-6ff4554ff4-4f7p2 bash
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ whoami
sonarqube
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ pwd
/opt/sonarqube
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ ls -lah
total 52K
drwxr-xr-x 1 sonarqube sonarqube 4.0K Jan 22 09:17 .
drwxr-xr-x 1 root root 4.0K Feb 6 17:41 ..
-rw-r--r-- 1 sonarqube sonarqube 7.5K Jan 22 09:11 COPYING
drwxr-xr-x 1 sonarqube sonarqube 4.0K Feb 6 17:41 bin
drwxrwsrwx 3 root sonarqube 4.0K Feb 9 17:50 conf
drwxr-xr-x 2 nobody 4294967294 4.0K Feb 9 17:22 data
drwxr-xr-x 7 sonarqube sonarqube 4.0K Jan 22 09:12 elasticsearch
drwxr-xr-x 1 sonarqube sonarqube 4.0K Jan 22 09:12 extensions
drwxr-xr-x 6 sonarqube sonarqube 4.0K Jan 22 09:17 lib
drwxr-xr-x 2 sonarqube sonarqube 4.0K Jan 22 09:12 logs
drwxr-xr-x 2 sonarqube sonarqube 4.0K Jan 22 09:12 temp
drwxr-xr-x 9 sonarqube sonarqube 4.0K Jan 22 09:18 web
sonarqube@sonar-sonarqube-6ff4554ff4-4f7p2:/opt/sonarqube$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
sonarqu+ 1 0.1 0.0 4292 768 ? Ss 17:51 0:00 /bin/sh /usr/local/copy_plugins.sh
sonarqu+ 9 0.0 0.0 19972 3848 pts/0 Ss 17:51 0:00 bash
sonarqu+ 18 214 0.3 21173684 226328 ? Sl 17:51 0:04 java -jar lib/sonar-application-7.6.jar -Dsonar.log.console=true -Dsonar.jdbc.username=sonarUser -Dsonar.jdbc.password=sonarPass -Dsonar.jdbc.url=jdbc:postgresql://sonar-po
sonarqu+ 53 136 1.7 6216504 1150692 ? Sl 17:51 0:02 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server
sonarqu+ 150 0.0 0.0 38388 3188 pts/0 R+ 17:51 0:00 ps aux
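Since the es child process exits with value 1 without an obvious cause in the app log above, two further checks may help (a sketch assuming SonarQube's default log layout, which writes a separate Elasticsearch log under /opt/sonarqube/logs, and reusing the pod name from the exec session):
$ kubectl exec sonar-sonarqube-6ff4554ff4-4f7p2 -- cat /opt/sonarqube/logs/es.log                # Elasticsearch's own startup errors, if the file exists
$ kubectl get pod sonar-sonarqube-6ff4554ff4-4f7p2 -o jsonpath='{.spec.securityContext}{"\n"}'   # effective pod-level securityContext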
I've double-checked the resources, which are the following:
limits:
  cpu: 500m
  memory: 2Gi
requests:
  cpu: 500m
  memory: 2Gi
These are all within the limits of my Kubernetes cluster. I only see this problem when persistence is enabled; SonarQube has run fine for me with persistence set to false.
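To double-check that isolation, a sketch of toggling only the persistence flag at install/upgrade time (assuming the chart reads the same persistence.* keys shown in values.yaml above; Helm v2 syntax):
$ helm install stable/sonarqube --name sonar --namespace sonarqube --set persistence.enabled=false   # known-good baseline
$ helm upgrade sonar stable/sonarqube --set persistence.enabled=true,persistence.size=10Gi           # re-enable persistence to reproduce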
Same issue here.
Same here, #11092. Go to this Katacoda playground, wait for K8s to start, and then watch the install fail with just:
helm install stable/sonarqube
The problem I was having above was causing a CrashLoopBackOff, but it was happening because the PV and PVC were not set up correctly. I resolved that, and I'm now getting something closer to what @out-of-band reported, something around the Elasticsearch connection:
One can easily reproduce, and perhaps diagnose, the problem here on Katacoda:
master $ kubectl get pods -n sonarqube
NAME READY STATUS RESTARTS AGE
sonar-postgresql-58b959887-kjskh 1/1 Running 0 21m
sonar-sonarqube-74ddc9c7b4-wbs7n 0/1 CrashLoopBackOff 8 21m
master $ kubectl logs sonar-sonarqube-74ddc9c7b4-wbs7n -n sonarqube
2019.03.21 03:45:49 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2019.03.21 03:45:49 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2019.03.21 03:45:49 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2019.03.21 03:45:49 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2019.03.21 03:45:49 INFO app[][o.e.p.PluginsService] no modules loaded
2019.03.21 03:45:49 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.03.21 03:45:51 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2019.03.21 03:45:51 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2019.03.21 03:45:51 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
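Given the PV/PVC finding above, a quick sketch for checking whether the claim ever binds (assuming the chart creates a PVC in the release namespace and that dynamic provisioning needs a default StorageClass, which a Katacoda playground may not have; the PVC name is a placeholder):
$ kubectl get pvc -n sonarqube                         # STATUS should be Bound, not Pending
$ kubectl get storageclass                             # look for a (default) class backing dynamic provisioning
$ kubectl describe pvc <sonarqube-pvc> -n sonarqube    # events explain why a claim stays Pending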
I'm also having this problem and can't find any solution by googling.
I'm facing this problem with the latest chart on AKS. Are there any tips I can apply?
Edit: I disabled persistence and still get the same behavior
The weird thing is that if you remove the memory limit, it works for the first scan and then crashes afterward.
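For reference, one way to try that observation without editing the chart is to raise the limit at upgrade time (a sketch assuming the chart still exposes the limits/requests shown earlier under a top-level resources key in values.yaml):
$ helm upgrade sonar stable/sonarqube --set resources.limits.memory=4Gi   # raise the memory limit
Dropping the limit entirely is easier with a custom values file that simply omits resources.limits.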