Version of Helm and Kubernetes:
Helm: v2.10.0
Kubernetes: v1.10.5
Which chart:
stable/redis
What happened:
Installed Redis, but I am unable to configure it through the chart's configmap value.
What you expected to happen:
I expected the following to work for creating the redis config via values.yaml:
```
## Redis config file
configmap: |-
  tcp-keepalive 300
  tcp-keepalive 0
```
However, that errors out: setting these values in values.yaml does not create the ConfigMap.
How to reproduce it (as minimally and precisely as possible):
Try to create a ConfigMap by setting the configmap value in the chart's values.yaml file, as in the snippet above.
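For reference, a minimal end-to-end reproduction (a sketch based on the snippets in this thread; the release name redis is arbitrary):

```
# values.yaml — note the settings must be indented under the block scalar
configmap: |-
  maxmemory 200mb
  maxmemory-policy volatile-lfu
```

```
$ helm install stable/redis --name redis -f values.yaml
```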
Anything else we need to know:
More documentation is needed; configuring Redis via flags is not sufficient. All my attempts at creating a configmap for Redis via the stable/redis helm chart have failed.
+1. The following modification to configmap also results in failed deployments:

```
configmap: |-
  maxmemory 200mb
  maxmemory-policy volatile-lfu
```
I've managed to get it working by having

```
configmap: |+
  protected-mode no
```

and making sure the start command args had the conf file path:

```
args: ['redis-server', '/opt/bitnami/redis/etc/redis.conf']
```

These are all defaults in stable/redis.
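If in doubt, you can verify both the rendered ConfigMap and the file the container actually reads (a quick check; the resource names below assume a release called redis, adjust to yours):

```
# Inspect the ConfigMap rendered by the chart
$ kubectl get configmap redis -o yaml

# Check the config file the Redis container actually loads
$ kubectl exec redis-master-0 -- cat /opt/bitnami/redis/etc/redis.conf
```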
However, I agree that some more documentation around Redis configmaps would be great.
I am not able to reproduce the issue, using stable/redis 4.2.10 and a custom values.yaml such as:

```
configmap: |-
  client-output-buffer-limit pubsub 32mb 8mb 60
  maxmemory 1gb
  maxmemory-policy noeviction
```
Then, within the pod:

```
I have no name!@redis-master-0:/$ redis-cli config get maxmemory
1) "maxmemory"
2) "1073741824"
```
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
```
configmap: |-
  maxmemory 200mb
  maxmemory-policy volatile-lfu
```

Me neither. Using the previous configmap parameters and running the commands below:
```
$ helm install stable/redis --name redis -f values.yaml
$ export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode)
$ kubectl run --namespace default redis-client --rm --tty -i --restart='Never' \
>   --env REDIS_PASSWORD=$REDIS_PASSWORD \
>   --image docker.io/bitnami/redis:4.0.12 -- bash
I have no name!@redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD config get maxmemory
Warning: Using a password with '-a' option on the command line interface may not be safe.
1) "maxmemory"
2) "209715200"
I have no name!@redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD config get maxmemory-policy
Warning: Using a password with '-a' option on the command line interface may not be safe.
1) "maxmemory-policy"
2) "volatile-lfu"
```
Hello,
In my case I do not have this issue; here is my configmap:

```
configmap: |-
  maxmemory-policy volatile-lru
  cluster-enabled yes
  cluster-config-file /bitnami/redis/data/nodes.conf
  cluster-node-timeout 5000
  appendonly yes
  port 6379
  protected-mode no
```
The result inside the master pod:
```
I have no name!@redistest-master-0:/$ cat /opt/bitnami/redis/etc/redis.conf
# User-supplied configuration:
maxmemory-policy volatile-lru
cluster-enabled yes
cluster-config-file /bitnami/redis/data/nodes.conf
cluster-node-timeout 5000
appendonly yes
port 6379
protected-mode no
```
But my redis-slave pods are not running:

```
$ kubectl get pods -n rediscluster1
NAME                                 READY   STATUS             RESTARTS   AGE
redistest-master-0                   1/1     Running            0          8m
redistest-metrics-746b4d7bfc-s7v5p   1/1     Running            0          15m
redistest-slave-588c647779-9czc9     0/1     CrashLoopBackOff   7          15m
redistest-slave-588c647779-k9gkc     0/1     CrashLoopBackOff   7          15m
redistest-slave-6bd5c74db9-bbtkv     0/1     CrashLoopBackOff   6          9m
redistest-slave-6bd5c74db9-mbdkf     0/1     CrashLoopBackOff   6          9m
```
Logs:

```
$ kubectl logs redistest-slave-6bd5c74db9-mbdkf -n rediscluster1
INFO  ==> ** Starting Redis **

*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 0
>>> '(null)'
replicaof directive not allowed in cluster mode
```
I don't have "replicaof" in my configmap and I don't know where it has been declared, so is it possible to configure Redis in cluster mode with this chart?
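For what it's worth, the error itself comes from Redis rather than the chart: Redis refuses to start when it sees a replicaof directive while cluster mode is enabled. A minimal reproduction outside Kubernetes (a sketch assuming a local redis-server binary; the host and file names are made up):

```
$ cat > /tmp/cluster-replicaof.conf <<'EOF'
cluster-enabled yes
replicaof some-master 6379
EOF
$ redis-server /tmp/cluster-replicaof.conf
# Fails at startup with the same fatal config file error as the slave logs:
# replicaof directive not allowed in cluster mode
```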
When using a custom configmap, it's mounted at /opt/bitnami/redis/etc on both the master and slave nodes, as you can check in the links below (a paraphrased sketch follows them):
https://github.com/helm/charts/blob/master/stable/redis/templates/redis-master-statefulset.yaml#L148
https://github.com/helm/charts/blob/master/stable/redis/templates/redis-master-statefulset.yaml#L170
https://github.com/helm/charts/blob/master/stable/redis/templates/redis-slave-deployment.yaml#L148
https://github.com/helm/charts/blob/master/stable/redis/templates/redis-slave-deployment.yaml#L158
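Roughly, the relevant part of those templates looks like this (a paraphrased sketch, not the verbatim template source; the ConfigMap name is actually rendered from the chart's fullname helper):

```
# Paraphrased sketch of the mount in the master/slave pod specs (not verbatim)
volumes:
  - name: config
    configMap:
      name: redis          # rendered from the chart's fullname helper in the real template
containers:
  - name: redis
    volumeMounts:
      - name: config
        mountPath: /opt/bitnami/redis/etc
```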
The image used by this chart uses the mounted redis.conf and skips the rest of the configuration (replication included) when it detects it; check the logic below:
https://github.com/bitnami/bitnami-docker-redis/blob/master/5.0/debian-9/rootfs/libredis.sh#L311
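In essence, the entrypoint behaves like this (a simplified sketch of the linked libredis.sh logic; the helper and variable names here are hypothetical):

```
# Simplified sketch of the linked libredis.sh logic (not the actual code)
if [ -f /opt/bitnami/redis/etc/redis.conf ]; then
    # A user-supplied redis.conf was mounted: use it as-is and skip all
    # generated configuration, including the master/slave replication setup.
    exec redis-server /opt/bitnami/redis/etc/redis.conf
else
    configure_replication                 # hypothetical helper for the generated config
    exec redis-server "$GENERATED_CONF"   # hypothetical path to the generated config
fi
```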
Therefore, when using a custom configuration file, there's no way to configure master/slave replication with the current approach. We should think about how to solve this. What do you think, @javsalgar?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Beep.
Any update on this issue?
No updates so far, sorry for the delay. It is in our scope, but we have not had time to work on a solution yet.
We'll keep you updated as soon as we have more news! Thanks for your patience and understanding.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Beep.
We still have the corresponding task in our backlog but I'm afraid there's no ETA to address it.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
sigh