Is this a request for help?:
Yes.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report
Version of Helm and Kubernetes:
Helm (helm version): Client v2.8.0, Server v2.9.1
Kubernetes (kubectl version): Client v1.9.7, Server v1.9.6-gke.2
Which chart:
stable/redis
What happened:
I wanted to configure a custom password. I tried both helm install stable/redis --values custom-values-production.yaml and helm install --name my-release --set password=secretpassword stable/redis, but neither worked.
What you expected to happen:
The Redis password should be taken from my custom values instead of the chart defaults.
How to reproduce it (as minimally and precisely as possible):
I don't know of a minimal reproduction; I installed the Redis chart right after installing Helm.
Anything else we need to know:
I searched existing issues but couldn't find anything related to this.
Can you try passing your values.yaml file with the -f flag, like below?
helm install stable/redis -f custom-values-production.yaml
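For reference, a minimal override file for this chart version would look something like the sketch below (the password value here is just a placeholder; usePassword and password are top-level keys in the 1.x chart):

# custom-values-production.yaml (minimal sketch)
usePassword: true
password: "my-custom-password"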
That didn't work either. Is there any way I can give you more info, like verbose output or something similar? By the way, I didn't do anything special; I just installed Helm and then Redis.
It should have worked; I'm not sure why it isn't working for you.
Can you provide a few more details, please?
Hmm, is there any way I can share Helm logs (or something like that) with you? I don't know Helm very well.
I use Google Kubernetes Engine. My machine is a MacBook, and I had installed minikube on it, but I don't think that's the problem. I installed Helm from Homebrew. I also deployed PostgreSQL with Helm and it works like a charm.
You can get the logs from your Tiller server for your helm command. Can you share those, or share your custom values file? I'll take a look.
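For example, something like this should pull the Tiller logs (assuming Tiller is running in the default kube-system namespace):

$ kubectl -n kube-system logs -l app=helm,name=tiller --tail=100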
I use https://github.com/helm/charts/blob/master/stable/redis/values-production.yaml with only a custom password set: helm install stable/redis -f custom-values-production.yaml
Here are the logs for the --set flag:
$ helm install stable/redis --set password=123123123 --debug
[debug] Created tunnel using local port: '54054'
[debug] SERVER: "127.0.0.1:54054"
[debug] Original chart version: ""
[debug] Fetched stable/redis to /Users/sems/.helm/cache/archive/redis-1.1.12.tgz
[debug] CHART PATH: /Users/sems/.helm/cache/archive/redis-1.1.12.tgz
NAME: doltish-quokka
REVISION: 1
RELEASED: Wed Sep 5 09:30:51 2018
CHART: redis-1.1.12
USER-SUPPLIED VALUES:
password: 123123123
COMPUTED VALUES:
args: null
image: bitnami/redis:4.0.8-r0
imagePullPolicy: IfNotPresent
metrics:
  annotations:
    prometheus.io/port: "9121"
    prometheus.io/scrape: "true"
  enabled: false
  image: oliver006/redis_exporter
  imagePullPolicy: IfNotPresent
  imageTag: v0.11
  resources: {}
networkPolicy:
  allowExternal: true
  enabled: false
nodeSelector: {}
password: 123123123
persistence:
  accessMode: ReadWriteOnce
  enabled: true
  path: /bitnami
  size: 8Gi
  subPath: ""
podAnnotations: {}
podLabels: {}
resources:
  requests:
    cpu: 100m
    memory: 256Mi
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
service:
  annotations: {}
  loadBalancerIP: null
serviceType: ClusterIP
tolerations: []
usePassword: true
HOOKS:
MANIFEST:
---
# Source: redis/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: doltish-quokka-redis
  labels:
    app: doltish-quokka-redis
    chart: "redis-1.1.12"
    release: "doltish-quokka"
    heritage: "Tiller"
type: Opaque
data:
  redis-password: "eEJJMVBKUmdMUA=="
---
# Source: redis/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: doltish-quokka-redis
  labels:
    app: doltish-quokka-redis
    chart: "redis-1.1.12"
    release: "doltish-quokka"
    heritage: "Tiller"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
---
# Source: redis/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: doltish-quokka-redis
  labels:
    app: doltish-quokka-redis
    chart: "redis-1.1.12"
    release: "doltish-quokka"
    heritage: "Tiller"
  annotations:
spec:
  type: ClusterIP
  ports:
  - name: redis
    port: 6379
    targetPort: redis
  selector:
    app: doltish-quokka-redis
---
# Source: redis/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: doltish-quokka-redis
  labels:
    app: doltish-quokka-redis
    chart: "redis-1.1.12"
    release: "doltish-quokka"
    heritage: "Tiller"
spec:
  template:
    metadata:
      labels:
        app: doltish-quokka-redis
    spec:
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
      - name: doltish-quokka-redis
        image: "bitnami/redis:4.0.8-r0"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: doltish-quokka-redis
              key: redis-password
        ports:
        - name: redis
          containerPort: 6379
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
        - name: redis-data
          mountPath: /bitnami
          subPath:
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: doltish-quokka-redis
LAST DEPLOYED: Wed Sep 5 09:30:51 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
doltish-quokka-redis ClusterIP 10.29.247.221 <none> 6379/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
doltish-quokka-redis 1 1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
doltish-quokka-redis-bd58d8c74-mr5zf 0/1 Pending 0 1s
==> v1/Secret
NAME TYPE DATA AGE
doltish-quokka-redis Opaque 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
doltish-quokka-redis Pending standard 1s
NOTES:
Redis can be accessed via port 6379 on the following DNS name from within your cluster:
doltish-quokka-redis.default.svc.cluster.local
To get your password run:
REDIS_PASSWORD=$(kubectl get secret --namespace default doltish-quokka-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis server:
1. Run a Redis pod that you can use as a client:
kubectl run --namespace default doltish-quokka-redis-client --rm --tty -i \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image bitnami/redis:4.0.8-r0 -- bash
2. Connect using the Redis CLI:
redis-cli -h doltish-quokka-redis -a $REDIS_PASSWORD
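One thing that stands out in the output above: decoding the redis-password from the generated Secret does not give the value passed with --set, which would explain ending up with a random password (assuming I'm reading the manifest right):

$ echo "eEJJMVBKUmdMUA==" | base64 --decode
xBI1PJRgLP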
I am looking into this.
@ahmetsemsettinozdemirden I tried this in my cluster and it works without any issue. I executed the following command:
helm install --name my-release --set password=secretpassword stable/redis
With this I could see that the Secret was created with the value secretpassword.
Then I tried to connect from my local machine using the commands from the post-install release notes:
REDIS_PASSWORD=secretpassword
kubectl port-forward --namespace conjur svc/my-release-redis-master 6379:6379 & redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
and it connected.
Could you please try again and let us know if it behaves otherwise?
@ahmetsemsettinozdemirden Please validate again with Redis chart version 3.10.0.
Sorry for the late response; still no change. I tried:
$ helm install --name my-release --set password=secretpassword stable/redis
$ kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 --decode
and I got a random password.
@ahmetsemsettinozdemirden try running helm repo update. Then install the chart by explicitly specifying the version: helm install --name my-release --set password=secretpassword stable/redis --version 3.10.0
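If it helps, you can also confirm which chart version your local cache resolves to after the update, e.g.:

$ helm search stable/redis
$ helm inspect chart stable/redis | grep version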
I did the repo update and then ran the install command, but after about a minute the Tiller pod stopped responding. Now I can't use Helm at all. Here are the logs:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install --name my-release --set password=secretpassword stable/redis --version 3.10.0
Error: transport is closing
$ helm list
Error: could not find a ready tiller pod
$ kubectl -n kube-system get pod tiller-deploy-75f5797b-rlbtm
NAME READY STATUS RESTARTS AGE
tiller-deploy-75f5797b-rlbtm 0/1 CrashLoopBackOff 199 16d
$ kubectl -n kube-system describe pod tiller-deploy-75f5797b-rlbtm
Name: tiller-deploy-75f5797b-rlbtm
Namespace: kube-system
Node: gke-soccergame-clust-n1-standard-4-po-57396fda-zj3f/10.132.0.3
Start Time: Tue, 04 Sep 2018 10:56:58 +0300
Labels: app=helm
name=tiller
pod-template-hash=31913536
Annotations: <none>
Status: Running
IP: 10.36.1.16
Controlled By: ReplicaSet/tiller-deploy-75f5797b
Containers:
tiller:
Container ID: docker://ab78b77c928f6d0403be2f684b170877a9c1b97db15a4656b8bb84cccb10e16a
Image: gcr.io/kubernetes-helm/tiller:v2.9.1
Image ID: docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:417aae19a0709075df9cc87e2fcac599b39d8f73ac95e668d9627fec9d341af2
Ports: 44134/TCP, 44135/TCP
State: Running
Started: Fri, 21 Sep 2018 08:48:17 +0300
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 21 Sep 2018 08:47:48 +0300
Finished: Fri, 21 Sep 2018 08:48:11 +0300
Ready: False
Restart Count: 199
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-cl76z (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
tiller-token-cl76z:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-cl76z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 49m (x148 over 11d) kubelet, gke-soccergame-clust-n1-standard-4-po-57396fda-zj3f Liveness probe failed: Get http://10.36.1.16:44135/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Normal Pulled 41m (x188 over 2d) kubelet, gke-soccergame-clust-n1-standard-4-po-57396fda-zj3f Container image "gcr.io/kubernetes-helm/tiller:v2.9.1" already present on machine
Warning Unhealthy 4m (x226 over 3d) kubelet, gke-soccergame-clust-n1-standard-4-po-57396fda-zj3f Readiness probe failed: Get http://10.36.1.16:44135/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning BackOff 1m (x1924 over 2d) kubelet, gke-soccergame-clust-n1-standard-4-po-57396fda-zj3f Back-off restarting failed container
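(If Tiller keeps crash-looping like this, the previous container's logs usually show why; something along these lines should surface them, using the pod name from above, and if it stays unhealthy, re-running helm init --upgrade can redeploy Tiller.)

$ kubectl -n kube-system logs tiller-deploy-75f5797b-rlbtm --previous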
After ten minutes and a lot of restarts, Tiller deployed successfully. It's working now, thanks a lot!