Charts: Helm install `timed out waiting for the condition` and reports release "Failed"

Created on 4 Mar 2019 · 29 comments · Source: helm/charts

Problem Description

Installing spinnaker with helm results in Error: timed out waiting for the condition, and the deployment is then reported as FAILED.

Reproduction steps

  1. Install chart

    $ helm install stable/spinnaker --name=spinnaker --namespace=spinnaker -f .\spinnaker\values.yml
    Error: timed out waiting for the condition
    
  2. Check helm ls reports the deployment as FAILED

    $ helm ls --namespace spinnaker
    NAME            REVISION        UPDATED                         STATUS  CHART           NAMESPACE
    spinnaker       1               Mon Mar  4 11:01:26 2019        FAILED  spinnaker-1.6.0 spinnaker
    
  3. kubectl get pods reports the pods running

    $ kubectl get pods -n spinnaker
    NAME                                READY     STATUS    RESTARTS   AGE
    spinnaker-install-using-hal-kf4xp   1/1       Running   1          6m
    spinnaker-minio-5dd5cfc985-l7ns2    1/1       Running   0          6m
    spinnaker-redis-master-0            1/1       Running   0          6m
    spinnaker-spinnaker-halyard-0       1/1       Running   0          6m
    
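A hedged diagnostic note (not part of the original report): the pod listing above includes spinnaker-install-using-hal-kf4xp, which is the chart's install hook, and the timeout usually means that hook never finished. Its logs and Job status tend to show what it is actually stuck on; the names below are the ones from the listing, so substitute your own:

    $ kubectl -n spinnaker logs spinnaker-install-using-hal-kf4xp
    $ kubectl -n spinnaker describe job spinnaker-install-using-hal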

Version Info

Output of helm version:

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

  • Kubernetes on-premise

Most helpful comment

I wanted to share what I've learned after going through this several times. From what I've seen, this is the generic error you will get for many reasons. One time, I had the wrong key name in my spinnaker-config.yaml and I saw the time out error. Another time, I forgot to add a volume that is referenced in that file, same error.

This command helps diagnose the real issue:
kubectl -n spinnaker get events --sort-by='{.lastTimestamp}'

Look for errors, then fix that. Once I started doing that, I was able to fix the root cause of this message.

All 29 comments

Moving this issue to helm/charts as this seems to be a question specific to an issue with the spinnaker chart.

I'm experiencing the exact same issue. K8s running on GKE.
ChartVersion: spinnaker-1.7.2, AppVersion: 1.11.6
Helm version: 2.12.1 (client/server)
K8s version: 1.12.5

Got this from Tiller:

[tiller-deploy-776b5cb874-g25kx] [tiller] 2019/03/06 18:25:25 warning: Release spinnaker post-upgrade spinnaker/templates/hooks/install-using-hal.yaml could not complete: timed out waiting for the condition

I have the same issue. I got the error.
[tiller] 2019/03/12 01:44:42 warning: Release my-release post-install spinnaker/templates/hooks/install-using-hal.yaml could not complete: timed out waiting for the condition
[tiller] 2019/03/12 01:44:42 warning: Release "my-release" failed post-install: timed out waiting for the condition
[storage] 2019/03/12 01:44:42 updating release "my-release.v1"
[tiller] 2019/03/12 01:44:42 failed install perform step: timed out waiting for the condition
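In both of these logs the failing resource is the spinnaker-install-using-hal post-install hook Job, so its own status and pod logs usually carry the detail that Tiller's log omits. A sketch, assuming the release was installed into the spinnaker namespace:

kubectl -n spinnaker describe job spinnaker-install-using-hal
kubectl -n spinnaker logs -l job-name=spinnaker-install-using-hal --tail=100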

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

Similar issue happening to me too on macOS 10.13.6 using Minikube. I've encountered the exact error from the issue description, as well as the similar errors shown below.

zjgoodma$ minikube version
minikube version: v1.0.0

zjgoodma$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:20:58Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

zjgoodma$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

zjgoodma$ helm init
Creating /Users/zjgoodma/.helm 
Creating /Users/zjgoodma/.helm/repository 
Creating /Users/zjgoodma/.helm/repository/cache 
Creating /Users/zjgoodma/.helm/repository/local 
Creating /Users/zjgoodma/.helm/plugins 
Creating /Users/zjgoodma/.helm/starters 
Creating /Users/zjgoodma/.helm/cache/archive 
Creating /Users/zjgoodma/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /Users/zjgoodma/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

zjgoodma$ helm install -n spinnaker stable/spinnaker --debug
[debug] Created tunnel using local port: '51535'

[debug] SERVER: "127.0.0.1:51535"

[debug] Original chart version: ""
[debug] Fetched stable/spinnaker to /Users/zjgoodma/.helm/cache/archive/spinnaker-1.8.1.tgz

[debug] CHART PATH: /Users/zjgoodma/.helm/cache/archive/spinnaker-1.8.1.tgz

Error: transport is closing

Meanwhile in another window I followed the tiller logs

zjgoodma$ kubectl -n kube-system logs tiller-deploy-c48485567-tfwjv --follow
[main] 2019/04/15 20:31:59 Starting Tiller v2.13.1 (tls=false)
[main] 2019/04/15 20:31:59 GRPC listening on :44134
[main] 2019/04/15 20:31:59 Probes listening on :44135
[main] 2019/04/15 20:31:59 Storage driver is ConfigMap
[main] 2019/04/15 20:31:59 Max history per release is 0
[tiller] 2019/04/15 20:50:10 preparing install for spinnaker
[storage] 2019/04/15 20:50:10 getting release history for "spinnaker"
[tiller] 2019/04/15 20:50:10 rendering spinnaker chart using values
2019/04/15 20:50:10 info: manifest "spinnaker/charts/minio/templates/post-install-create-bucket-job.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/secrets/additional-secrets.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/configmap/additional-scripts.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/redis-slave-svc.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/metrics-svc.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/minio/templates/ingress.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/configmap.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/secrets/gcs.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/redis-slave-deployment.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/ingress/gate.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/configmap/additional-configmaps.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/redis-rolebinding.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/networkpolicy.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/redis-role.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/ingress/deck.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/configmap/additional-profile-configmaps.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/minio/templates/statefulset.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/metrics-deployment.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/minio/templates/networkpolicy.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/templates/secrets/s3.yaml" is empty. Skipping.
2019/04/15 20:50:10 info: manifest "spinnaker/charts/redis/templates/redis-serviceaccount.yaml" is empty. Skipping.
[tiller] 2019/04/15 20:50:10 performing install for spinnaker
[tiller] 2019/04/15 20:50:10 executing 2 crd-install hooks for spinnaker
[tiller] 2019/04/15 20:50:10 hooks complete for crd-install spinnaker
[tiller] 2019/04/15 20:50:10 executing 2 pre-install hooks for spinnaker
[tiller] 2019/04/15 20:50:10 hooks complete for pre-install spinnaker
[storage] 2019/04/15 20:50:10 getting release history for "spinnaker"
[storage] 2019/04/15 20:50:10 creating release "spinnaker.v1"
[kube] 2019/04/15 20:50:10 building resources from manifest
[kube] 2019/04/15 20:50:10 creating 17 resource(s)
[tiller] 2019/04/15 20:50:10 executing 2 post-install hooks for spinnaker
[tiller] 2019/04/15 20:50:10 deleting post-install hook spinnaker-install-using-hal for release spinnaker due to "before-hook-creation" policy
[kube] 2019/04/15 20:50:10 Starting delete for "spinnaker-install-using-hal" Job
[kube] 2019/04/15 20:50:10 jobs.batch "spinnaker-install-using-hal" not found
[kube] 2019/04/15 20:50:10 building resources from manifest
[kube] 2019/04/15 20:50:10 creating 1 resource(s)
[kube] 2019/04/15 20:50:10 Watching for changes to Job spinnaker-install-using-hal with timeout of 5m0s
[kube] 2019/04/15 20:50:10 Add/Modify event for spinnaker-install-using-hal: ADDED
[kube] 2019/04/15 20:50:10 spinnaker-install-using-hal: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
[kube] 2019/04/15 20:50:10 Add/Modify event for spinnaker-install-using-hal: MODIFIED
[kube] 2019/04/15 20:50:10 spinnaker-install-using-hal: Jobs active: 1, jobs failed: 0, jobs succeeded: 0

After the final line was written, the tiller pod restarted

zjgoodma$ kubectl -n kube-system logs tiller-deploy-c48485567-tfwjv
[main] 2019/04/15 20:55:21 Starting Tiller v2.13.1 (tls=false)
[main] 2019/04/15 20:55:21 GRPC listening on :44134
[main] 2019/04/15 20:55:21 Probes listening on :44135
[main] 2019/04/15 20:55:21 Storage driver is ConfigMap
[main] 2019/04/15 20:55:21 Max history per release is 0

The messages saying manifest "spinnaker/templates/secrets/additional-secrets.yaml" is empty concerned me, though, so I deleted everything and started over, this time cloning the helm/charts repository and running helm install from that

zjgoodma$ pwd
/Users/zjgoodma/charts/stable/spinnaker

zjgoodma$ helm init
Creating /Users/zjgoodma/.helm 
Creating /Users/zjgoodma/.helm/repository 
Creating /Users/zjgoodma/.helm/repository/cache 
Creating /Users/zjgoodma/.helm/repository/local 
Creating /Users/zjgoodma/.helm/plugins 
Creating /Users/zjgoodma/.helm/starters 
Creating /Users/zjgoodma/.helm/cache/archive 
Creating /Users/zjgoodma/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /Users/zjgoodma/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

zjgoodma$ helm serve &
[2] 5835

zjgoodma$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

zjgoodma$ helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 2 charts
Downloading redis from repo https://kubernetes-charts.storage.googleapis.com/
Downloading minio from repo https://kubernetes-charts.storage.googleapis.com/
Deleting outdated charts

zjgoodma$ helm install . -n spinnaker --debug
[debug] Created tunnel using local port: '51682'

[debug] SERVER: "127.0.0.1:51682"

[debug] Original chart version: ""
[debug] CHART PATH: /Users/zjgoodma/charts/stable/spinnaker

Error: watch closed before UntilWithoutRetry timeout

2nd round Tiller logs

zjgoodma$ kubectl -n kube-system logs tiller-deploy-c48485567-tfwjv --follow
[tiller] 2019/04/15 21:06:14 preparing install for spinnaker
[storage] 2019/04/15 21:06:14 getting release history for "spinnaker"
[tiller] 2019/04/15 21:06:14 rendering spinnaker chart using values
2019/04/15 21:06:14 info: manifest "spinnaker/templates/secrets/additional-secrets.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/minio/templates/statefulset.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/minio/templates/post-install-create-bucket-job.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/configmap.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/minio/templates/networkpolicy.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/secrets/gcs.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/configmap/additional-profile-configmaps.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/minio/templates/ingress.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/secrets/s3.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/configmap/additional-configmaps.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/redis-serviceaccount.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/metrics-svc.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/configmap/additional-scripts.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/redis-role.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/metrics-deployment.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/ingress/gate.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/templates/ingress/deck.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/redis-slave-svc.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/redis-slave-deployment.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/redis-rolebinding.yaml" is empty. Skipping.
2019/04/15 21:06:14 info: manifest "spinnaker/charts/redis/templates/networkpolicy.yaml" is empty. Skipping.
[tiller] 2019/04/15 21:06:14 performing install for spinnaker
[tiller] 2019/04/15 21:06:14 executing 2 crd-install hooks for spinnaker
[tiller] 2019/04/15 21:06:14 hooks complete for crd-install spinnaker
[tiller] 2019/04/15 21:06:14 executing 2 pre-install hooks for spinnaker
[tiller] 2019/04/15 21:06:14 hooks complete for pre-install spinnaker
[storage] 2019/04/15 21:06:14 getting release history for "spinnaker"
[storage] 2019/04/15 21:06:14 creating release "spinnaker.v1"
[kube] 2019/04/15 21:06:14 building resources from manifest
[kube] 2019/04/15 21:06:14 creating 17 resource(s)
[tiller] 2019/04/15 21:06:14 executing 2 post-install hooks for spinnaker
[tiller] 2019/04/15 21:06:14 deleting post-install hook spinnaker-install-using-hal for release spinnaker due to "before-hook-creation" policy
[kube] 2019/04/15 21:06:14 Starting delete for "spinnaker-install-using-hal" Job
[kube] 2019/04/15 21:06:14 jobs.batch "spinnaker-install-using-hal" not found
[kube] 2019/04/15 21:06:14 building resources from manifest
[kube] 2019/04/15 21:06:14 creating 1 resource(s)
[kube] 2019/04/15 21:06:14 Watching for changes to Job spinnaker-install-using-hal with timeout of 5m0s
[kube] 2019/04/15 21:06:14 Add/Modify event for spinnaker-install-using-hal: ADDED
[kube] 2019/04/15 21:06:14 spinnaker-install-using-hal: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
[kube] 2019/04/15 21:06:14 Add/Modify event for spinnaker-install-using-hal: MODIFIED
[kube] 2019/04/15 21:06:14 spinnaker-install-using-hal: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
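A hedged observation on the two runs above: "transport is closing" and the GOAWAY error, together with the Tiller pod restarting mid-install, point at Tiller itself dying (for example being OOM-killed) rather than the chart misbehaving. Checking the pod's restart count and last termination reason can confirm that; the pod name is the one from the logs above:

kubectl -n kube-system get pod tiller-deploy-c48485567-tfwjv
kubectl -n kube-system describe pod tiller-deploy-c48485567-tfwjv   # look at Last State / Reason and Restart Count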

I still hit the same issue when installing the chart with the command helm install --name my-release stable/spinnaker --timeout 600 --debug:

helm install --name my-release stable/spinnaker --timeout 600 --debug
[debug] Created tunnel using local port: '38577'

[debug] SERVER: "127.0.0.1:38577"

[debug] Original chart version: ""
[debug] CHART PATH: /root/charts/stable/spinnaker

Error: timed out waiting for the condition

Are there any solutions for this?
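One hedged thing to double-check with --timeout (it may not be the cause here): in Helm 2 the flag takes a number of seconds, while in Helm 3 it takes a duration string, so a ten-minute timeout looks different in each:

# Helm 2: seconds
helm install --name my-release stable/spinnaker --timeout 600

# Helm 3: duration string
helm install my-release stable/spinnaker --timeout 10m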

Same issue for me with OSX and GKE.

# helm install -n cd stable/spinnaker -f spinnaker-config.yaml --timeout 600 --version 1.1.6 --wait
Error: release cd failed: timed out waiting for the condition

Same issue for me with Google Cloud Shell and GKE.

I ultimately resolved it by deleting everything in the namespace and starting over. I was able to repeat this process (delete everything, install) with versions 1.1.6, 1.7.2, and 1.8.1.

I wanted to share what I've learned after going through this several times. From what I've seen, this is the generic error you will get for many reasons. One time, I had the wrong key name in my spinnaker-config.yaml and I saw the time out error. Another time, I forgot to add a volume that is referenced in that file, same error.

This command helps diagnose the real issue:
kubectl -n spinnaker get events --sort-by='{.lastTimestamp}'

Look for errors, then fix that. Once I started doing that, I was able to fix the root cause of this message.
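A small addition (my own note, not from the comment above): warnings are usually the interesting events, and they can be filtered directly:

kubectl -n spinnaker get events --field-selector type=Warning --sort-by='{.lastTimestamp}'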

I've had this issue for the past few days, outside of spinnaker. The nodes have a delayed liveness check of 1 minute, with 1 minute interval. The result is that a rolling deploy to three nodes might take more than 5 minutes.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.


I'm getting this with cert-manager, strangely.

Having the same issue. The only workaround for me is to manually delete/purge the helm release and reinstall it.
Can this issue please be re-opened? There seem to be several users having problems with this.

I'm having trouble even purging: helm del --purge my-release
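If the purge itself hangs or times out, two things that may help on Helm 2 (a hedged sketch): give the delete a longer timeout, and check whether the release records, which Tiller stores as ConfigMaps in its namespace, are still lingering:

helm del --purge my-release --timeout 600
kubectl -n kube-system get configmaps -l OWNER=TILLER,NAME=my-release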

For me it was helpful to update to Helm v3. Don't know why, but it seems better now.

I wanted to share what I've learned after going through this several times. From what I've seen, this is the generic error you will get for many reasons. One time, I had the wrong key name in my spinnaker-config.yaml and I saw the time out error. Another time, I forgot to add a volume that is referenced in that file, same error.

This command helps diagnose the real issue:
kubectl -n spinnaker get events --sort-by='{.lastTimestamp}'

Look for errors, then fix that. Once I started doing that, I was able to fix the root cause of this message.

Thanks, really useful command for debugging. Clarifying for kube-noobs who stumble upon this issue (like myself): "spinnaker" is the namespace here. If you are not using namespaces, there is no need to provide the -n flag.

kubectl -n <namespace> get events --sort-by='{.lastTimestamp}'

In my case it was the persistent volume that caused the upgrade to fail. I am running Kubernetes using Docker Desktop on a local Windows machine. Every time I run an upgrade it tries to create another persistent volume, which causes the failure. Even when I run delete/purge the persistent volume is not deleted, and I have to go into the dashboard and MANUALLY delete the volume. Only after that does my upgrade/install work.
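For the leftover-volume case, the same cleanup can usually be done from the command line instead of the dashboard (names are placeholders, and whether the PV needs deleting depends on its reclaim policy):

kubectl -n <namespace> get pvc
kubectl -n <namespace> delete pvc <pvc-name>
kubectl get pv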

In case running kubectl get events does not return any helpful event, try checking the logs of your broken pod. It might just be crash looping.
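For the crash-loop case, something like this shows the pod status, the logs of the previous (crashed) container instance, and the recent events on the pod; pod names are placeholders:

kubectl -n <namespace> get pods
kubectl -n <namespace> logs <pod-name> --previous
kubectl -n <namespace> describe pod <pod-name>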

I wanted to share what I've learned after going through this several times. From what I've seen, this is the generic error you will get for many reasons. One time, I had the wrong key name in my spinnaker-config.yaml and I saw the time out error. Another time, I forgot to add a volume that is referenced in that file, same error.
This command helps diagnose the real issue:
kubectl -n spinnaker get events --sort-by='{.lastTimestamp}'
Look for errors, then fix that. Once I started doing that, I was able to fix the root cause of this message.

Thanks. Really useful command for debugging. Clarifying for kube-noobs who stumble upon this issue (like myself) "spinnaker" is the namespace here. If you are not using namespaces then no need to provide -n flag.

kubectl -n <namespace> get events --sort-by='{.lastTimestamp}'

This saved my day! Thanks for the command

same here

I've had this issue for the past few days, outside of spinnaker. The nodes have a delayed liveness check of 1 minute, with 1 minute interval. The result is that a rolling deploy to three nodes might take more than 5 minutes.

I think so.
The pods are updated fine, but helm upgrade fails. It seems like it gets stuck at some point during helm upgrade.

Thanks for the debug command!!

kubectl -n <namespace> get events --sort-by='{.lastTimestamp}'

In my case the cause of the error was a wrong image name being supplied due to template parsing.

4m32s       Normal    SuccessfulCreate    replicaset/app-f54db5c54   Created pod: app-f54db5c54-xpbhn
4m32s       Normal    ScalingReplicaSet   deployment/app             Scaled up replica set app-f54db5c54 to 1
3m35s       Warning   Failed              pod/app-f54db5c54-xpbhn    Error: InvalidImageName
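When the events point at something like InvalidImageName, it can help to inspect what the templates actually rendered rather than the values you thought you passed. A sketch with placeholder names:

helm get manifest <release-name> | grep image:
helm template <chart-dir> -f values.yaml | grep image: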

Checking the events was really helpful! It showed me that my readiness probe (watching for a file to be created) was failing. The file was never created. And since I had --wait on my helm upgrade call, it was sitting there waiting for the pod to become ready, eventually timing out!
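In that situation kubectl describe on the pod shows the failing probe directly ("Readiness probe failed: ..."), which is quicker than waiting for the helm timeout; the pod name is a placeholder:

kubectl -n <namespace> describe pod <pod-name>
kubectl -n <namespace> get pods -w    # watch whether the pod ever reaches READY 1/1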

It happened to me twice; both times I increased the memory available on my Kubernetes worker nodes (and also the cores, which might be unrelated) and it went away. On my local cluster I went from 4Gi on each worker up to about 6 to 8Gi; in GKE on GCP I went from 11.5Gi (I think this is the combined memory of all 3 worker nodes) up to about 48Gi.

For me it was a wrongly named config map

Thank you for this command
kubectl -n <namespace> get events --sort-by='{.lastTimestamp}'

In my case "TooManyLoadBalancers: Exceeded quota of account XXXXXXXXXXX"

I'm sorry, why doesn't helm just pipe events to stdout? Or some sort of informative log?

@ekhaydarov, I think helm is piping an error message, but not the complete one. I searched for the timed out waiting for the condition error in the helm repo but couldn't find it.

But there is such an error in the kubernetes repo. It seems to be a generic error used in many different places in Kubernetes. I guess the problem here is that Helm only prints this message instead of one with more context (the multiple levels of errors that end with the one above and carry more information about the actual problem), which you can see in the events.
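Until that improves, the closest thing to more context from helm itself is probably the release status plus the events command shared earlier in this thread (a suggestion, not a guarantee it covers every failure mode):

helm status my-release
kubectl -n <namespace> get events --sort-by='{.lastTimestamp}'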
