Charts: [stable/cluster-autoscaler] ASGs: NoCredentialProviders: no valid providers in chain.

Created on 1 Aug 2018 · 6 comments · Source: helm/charts

Version of Helm and Kubernetes:

$ helm version

Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"archive", BuildDate:"2018-06-29T00:44:47Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
[stable/cluster-autoscaler]

What happened:
I believe I have all the IAM permissions and tags in place, and I invoked the chart with:

helm install stable/cluster-autoscaler --name hdf-autoscaler --set autoDiscovery.clusterName=HdfLab-Kubernetes

When I tail the autoscaler pod logs, I see:

I0801 11:59:31.052288       1 leaderelection.go:199] successfully renewed lease default/cluster-autoscaler
I0801 11:59:33.057590       1 leaderelection.go:199] successfully renewed lease default/cluster-autoscaler
E0801 11:59:33.803653       1 aws_manager.go:237] Failed to fetch ASGs: cannot autodiscover ASGs: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
F0801 11:59:33.803674       1 cloud_provider_builder.go:137] Failed to create AWS Manager: cannot autodiscover ASGs: NoCredentialProviders: no valid providers in chain. Deprecated.

Any ideas about what is wrong or how to debug?
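One quick way to confirm that the SDK credential chain really is empty (which is what the NoCredentialProviders error describes) is to query the EC2 metadata endpoint from a throwaway pod. This is only a minimal sketch: it checks the node's instance profile, not anything kube2iam would inject, and the busybox image is just an arbitrary choice for the check.

# Sketch only: pod name and busybox image are arbitrary; this checks the node's instance profile, not kube2iam.
kubectl run metadata-check --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://169.254.169.254/latest/meta-data/iam/security-credentials/

An empty response means no instance role is attached to the node; a role name in the response means a role exists but may still lack the autoscaling permissions.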


Most helpful comment

I fixed the issue. I thought I could use the instance role's IAM permissions directly, but not only did I have to use kube2iam, I also hadn't allowed sts:AssumeRole for the master instance role.

All 6 comments

I'm having the same issue with this Helm chart. I thought it was an IAM issue, so I set the proper IAM role via kube2iam, but it didn't resolve the issue.
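For anyone trying the same route: with kube2iam the role usually has to be attached through a pod annotation that the chart renders. A minimal sketch, assuming the chart exposes a podAnnotations value and using a hypothetical role name (check the chart's values.yaml before relying on either), with the release and cluster names taken from the original report:

# Sketch only: the podAnnotations key and the role name k8s-cluster-autoscaler are assumptions.
helm upgrade hdf-autoscaler stable/cluster-autoscaler \
  --set autoDiscovery.clusterName=HdfLab-Kubernetes \
  --set 'podAnnotations.iam\.amazonaws\.com/role=k8s-cluster-autoscaler'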

I have the same issue with k8s 1.9. I resolved it by downgrading the autoscaler version to 1.1.3 (as recommended here: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md#releases). I also used kube2iam with a specific IAM role (as described in the README).
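A minimal sketch of that downgrade through the chart, assuming image.tag is the value that pins the container version (verify against the chart's values.yaml) and reusing the release and cluster names from the original report:

# Sketch only: image.tag is assumed to be the chart value controlling the autoscaler version.
helm upgrade hdf-autoscaler stable/cluster-autoscaler \
  --set autoDiscovery.clusterName=HdfLab-Kubernetes \
  --set image.tag=v1.1.3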

Hmm, I have k8s 1.10, and with both 1.2.2 and 1.1.3 I see the same errors. I also tried it with and without the kube2iam role.

I fixed the issue. I thought I could use the instance role's IAM permissions directly, but not only did I have to use kube2iam, I also hadn't allowed sts:AssumeRole for the master instance role.
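For anyone hitting the same wall, here is a rough sketch of the trust-relationship half of that fix: the role kube2iam hands to the autoscaler pod must trust the master instance role. Every name below (role names, account id, policy file) is a hypothetical placeholder, and the other half of the fix, granting sts:AssumeRole in the master role's own policy, is not shown:

# Sketch only: role names and the account id are hypothetical placeholders.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/masters.hdflab-kubernetes" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam update-assume-role-policy \
  --role-name k8s-cluster-autoscaler \
  --policy-document file://trust.json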

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.
