Following the document to run kubeadm init --cloud-provider aws
http://kubernetes.io/docs/admin/kubeadm/
The following is not clear: what exactly should be added?
On an AWS EC2 instance:
root@ip-10-43-0-30:~# uname -a
Linux ip-10-43-0-30 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
If these steps are not done, it gets stuck: running kubeadm init --cloud-provider aws hangs after attempting a test deployment.
+1
OS: CentOS 7.2
Kubernetes version: 1.5.1
kubeadm init gets stuck waiting for the control plane to become ready when using --cloud-provider=aws.
The kubeadm doc is not clear regarding this point.
The documentation has been improved and updated; please read the cloud provider section.
IIRC, cloud-config is not required for aws.
Thanks, could you please share the link for the doc you're referring to? The kubeadm guide is still asking for manual configuration of the cloud-config file:
http://kubernetes.io/docs/getting-started-guides/kubeadm/
Cloudprovider integrations (experimental)
Enabling specific cloud providers is a common request; this currently requires manual configuration and is therefore not yet supported. If you wish to do so, edit the kubeadm dropin for the kubelet service (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) on all nodes, including the master. If your cloud provider requires any extra packages installed on the host, for example for volume mounting/unmounting, install those packages.
Specify the --cloud-provider flag to kubelet and set it to the cloud of your choice. If your cloudprovider requires a configuration file, create the file /etc/kubernetes/cloud-config on every node and set the values your cloud requires. Also append --cloud-config=/etc/kubernetes/cloud-config to the kubelet arguments.
Lastly, run kubeadm init --cloud-provider=xxx to bootstrap your cluster with cloud provider features.
This workflow is not yet fully supported; however, we hope to make it extremely easy to spin up clusters with cloud providers in the future. (See this proposal for more information.) The Kubelet Dynamic Settings feature may also help to fully automate this process in the future.
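Concretely, on AWS that kubelet edit amounts to appending the flag to the existing ExecStart line in the dropin and restarting the service. A minimal sketch — the existing flags on the ExecStart line vary by kubeadm version, so the placeholder below stands in for whatever your dropin already contains; only the appended flag is the point:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (edited in place)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet <existing flags unchanged> --cloud-provider=aws
    # append --cloud-config=/etc/kubernetes/cloud-config as well only if you
    # created that file; per this thread it appears optional for AWS

    # then pick up the change:
    systemctl daemon-reload
    systemctl restart kubelet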
http://kubernetes.io/docs/admin/kubeadm/
Currently, kubeadm init does not provide autodetection of cloud provider. This means that load balancing and persistent volumes are not supported out of the box. You can specify a cloud provider using --cloud-provider. Valid values are the ones supported by controller-manager, namely "aws", "azure", "cloudstack", "gce", "mesos", "openstack", "ovirt", "rackspace", "vsphere". In order to provide additional configuration for the cloud provider, you should create a /etc/kubernetes/cloud-config file manually, before running kubeadm init. kubeadm automatically picks those settings up and ensures other nodes are configured correctly. You must also set the --cloud-provider and --cloud-config parameters yourself by editing the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file appropriately.
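For reference, the AWS provider reads an INI-style file from that path. A minimal sketch, with field names taken from the AWS cloud provider's config as I understand it (verify against your Kubernetes version) and placeholder values — and again, per this thread the file may not be needed at all for AWS:

    # /etc/kubernetes/cloud-config (sketch; values are placeholders)
    [Global]
    Zone=us-east-1a
    KubernetesClusterTag=your_cluster_name_here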
Can we get further clarification? The referenced document still says that you should create a cloud-config.json, but this issue thread says that it is not required for AWS. Does that mean that passing --cloud-provider=aws is enough? I'm assuming there are also some IAM roles required to make that work; if so, where is that documented?
I was able to get this working, mostly, by giving the AmazonEC2FullAccess policy to a role that was granted to the EC2 node. I think my AZ was detected incorrectly because I got a NoVolumeZoneConflict: the EBS volume was created for my PVC, but when I checked the AWS console I found it in a different zone. (I'd still be debugging, but someone appears to have maxed out my org's AWS request limits for the day. How rude!)
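If you would rather not grant AmazonEC2FullAccess, a narrower instance-profile policy along the following lines is often enough for the master. This is a sketch, not an exhaustive list; the exact actions needed depend on which features (EBS volumes, ELBs, routes) you use:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:Describe*",
            "ec2:AttachVolume",
            "ec2:DetachVolume",
            "ec2:CreateVolume",
            "ec2:DeleteVolume",
            "ec2:CreateTags",
            "ec2:CreateSecurityGroup",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
            "ec2:CreateRoute",
            "ec2:DeleteRoute",
            "elasticloadbalancing:*"
          ],
          "Resource": "*"
        }
      ]
    }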
I will try to resolve this by manually adding an annotation to the PVC with the correct setting for failure-domain.beta.kubernetes.io/zone, or maybe I should set up my StorageClass properly with that annotation, not sure. I heard there are "lots of other things" to do on the AWS side to make this work correctly. If this can be resolved with a cloud-config file it would be great to get an example.
The issue I had with NoVolumeZoneConflict appears to be this one: kubernetes/kubernetes/issues/39178
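One way to avoid the zone mismatch is to pin dynamic provisioning to a zone via a StorageClass instead of annotating each PVC. A sketch — the name and zone are placeholders, and on newer versions the apiVersion is storage.k8s.io/v1:

    # StorageClass pinning AWS EBS provisioning to one zone (sketch)
    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: gp2-us-east-1a
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
      zone: us-east-1a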
@jbeda has documented how to use the AWS cloud provider here: https://docs.google.com/document/d/17d4qinC_HnIwrK0GHnRlD1FKkTNdN__VO4TH9-EzbIY/edit
Also, please use v1.6 and try to reproduce...
I went through the documentation and tried to mix and match things to get it to work, to no avail. :/
@srflaxu40 I'm interested to help if it's broken. How is it failing for you? I can't get a look at it right now, but I'm happy to try spinning a node up later against the latest kubeadm.
The critical parts as I remember are:
kubeadm init --cloud-provider=aws
kubelet --cloud-provider=aws (you need to edit the systemd unit that kubeadm generated)
All resources belonging to the cluster need a KubernetesCluster=your_cluster_name_here tag on AWS, including, I think, the VPC and/or security group (see the CLI sketch below).
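For that tagging step, something like the following works; the resource IDs and cluster name are placeholders:

    # tag the cluster's VPC, subnet and security group (IDs are placeholders)
    aws ec2 create-tags \
        --resources vpc-0123456 subnet-0123456 sg-0123456 \
        --tags Key=KubernetesCluster,Value=your_cluster_name_here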
I had a good comment with all of the needed parts on another related issue that went through the steps back in the 1.5.2 days; I'll see if I can find it in case it helps you before I get around to it.
Edit: This seems to be the comment that I mentioned with the details in it. I don't see anything other than these three pieces in the note I left for myself about how to do it... of course you also have to get your IAM roles right.