This issue is for building a checklist of what we need to do before we can move to kubeadm as the default deployment.
- `kube_basic_auth` works. It requires volume mounts to the apiserver. #3351
- `kube_token_auth` works. It requires volume mounts to the apiserver. #3351
- `profiling`
- `enable-aggregator-routing`
- `repair-malformed-updates`
- `anonymous-auth`

Please feel free to add comments and I'll update the list here.
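For the items above that need extra apiserver flags or volume mounts (e.g. `kube_basic_auth`), kubeadm exposes `extraArgs` and `extraVolumes` on the apiserver section of its config, so they shouldn't require patching generated manifests. A minimal sketch (the `apiVersion` depends on the kubeadm release, and the file paths here are hypothetical examples, not Kubespray's actual ones):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # becomes --basic-auth-file on the kube-apiserver (hypothetical path)
    basic-auth-file: /etc/kubernetes/users/known_users.csv
    anonymous-auth: "false"
    profiling: "false"
    enable-aggregator-routing: "true"
  extraVolumes:
    # mount the directory holding the auth file into the static pod
    - name: basic-auth
      hostPath: /etc/kubernetes/users
      mountPath: /etc/kubernetes/users
      readOnly: true
```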
Currently you're only able to go from a non-kubeadm deployment to a kubeadm deployment. We need to update the playbook so you can go from a kubeadm deployment to a non-kubeadm deployment.
Why?
@woopstar Thanks for starting this!
Question I think we need to answer: How much variance do we actually need between ansible templated and kubeadm generated static pod manifests?
If it turns out we can use kubeadm to generate a manifest that will work in place for either provisioning path, could we retire the ansible templates and always use kubeadm? See: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-controlplane
That's how I'd like to approach this transformation effort across the project, phase by phase. In cases where there's no advantage to using an ansible native approach compared to an equivalent kubeadm phase, I think we should always favor kubeadm and remove redundant options.
> Currently you're only able to go from a non-kubeadm deployment to a kubeadm deployment. We need to update the playbook so you can go from a kubeadm deployment to a non-kubeadm deployment.
>
> Why?
This was something @mattymo wanted before we switched.
My opinion is that we do not want variation. There should be no need to maintain two versions of the manifest files. If we want to alter something in them, we should step back, ask "why do we want to do so?", and then maybe submit it as a PR to kubeadm itself so we only maintain templates in one place.
Kubeadm does, in fact, create the manifest files in Kubespray now when you deploy, so you only need to keep the kubeadm config files in "sync", and I like that a lot.
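The "keep only the kubeadm config in sync" flow could look roughly like this in Ansible: the playbook templates just the kubeadm config, and kubeadm owns manifest generation. A hedged sketch (the template name is hypothetical, and the `alpha phase` subcommand layout varies by kubeadm version):

```yaml
# Hypothetical sketch of the single-source-of-truth flow: ansible only
# renders the kubeadm config; kubeadm generates the static pod manifests.
- name: Render kubeadm configuration
  template:
    src: kubeadm-config.yaml.j2        # hypothetical template name
    dest: /etc/kubernetes/kubeadm-config.yaml

- name: Let kubeadm (re)generate the control-plane static pod manifests
  command: >-
    kubeadm alpha phase controlplane all
    --config /etc/kubernetes/kubeadm-config.yaml
```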
Updated the checklist.
Since we seem to all agree that kubeadm only phases are the path forward:
- Do we want to get the ball rolling by making the kubeadm CLI download tasks non-optional?
- Should we always run the non-kubeadm -> kubeadm conversion tasks in `roles/kubernetes/master` and other roles, and update the non-kubeadm default paths? This would allow us to always use the kubeadm conventions described here.
I guess if we rename our manifests and files to match the scheme provided by kubeadm, the transition between non-kubeadm and kubeadm is fairly easy, as the files will just be overwritten with either of the configs?
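Matching kubeadm's scheme would essentially mean rendering our templates to the same filenames kubeadm uses under `/etc/kubernetes/manifests/`. A hedged Ansible sketch of that idea (template names are hypothetical; the destination filenames are kubeadm's documented static pod paths):

```yaml
# Hypothetical sketch: render templates to the filenames kubeadm uses,
# so a later kubeadm run simply overwrites them in place (and vice versa).
- name: Render control-plane manifests to kubeadm's paths
  template:
    src: "{{ item }}.manifest.j2"                      # hypothetical names
    dest: "/etc/kubernetes/manifests/{{ item }}.yaml"  # kubeadm's fixed paths
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kube-scheduler
```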
> a kubeadm deployment to a non-kubeadm deployment
I still don't have a clear answer to this.
I would skip that if it's too complex, as we probably don't want to maintain non-kubeadm after the migration.
> a kubeadm deployment to a non-kubeadm deployment
>
> I still don't have a clear answer to this.
>
> I would skip that if it's too complex, as we probably don't want to maintain non-kubeadm after the migration.
I agree. But I guess @mattymo should answer this, as it was his initial request.
Just let us know how we can help.
We need to test and verify that the cloud providers are working as expected with kubeadm. I think that is the last part.
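For the cloud provider verification, the relevant knobs in the kubeadm config are the `cloud-provider` and `cloud-config` extra args on both the apiserver and the controller manager. A hedged sketch of what a test configuration might look like (the `apiVersion` depends on the kubeadm release, and the provider name and config path are illustrative examples only):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: openstack                 # example provider
    cloud-config: /etc/kubernetes/cloud_config  # hypothetical path
controllerManager:
  extraArgs:
    cloud-provider: openstack
    cloud-config: /etc/kubernetes/cloud_config
```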
Can we check that scaling up the cluster (adding a new master and/or node) with kubeadm works?
> Currently you're only able to go from a non-kubeadm deployment to a kubeadm

Is this already handled? How and where?
Does anyone know how to convert a non-kubeadm installation to a kubeadm installation? I can't find any info. We currently use Kubernetes 1.16.8 with a Kubespray fork that supports non-kubeadm deployments: https://github.com/southbridgeio/kubespray