Kubespray: Move to kubeadm as default deployment

Created on 13 Sep 2018 · 13 Comments · Source: kubernetes-sigs/kubespray

This issue is a checklist of what we need to do before we can move to kubeadm as the default deployment.

  • [x] Make sure kube_basic_auth works. It requires volume mounts to the apiserver. #3351
  • [x] Make sure kube_token_auth works. It requires volume mounts to the apiserver. #3351
  • [x] Make sure node-labels are up to date on kubeadm deployment.
  • [x] Make sure cloud_provider works on kubeadm deployments #3766
  • [x] Make sure settings from our templates are set in the kubeadm deployment config. #3344 #3383
    The following settings are currently missing:
  • [x] Currently you're only able to go from a non-kubeadm deployment to a kubeadm deployment. We need to update the playbook so you can go from a kubeadm deployment to a non-kubeadm deployment
  • [x] Add deprecation warnings about non-kubeadm deployments. #3759
  • [x] If we release Kubespray with 1.11.x before kubeadm is the default deployment, our templates for the apiserver, controller, scheduler etc. need to be updated with PriorityClasses. PR #3361
  • [ ] Support fine-grained binary sources other than hyperkube with kubeadm installs #3359

Please feel free to add comments and I'll update the list here.

help wanted

Most helpful comment

> @woopstar Thanks for starting this!
>
> Question I think we need to answer: how much variance do we actually need between ansible-templated and kubeadm-generated static pod manifests?
>
> If it turns out we can use kubeadm to generate a manifest that will work in place for either provisioning path, could we retire the ansible templates and always use kubeadm? See: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-controlplane
>
> That's how I'd like to approach this transformation effort across the project, phase by phase. In cases where there's no advantage to using an ansible-native approach compared to an equivalent kubeadm phase, I think we should always favor kubeadm and remove redundant options.

My opinion is that we do not want variation. There should be no need to maintain two versions of the manifest files. If we want to alter something in them, we should step back and ask why, and then perhaps submit a PR to kubeadm itself so we only maintain the templates in one place.
Kubeadm does, in fact, create the manifest files in Kubespray now when you deploy, so you only need to keep the kubeadm config files in sync, and I like that a lot.

All 13 comments

> Currently you're only able to go from a non-kubeadm deployment to a kubeadm deployment. We need to update the playbook so you can go from a kubeadm deployment to a non-kubeadm deployment

Why?

@woopstar Thanks for starting this!

Question I think we need to answer: how much variance do we actually need between ansible-templated and kubeadm-generated static pod manifests?

If it turns out we can use kubeadm to generate a manifest that will work in place for either provisioning path, could we retire the ansible templates and always use kubeadm? See: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-controlplane

That's how I'd like to approach this transformation effort across the project, phase by phase. In cases where there's no advantage to using an ansible-native approach compared to an equivalent kubeadm phase, I think we should always favor kubeadm and remove redundant options.
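To make the idea concrete: if a single kubeadm config can drive the control-plane manifests, the settings Kubespray currently templates would collapse into one file. A minimal sketch below, assuming kubeadm's `ClusterConfiguration` v1beta1 schema (the schema version changed across kubeadm releases, and the `extraArgs` values here are illustrative, not Kubespray defaults):

```shell
# Write a minimal kubeadm config; apiServer.extraArgs is where flags that
# Kubespray currently templates into the apiserver manifest could live.
cat > /tmp/kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  extraArgs:
    # illustrative flag only -- any templated apiserver setting goes here
    audit-log-maxage: "30"
EOF

# With kubeadm installed, a phase command regenerates the static pod
# manifests under /etc/kubernetes/manifests from that single config
# (in kubeadm 1.11/1.12 this lived under `kubeadm alpha phase controlplane`):
#   kubeadm init phase control-plane all --config /tmp/kubeadm-config.yaml
echo "wrote /tmp/kubeadm-config.yaml"
```

With this approach, a Kubespray variable change becomes an edit to one config file followed by re-running the phase, rather than re-templating a manifest.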

> Currently you're only able to go from a non-kubeadm deployment to a kubeadm deployment. We need to update the playbook so you can go from a kubeadm deployment to a non-kubeadm deployment
>
> Why?

This was something @mattymo wanted before we switched.

> @woopstar Thanks for starting this!
>
> Question I think we need to answer: how much variance do we actually need between ansible-templated and kubeadm-generated static pod manifests?
>
> If it turns out we can use kubeadm to generate a manifest that will work in place for either provisioning path, could we retire the ansible templates and always use kubeadm? See: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-controlplane
>
> That's how I'd like to approach this transformation effort across the project, phase by phase. In cases where there's no advantage to using an ansible-native approach compared to an equivalent kubeadm phase, I think we should always favor kubeadm and remove redundant options.

My opinion is that we do not want variation. There should be no need to maintain two versions of the manifest files. If we want to alter something in them, we should step back and ask why, and then perhaps submit a PR to kubeadm itself so we only maintain the templates in one place.
Kubeadm does, in fact, create the manifest files in Kubespray now when you deploy, so you only need to keep the kubeadm config files in sync, and I like that a lot.

Updated the checklist.

Since we seem to all agree that kubeadm only phases are the path forward:

  1. Do we want to get the ball rolling by making the kubeadm CLI download tasks non-optional?
  2. Should we always run the non-kubeadm -> kubeadm conversion tasks in roles/kubernetes/master and other roles and update the non-kubeadm default paths? This would allow us to always use the kubeadm conventions described here.

> Updated the checklist.
>
> Since we seem to all agree that kubeadm only phases are the path forward:
>
>   1. Do we want to get the ball rolling by making the kubeadm CLI download tasks non-optional?
>   2. Should we always run the non-kubeadm -> kubeadm conversion tasks in roles/kubernetes/master and other roles and update the non-kubeadm default paths? This would allow us to always use the kubeadm conventions described here.

I guess if we rename our manifests and files to match the scheme provided by kubeadm, the transition between non-kubeadm and kubeadm is fairly easy, as the files will just be overwritten by either of the configs?
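The renaming idea works because kubeadm's output locations are fixed conventions. A quick sketch of the paths a non-kubeadm deployment would need to adopt so that either provisioning path simply overwrites the other's files (these are kubeadm's standard static pod manifest locations; etcd is omitted here since Kubespray manages etcd outside the static pod manifests):

```shell
# kubeadm always writes control-plane static pod manifests to this
# directory; if the ansible-templated deployment used the same names,
# switching modes would just overwrite these files in place.
MANIFEST_DIR=/etc/kubernetes/manifests
for component in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "${MANIFEST_DIR}/${component}.yaml"
done
```

The kubelet watches that directory, so whichever tool last wrote a manifest wins, which is exactly the overwrite behavior described above.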

> a kubeadm deployment to a non-kubeadm deployment

I still don't have a clear answer on this.
I would skip it if it's too complex, as we probably don't want to maintain non-kubeadm after the migration.

> > a kubeadm deployment to a non-kubeadm deployment
>
> I still don't have a clear answer on this.
> I would skip it if it's too complex, as we probably don't want to maintain non-kubeadm after the migration.

I agree. But I guess @mattymo should answer this, as it was his initial request.

Just let us know how we can help.

We need to test and verify that the cloud providers are working as expected with kubeadm. I think that is the last part.

Can we check that scaling up the cluster (adding a new master and/or node) with kubeadm works?

> Currently you're only able to go from a non-kubeadm deployment to a kubeadm

Is this already handled? How and where?

Does anyone know how to convert a non-kubeadm installation to a kubeadm installation? I can't find any info. We currently use Kubernetes 1.16.8 with a Kubespray fork that supports non-kubeadm deployments: https://github.com/southbridgeio/kubespray
