What steps did you take and what happened:
In the KubeadmControlPlane section of the book, it is stated that recreating the machine template is required for an upgrade:
> Since MachineTemplate resources are immutable, the recommended approach is to […]
However, I am able to update the DockerMachineTemplate in place, i.e. immutability is not enforced.
/kind bug
This is definitely strange, let's fix that up!
/milestone v0.3.4
/help
@vincepri:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
Is this an issue against KubeadmControlPlane or against cluster-api-provider-vsphere, which isn't enforcing that VSphereMachineTemplate is immutable?
It seems to me that KubeadmControlPlane is operating as expected here by detecting the Spec.Version change, but not the change to the VSphereMachineTemplate.
What Jason's describing matches my understanding: providers SHOULD make their templates immutable, because CAPI will treat them as such by not inspecting them to see if they've changed.
We have gotten ourselves out of a pickle at least once by temporarily turning off immutability (by deleting the webhook config), making a change, and then coaxing the KCP into doing an upgrade, as you describe, but that certainly felt like an "at our own risk" activity to us and not something CAPI supports.
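For reference, a minimal sketch of what such an immutability-enforcing webhook could look like in CAPD, using controller-runtime's webhook.Validator interface. The DockerMachineTemplate type and its Spec.Template.Spec layout are assumed from the existing API package; this is only an illustration, not the actual implementation:

```go
// Sketch only: assumes the existing DockerMachineTemplate type in this API package.
package v1alpha3

import (
	"reflect"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

// SetupWebhookWithManager registers the validating webhook with the manager.
func (m *DockerMachineTemplate) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(m).
		Complete()
}

var _ webhook.Validator = &DockerMachineTemplate{}

// ValidateCreate accepts any new template.
func (m *DockerMachineTemplate) ValidateCreate() error { return nil }

// ValidateUpdate rejects any change to the template spec, making the
// resource effectively immutable after creation.
func (m *DockerMachineTemplate) ValidateUpdate(oldRaw runtime.Object) error {
	old, ok := oldRaw.(*DockerMachineTemplate)
	if !ok {
		return apierrors.NewBadRequest("expected a DockerMachineTemplate")
	}
	if !reflect.DeepEqual(m.Spec.Template.Spec, old.Spec.Template.Spec) {
		return apierrors.NewBadRequest("DockerMachineTemplate.Spec.Template.Spec is immutable")
	}
	return nil
}

// ValidateDelete accepts deletions.
func (m *DockerMachineTemplate) ValidateDelete() error { return nil }
```

Deleting the webhook config, as described above, simply stops these checks from running, which is why the "edit the template anyway" workaround works.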
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/636 should solve this for CAPV
closing this as dupe
/close
@yastij: Closing this issue.
Reopening this for CAPD. There is not much to change in a DockerMachineTemplate though, as only docker.sock is in the template.
/reopen
@sedefsavas: Reopened this issue.
This requires getting webhooks in place for CAPD...
/milestone v0.3.x
/priority backlog
Will the Docker provider webhook be a separate deployment, like the rest of the providers? Or can we bundle it together with the manager?
cc @fabriziopandini
/assign
Since the Docker provider does not have conversion webhooks, there is no need for a separate deployment for the webhook, so I will add the webhook to the Docker controller manager.
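As a rough sketch of what that could look like (the import path and manager options are placeholders for CAPD's actual main.go; the point is just that the webhook server runs inside the same manager process, so no separate deployment is needed):

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"

	// Placeholder import path: adjust to wherever CAPD's API package actually lives.
	infrav1 "sigs.k8s.io/cluster-api/test/infrastructure/docker/api/v1alpha3"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		// ... existing scheme, metrics, and leader-election options ...
	})
	if err != nil {
		os.Exit(1)
	}

	// Register the validating/defaulting webhooks with the same manager,
	// so they are served by the existing controller-manager deployment.
	if err := (&infrav1.DockerMachineTemplate{}).SetupWebhookWithManager(mgr); err != nil {
		os.Exit(1)
	}

	// ... existing controller setup ...

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```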
@sedefsavas
Are there any blockers to getting it implemented in the same way as all the other providers?
I really would like CAPD to be as similar as possible to all the other providers; otherwise, we increase the risk of getting false signals in our dev/e2e workflow.
E.g. clusterctl assumes all the providers follow the same contract for YAML files (see https://cluster-api.sigs.k8s.io/developer/providers/v1alpha2-to-v1alpha3.html#refactor-kustomize-config-folder-to-support-multi-tenancy-when-using-webhooks); if we allow CAPD to drift from this standard, it might lead to errors/false positives/false negatives when using clusterctl with Docker in the e2e tests...
Also, with fresh memories of all the effort @vincepri put into defining the structure of our config folders, I'm not super happy to introduce a different structure specific to CAPD.
@fabriziopandini CAPD doesn't need multi-tenancy or conversion webhooks. I'm fine with the validation/defaulting webhooks living in the same controller for now; it doesn't really matter where they live, and they could always be moved later on.