Cluster-api: Define constraints for upgrades crossing ClusterAPIVersion (e.g. v1alpha2-->v1alpha3)

Created on 3 Jan 2020 · 12 comments · Source: kubernetes-sigs/cluster-api

User Story

As an operator, I would like to upgrade my management cluster to the next ClusterAPIVersion in order to take advantage of the new features

Detailed Description

This is the first time in Cluster API that we are supporting upgrades crossing a ClusterAPIVersion boundary, e.g. v1alpha2-->v1alpha3, or more generically from CAPI-vCurrent to CAPI-vNext.

Each provider is implementing its own bits, e.g. conversion webhooks, but AFAIK there is not yet a defined procedure or set of recommendations for how this should work across all the providers.

E.g.
Is it possible to upgrade* each provider independently of the others, or:

  • should there be some order, e.g. upgrade CAPI first, then the other providers in any order/at any time
  • should there be some orchestration, e.g. all the providers upgrading from supporting CAPI-vCurrent to supporting CAPI-vNext at the same time

[*] from a vX, supporting CAPI-vCurrent, to vY, supporting CAPI-vNext
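If an ordering constraint were adopted (core first, then the others), the flow could be sketched roughly as below. This is an illustrative sketch only: the loop body just echoes a plan, standing in for whatever per-provider upgrade invocation is eventually defined, and the provider list is an example:

```shell
# Hypothetical ordered upgrade: the core provider moves to CAPI-vNext first,
# then the bootstrap/infra providers can follow in any order.
for provider in cluster-api kubeadm-bootstrap docker; do
  # stand-in for a real per-provider upgrade invocation
  echo "upgrading ${provider} to the CAPI-vNext contract"
done
```

The key property is simply that cluster-api appears first in the sequence; the relative order of the remaining providers is unconstrained under this model.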

E.g.
What about multi-tenant management clusters:
Case 1: one core/bootstrap provider, many instances of the same infra provider

  • is it possible to upgrade each instance of the infra provider independently of the others?

Case 2: many separate instances of core/bootstrap/infra providers

  • is it possible to upgrade each separate instance of the core/bootstrap/infra providers independently of the others?

This issue is intended to help identify and discuss a shared vision on this topic.

Anything else you would like to add:
As a personal consideration, I would like to keep the version skew matrix as simple as possible because:

  • we are still in alpha
  • there is a wide range of interactions across providers that can be affected by the version skew
  • there are 3-4 providers in the simplest management cluster, meaning 8 or 16 version skew combinations, but there can be many more in multi-tenancy scenarios
  • all the infrastructure for E2E testing of upgrades has yet to be implemented, and limiting the number of supported scenarios greatly helps in getting started.
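The combinatorial point above can be made concrete: if each of N providers can independently sit at either vCurrent or vNext during an upgrade, there are 2^N mixed-version states to reason about and potentially test:

```shell
# Each provider is at one of two contract versions (vCurrent or vNext),
# so N providers give 2^N possible version-skew combinations.
for n in 3 4; do
  echo "$n providers -> $((1 << n)) combinations"
done
```

This prints `3 providers -> 8 combinations` and `4 providers -> 16 combinations`, matching the 8/16 figures above; each additional provider (or tenant instance) doubles the matrix, which is the argument for keeping the supported skew small.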

/kind feature
/area clusterctl

/cc @vincepri @detiber @akutz @chuckha @ncdc

area/clusterctl kind/feature lifecycle/active priority/important-soon

All 12 comments

@fabriziopandini: The label(s) area/ cannot be applied, because the repository doesn't have them

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.



@ncdc @vincepri
I'm going to implement this assuming we are not going to allow clusterctl users to mix v1alphaX and v1alphaY within the same stack of providers.
/assign
/lifecycle active

Sounds like a fair assumption to me

Question about UX for clusterctl upgrade plan & upgrade apply

Option 1:
upgrade plan proposes one (or more) upgrade targets, and the user can only upgrade apply it as a whole by copying the generated command

Checking new release availability...

Management group: capi-system/cluster-api, latest release available for the v1alpha3 Cluster API version:

NAME                NAMESPACE                       TYPE                     CURRENT VERSION   TARGET VERSION
kubeadm-bootstrap   capi-kubeadm-bootstrap-system   BootstrapProvider        v0.3.0            v0.3.1
cluster-api         capi-system                     CoreProvider             v0.3.0            v0.3.1
docker              capd-system                     InfrastructureProvider   v0.3.0            v0.3.1


You can now apply the upgrade by executing the following command:

   upgrade apply --management-group capi-system/cluster-api  --cluster-api-version v1alpha3

Option 2:
upgrade plan proposes one (or more) upgrade targets, and the user can upgrade apply it as a whole by copying the generated command, or "cherry-pick" the upgrade for a subset of providers by editing the generated command

Checking new release availability...

Management group: capi-system/cluster-api, latest release available for the v1alpha3 Cluster API version:

NAME                NAMESPACE                       TYPE                     CURRENT VERSION   TARGET VERSION
kubeadm-bootstrap   capi-kubeadm-bootstrap-system   BootstrapProvider        v0.3.0            v0.3.1
cluster-api         capi-system                     CoreProvider             v0.3.0            v0.3.1
docker              capd-system                     InfrastructureProvider   v0.3.0            v0.3.1


You can now apply the upgrade by executing the following command:

   upgrade apply --management-group capi-system/cluster-api \
         --provider capi-kubeadm-bootstrap-system/kubeadm-bootstrap:v0.3.1 \
         --provider capi-system/cluster-api:v0.3.1 \
         --provider capd-system/docker:v0.3.1

Example of cherry-picking -> upgrade only the kubeadm-bootstrap provider:

   upgrade apply --management-group capi-system/cluster-api \
         --provider capi-kubeadm-bootstrap-system/kubeadm-bootstrap:v0.3.1
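For reference, a --provider token of the form namespace/name:version as used above can be split with plain shell parameter expansion. This is a sketch of the token format only, not how clusterctl itself parses the flag:

```shell
# Split "namespace/name:version" into its three parts.
token="capi-kubeadm-bootstrap-system/kubeadm-bootstrap:v0.3.1"
namespace="${token%%/*}"   # text before the first '/'
rest="${token#*/}"         # "name:version"
name="${rest%%:*}"         # text before the ':'
version="${rest##*:}"      # text after the last ':'
echo "namespace=$namespace name=$name version=$version"
```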

I honestly like both approaches: the first one is simple and straightforward, and communicates that Cluster API works as a group, but the second one lets folks do their own custom thing if they need to.

Which one is the easiest to implement? We can make sure the implementation is generic enough so later on we can add functionality to do the other one.

I'm definitely in favor of option 1 with the caveat that it reduces the number of supported use cases, but that might be a feature of the option.

I prefer option 1 because it would be much harder for the user to upgrade into an unsupported version matrix, since clusterctl manages that complexity for them, and we are still working out how to version things in a way that makes sense to the user.

I'm curious about what your (@fabriziopandini) thoughts are to each approach. Are they both equal to you or do you favor one over the other?

I like the first one because it's simple, and I like the second one because it lets you decide which providers to upgrade. +1 to Vince's question - can we have both? 😄

If I have to pick, I'd pick option 1, though.

I'm aligned with your opinions: option 1 is safer, option 2 is more flexible.

I guess the effort for implementing both is not too far from implementing only one, so my proposal is:

  • keep the output of upgrade plan as in option 1, so we drive the user towards this approach
  • on upgrade apply, also support the --provider flag, so there is a backdoor for when flexibility is required

That plan sounds good to me, although the second option can be added later IMO, and we can have the action item in another issue in the next milestone.
