Cluster-api: Move to a single manager watching all namespaces for each provider

Created on 11 May 2020 · 15 Comments · Source: kubernetes-sigs/cluster-api

User Story

As a user, I would like to use clusterctl for creating multi-tenant clusters.

Detailed Description

https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/1713 introduces the possibility for a single provider instance to use multiple credentials.

We should define if/how this scenario is supported by clusterctl.

Anything else you would like to add:

clusterctl already supports two other types of multi-tenancy; see https://cluster-api.sigs.k8s.io/clusterctl/commands/init.html#multi-tenancy

The approach introduced by CAPA is potentially far simpler than the existing ones. If we can get all the providers to converge on the same approach, this could result in a significant simplification of manifest generation (e.g. no more need for the webhook namespace) and of clusterctl itself (many corner cases would no longer be necessary).

/kind feature

area/clusterctl kind/feature lifecycle/frozen priority/important-longterm

Most helpful comment

Renamed this, hopefully it's going to be a bit clearer going forward :)

All 15 comments

/area clusterctl
even though this problem is not clusterctl-specific

@randomvariable is working on this in the providers for v1alpha3, although we should revisit in v1alpha4 and forward. It's definitely something we might want to tackle before getting to beta.

/milestone Next

Thanks for this issue Fabrizio.

/priority important/long-term

@randomvariable: The label(s) priority/important/long-term cannot be applied, because the repository doesn't have them

In response to this:

Thanks for this issue Fabrizio.

/priority important/long-term

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/priority important-longterm

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

/lifecycle frozen

We need to add definitions for what multi-tenancy is to the glossary

And a provider contract. We should definitely put this in 0.4.0

/assign

/milestone v0.4.0

Renamed this, hopefully it's going to be a bit clearer going forward :)

Given that we are moving to a single manager watching all namespaces for each provider, I started to investigate possible cleanups/action items:

  • [ ] Run webhooks as part of the main manager #3822
  • [ ] Docs: document new definition of multi-tenancy (a single infrastructure provider supporting multiple credentials)
  • [ ] Docs: remove existing notes about previous models of multitenancy
  • [ ] inventory:

    • GetDefaultProviderVersion --> GetProviderVersion

    • GetDefaultProviderNamespace --> GetProviderNamespace

  • [ ] clusterctl delete:

    • Remove (or deprecate) the --namespace flag in clusterctl delete and in the corresponding library method

  • [ ] inventory/management group:

    • Remove ManagementGroupList

    • deriveManagementGroups should return ManagementGroup instead of ManagementGroupList; the function should error if there is more than one core provider, or more than one instance of the same bootstrap/control-plane/infrastructure provider

    • Remove checkOverlappingCoreProviders

    • checkOverlappingProviders should be simplified under the assumption that there can be only one core provider

    • GetManagementGroups --> GetManagementGroup

  • [ ] installer:

    • simulateInstall should fail if there is more than 1 instance of the same provider

    • Validate should be adapted to the fact that there is only one management group

  • [ ] Upgrader:

    • Plan should return a single upgrade plan

    • ApplyPlan and ApplyCustomPlan should drop the core provider parameter given that there is only one management group in the cluster

  • [ ] clusterctl upgrade plan:

    • Should return a single plan

    • Should fail gracefully if run against a v1alpha3 cluster with more than one instance of the same provider.

  • [ ] clusterctl upgrade apply:

    • Should deprecate the --management-group flag and fail if the value provided does not match the existing management group

@fabriziopandini I can start creating individual stories for the tasks here and we can start adding in more details if needed.
