Cluster-api: Document how providers should deploy shared CRDs

Created on 30 Oct 2018 · 11 Comments · Source: kubernetes-sigs/cluster-api

With the move from an aggregated API server to CRDs, there are now two sets of CRDs which must be deployed to create a functioning Cluster API cluster: the generic Cluster API CRDs and the provider-specific CRDs.

One way this has been done is by constructing a providercomponents.yaml containing both sets of CRDs. For example:

https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/f5be8acd8abdd64c9a29a5a72156a871d6c4e7a1/Makefile#L122
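In outline, that Makefile target concatenates the two generated CRD sets into a single manifest. A rough sketch of the idea (the file names and contents below are hypothetical placeholders, not the actual output of either repo):

```shell
#!/bin/sh
# Hypothetical stand-ins for the two generated CRD sets; in practice
# each would be produced by `kustomize build` in its own repo.
cat > cluster-api-crds.yaml <<'EOF'
# generic Cluster API CRDs (Cluster, Machine, ...) would go here
EOF
cat > provider-crds.yaml <<'EOF'
# provider-specific CRDs would go here
EOF

# Concatenate both sets into one provider components manifest,
# separated by a YAML document marker.
{
  cat cluster-api-crds.yaml
  echo '---'
  cat provider-crds.yaml
} > provider-components.yaml
```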

Since CRDs are not namespaced, only one provider can create the Cluster API CRDs, though multiple providers may apply the same CRDs. There are clusters which run multiple Cluster API providers, so we need to document a convention for how these CRDs should be deployed.

This issue was created from this comment.

kind/documentation kind/feature priority/important-soon

All 11 comments

/kind documentation

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Assuming we proceed with removing actuators and move to a true split between generic cluster-api CRDs and controllers vs. provider CRDs and controllers, this issue probably becomes "document how to deploy cluster-api". Agree?

/remove-lifecycle stale

> Assuming we proceed with removing actuators and move to a true split between generic cluster-api CRDs and controllers vs. provider CRDs and controllers, this issue probably becomes "document how to deploy cluster-api". Agree?

Yes

cc myself

Agreed with the above points. No YAML should be deployed. The only thing providers (including CAPI) should do is make sure that the patches they provide at the tagged commit point to the correct image tag.

These can be consumed with kustomize's remote URLs feature.

This makes deploying a management cluster a three-kustomize-command operation. While this is a reasonable approach to getting started, clusterctl is probably the better thing to document for taking a user from nothing to a pivoted management cluster on some cloud.
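As a sketch, those three commands might look like the following; the repo paths, `ref` values, and the `./cluster-config` directory are illustrative assumptions, not confirmed repository layouts:

```shell
#!/bin/sh
# Sketch only: paths and refs below are illustrative assumptions.
# kustomize can build a remote URL directly; each result is piped to kubectl.
deploy_management_cluster() {
  # 1. generic Cluster API CRDs and controllers
  kustomize build "github.com/kubernetes-sigs/cluster-api//config/default/?ref=master" \
    | kubectl apply -f -
  # 2. provider CRDs and controllers (AWS shown as an example)
  kustomize build "github.com/kubernetes-sigs/cluster-api-provider-aws//config/default/?ref=master" \
    | kubectl apply -f -
  # 3. the user's own cluster/machine objects
  kustomize build "./cluster-config" | kubectl apply -f -
}
```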

But questions that pop up for me:

  • What's the scope of this documentation? Is it from nothing to CAPI cluster?
  • Is clusterctl the recommended production deployment tool?

Another way that we can approach this is to publish the generated yaml as part of a release.

That would allow users to either:

  • Deploy the core provider components directly w/ kubectl create -f
  • Download the files and concatenate them with additional yaml, for deployment with clusterctl and a monolithic provider components manifest

The benefit of this approach is that we do not require that the manager image patch file have any particular contents, and it would also insulate us from potential kustomize version skew requirements across multiple repos.
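Under that model, consumption might look like the sketch below; the release URL and asset names are hypothetical assumptions, since no such artifacts are confirmed by the thread:

```shell
#!/bin/sh
# Sketch: the release URL and file names are assumptions, not real assets.
RELEASE_URL="https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.1.0"

# Option 1: deploy the core provider components directly.
deploy_from_release() {
  kubectl create -f "${RELEASE_URL}/cluster-api-components.yaml"
}

# Option 2: download the core yaml and concatenate it with provider yaml
# to build a monolithic provider components manifest for clusterctl.
build_monolithic_components() {
  curl -sL "${RELEASE_URL}/cluster-api-components.yaml" -o core-components.yaml
  {
    cat core-components.yaml
    echo '---'
    cat provider-components.yaml
  } > monolithic-components.yaml
}
```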

+💯 on what @detiber suggested; a familiar approach for Kubernetes users is probably going to be the best one :)

Works for me.

Closing this in favor of updated quickstart docs.
