/kind feature
Describe the solution you'd like
[A clear and concise description of what you want to happen.]
Currently, cluster-api can only create native Kubernetes clusters and does not support other Kubernetes distributions, such as Red Hat OpenShift, IBM Cloud Private, etc. Does cluster-api have any plan to let customers define which kind of Kubernetes distribution cluster they want to create?
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/cc @vincepri @detiber
FYI @jichenjc @clyang82 @morvencao @xunpan
Looks like OpenShift already has something, though I don't know the details...
https://github.com/openshift/cluster-operator
Thanks @jichenjc. It seems https://github.com/openshift/cluster-operator is for installing OpenShift on AWS, but I'm wondering whether cluster-api can provide a generic way to provision any Kubernetes distribution on a given cloud provider. Am I missing anything?
@jichenjc https://github.com/openshift/cluster-operator is deprecated.
We use https://github.com/openshift/machine-api-operator to enable the machine API in a given cluster (i.e. it makes the machine CRDs and controllers available in an existing cluster), and we plan to gradually adopt other parts of the upstream API as they mature. For OpenShift today, the initial bootstrapping workflow is driven by https://github.com/openshift/installer (there's no dependency on clusterctl).
There's a proposal for decoupling the API which would hopefully help to adopt it in a gradual fashion for different distributions https://docs.google.com/document/d/1pzXtwYWRsOzq5Ftu03O5FcFAlQE26nD3bjYBPenbhjg/edit#heading=h.vd1w04ud44q3
@gyliu513 this is one of the things that we are looking to address post-v1alpha1.
/milestone Next
@detiber are there any discussions or documents that I can refer to? I also want to find out how I can contribute to this.
Since there have been many varied discussions and proposals around this (and other design topics), we wanted to start by gaining consensus around what Cluster API is and should be before diving too deep into particular proposals (start with high-level alignment before trying to get low-level alignment on design implementations).
This is something that we'll be discussing at the Cluster API meeting this week. Since the meeting time is not convenient for all contributors, we plan on having a broader discussion on the sig-cluster-lifecycle mailing list as well.
That makes sense, thanks @detiber. Looking forward to the mailing list discussion ;-)
@enxebre this is very helpful to me, and I'm actually trying to contribute to OpenShift on OpenStack as well :)
After more thought, I think this task may belong to the individual cloud providers.
Take the OpenStack cloud provider as an example: it uses user-data for post-install steps, and that user-data installs the Kubernetes cluster. So we may need to enhance the user-data to support installing different Kubernetes distributions, such as IBM Cloud Private, OpenShift, K3s, etc.
@jichenjc @detiber @vincepri WDYT? Thanks!
FYI @xunpan
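The user-data enhancement described above could be sketched as a small selection step in the provider's user-data generation. The following is a minimal, hypothetical sketch in Go; `pickBootstrapScript`, the distribution names, and the script contents are illustrative assumptions, not part of any real provider API.

```go
package main

import "fmt"

// pickBootstrapScript is a hypothetical helper showing how a provider's
// user-data generation could branch on the requested distribution.
// The returned scripts are placeholders, not real installer invocations.
func pickBootstrapScript(distribution string) (string, error) {
	switch distribution {
	case "Kubernetes":
		return "#!/bin/bash\nkubeadm init --config /etc/kubeadm.yaml\n", nil
	case "OpenShift":
		return "#!/bin/bash\n# run the OpenShift installer here\n", nil
	case "IBM Cloud Private":
		return "#!/bin/bash\n# run the IBM Cloud Private installer here\n", nil
	default:
		return "", fmt.Errorf("unsupported distribution %q", distribution)
	}
}

func main() {
	script, err := pickBootstrapScript("Kubernetes")
	if err != nil {
		panic(err)
	}
	fmt.Print(script)
}
```

The selected script would then be injected as the machine's user-data (e.g. via the `userDataSecret` that the OpenStack provider already consumes).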
It seems clusterctl or the machine spec still needs to be enhanced to support specifying which Kubernetes distribution the end user wants to install:
```yaml
items:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: liugya-master-
    labels:
      set: master
  spec:
    providerSpec:
      value:
        apiVersion: "openstackproviderconfig/v1alpha1"
        kind: "OpenstackProviderSpec"
        flavor: m1.xlarge
        image: KVM-Ubt18.04-Srv-x64
        sshUserName: cloudusr
        keyName: cluster-api-provider-openstack
        availabilityZone: nova
        networks:
        - uuid: e2d9ead6-759b-4592-873d-981d3db07c86
        floatingIP: 9.20.206.22
        securityGroups:
        - uuid: 97acf9d4-e5bf-4fff-a2c0-be0b04fbc44b
        userDataSecret:
          name: master-user-data
          namespace: openstack-provider-system
        trunk: false
    versions:
      distribution: Kubernetes
      kubelet: 1.14.0
      controlPlane: 1.14.0
```
If we want to install IBM Cloud Private, it can be:
```yaml
versions:
  distribution: IBM Cloud Private
  kubelet: 3.2
  controlPlane: 3.2
```
Spec change would be as follows:
```go
/// [MachineVersionInfo]
type MachineVersionInfo struct {
	// Distribution is the Kubernetes distribution to install.
	Distribution string `json:"distribution"`

	// Kubelet is the semantic version of kubelet to run
	Kubelet string `json:"kubelet"`

	// ControlPlane is the semantic version of the Kubernetes control plane to
	// run. This should only be populated when the machine is a
	// control plane.
	// +optional
	ControlPlane string `json:"controlPlane,omitempty"`
}
```
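For illustration, a bootstrap-side check of the proposed field could look like the following. This is a sketch under the assumption that providers would agree on an allow-list of distribution names; `validateDistribution` and `supportedDistributions` are hypothetical helpers, not part of the actual cluster-api types.

```go
package main

import "fmt"

// supportedDistributions is an illustrative allow-list; the real set of
// values would be defined by whichever providers implement the field.
var supportedDistributions = map[string]bool{
	"Kubernetes":        true,
	"OpenShift":         true,
	"IBM Cloud Private": true,
	"K3s":               true,
}

// validateDistribution sketches the kind of admission-time check a
// validating webhook could apply to the proposed Distribution field.
func validateDistribution(d string) error {
	if d == "" {
		// Treat an unset field as plain Kubernetes, preserving the
		// current default behavior.
		return nil
	}
	if !supportedDistributions[d] {
		return fmt.Errorf("unsupported distribution %q", d)
	}
	return nil
}

func main() {
	fmt.Println(validateDistribution("IBM Cloud Private"))
	fmt.Println(validateDistribution("FooOS"))
}
```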
actually, this is already included in the scope @gyliu513
Control plane:
- Self-provisioned: A Kubernetes control plane consisting of pods or machines wholly managed by a single Cluster API deployment.
- External: A control plane offered and controlled by some system other than Cluster API (e.g., GKE, AKS, EKS, IKS).
The proposal looks good. Just curious whether the kubelet version is still valid for an external vendor? Vendors usually don't expose that info to the end user/operator.
> After more thought, I think this task may belong to the individual cloud providers.
> Take the OpenStack cloud provider as an example: it uses user-data for post-install steps, and that user-data installs the Kubernetes cluster. So we may need to enhance the user-data to support installing different Kubernetes distributions, such as IBM Cloud Private, OpenShift, K3s, etc.
@gyliu513 for v1alpha1, yes that is the case. However for v1alpha2+ we are looking at making the "bootstrapping config" more common (though still pluggable) rather than requiring each provider to implement their own.
@detiber thanks for the info. As I did not attend the Cluster API meeting, can you please share some info here:
1) Since v1alpha1 (0.1.0) has now been released, can I make some code changes in the master branch to enable this in v1alpha1?
2) What is the plan for v1alpha2? You mentioned in https://github.com/kubernetes-sigs/cluster-api/issues/853#issuecomment-476211206 that we would have some discussion about this in the sig-cluster-lifecycle Google group, but I did not find such a discussion there; am I missing anything? ;-)
@gyliu513 you can find more information about the post-v1alpha1 workstreams here: https://discuss.kubernetes.io/t/workstreams/5879/4
Thanks for the info, this really helps to get an overall picture @detiber
We are working on a proposal that we'll share soon to separate machine infrastructure provisioning from node bootstrapping. That will allow users to pick one provider for infrastructure (e.g. IBM Cloud) and separate providers for bootstrapping (Kubernetes via kubeadm, OpenShift, Rancher, etc).
Thanks, will that be posted to the forum or somewhere soon? Also, I assume that will involve a set of provider-related changes as well (not only in cluster-api itself), right?
We are working as quickly as we can to produce an initial draft of the proposal. At this point, I would expect it early next week. We will post to the discuss forum and Slack and anyplace else that makes sense.
Yes, this will require transforming "providers" as they exist in v1alpha1 from what they currently are (infrastructure & bootstrapping) into just infrastructure providers, and creating new bootstrap providers.
The proposal is #997.
The code in master now supports specifying a bootstrap provider separately from infrastructure provider.
/close
@ncdc: Closing this issue.
In response to this:
> The code in master now supports specifying a bootstrap provider separately from infrastructure provider.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.