As a developer, I would like to use CRD v1 for all Cluster API custom resources, so that I can take advantage of structural schemas, pruning unknown fields, validation, defaulting, and multiple versions with conversion webhooks.
This means all management clusters must be at least Kubernetes v1.16. Workload clusters are not subject to this minimum version requirement.
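For illustration, here is a minimal sketch of what an `apiextensions.k8s.io/v1` CRD looks like (the `widgets.example.cluster.x-k8s.io` resource and its fields are hypothetical, not an actual Cluster API type). The v1 API makes the structural `openAPIV3Schema` mandatory, prunes unknown fields by default, and allows server-side defaulting via `default`:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.cluster.x-k8s.io
spec:
  group: example.cluster.x-k8s.io
  names:
    kind: Widget
    listKind: WidgetList
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1alpha3
      served: true
      storage: true
      schema:
        # A structural schema is required in CRD v1; unknown fields
        # are pruned on write unless explicitly preserved.
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  # Server-side defaulting, only available with CRD v1
                  # (requires Kubernetes >= 1.16).
                  default: 1
```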
/kind feature
cc @zjs @ashoksekar07 for feedback
+1 from me.
Oof, this is a bit hard to digest, because we pivot everything. This would block our adoption of v1a3 until the cluster upgrade story was solid.
I'll think about this a bit more and see if we could make it work.
+1 as well
Sidenote: We'd need to wait for a Kubebuilder version that generates the correct CRDs, and for a controller-runtime release built against 1.16.
@rudoi & I chatted a little bit about a kube >=1.16 requirement, and I think it's something that we can do. We have a need to get onto 1.16 with our most sensitive clusters in the near term for other reasons, and we're already going to have to do that without v1alpha3 support.
And, while I hesitate to say it because it's arguably a user contract violation (re: alpha software), our pivot-y workflow means we _could_ consider ignoring our <1.15 clusters from the v1alpha3 perspective.
Clarification: would this only affect v0.3.x releases or would this also affect v0.2.x patches?
@zjs It's scoped for the next version (v0.3) only.
@sethp-nr What do you mean ignore <1.15 clusters?
Whoops, I meant <=1.15 :) And "ignore" from the perspective of v1alpha3 means that we treat them as a thing we EOL and migrate off of rather than continue to support long-term.
> It's scoped for the next version (v0.3) only.
In that case, +1 from me.
+1 from me.
/area api
cc @akutz since I think you were suggesting waiting until 1.18 (or did I misinterpret your comment in Slack/the kubeadm defaulting issue?)
Hi @ncdc,
Thank you for pinging me. I was referring to the general agreement of n-2 versions for support. Since server-side defaulters were introduced in 1.15, it would mean that we shouldn't be making design changes to CAPI dependent upon server-side defaulters until 1.17 at the earliest. I said 1.18 because I know some people want to support at least the previous three versions of Kubernetes as well.
So while it's not the same issue, it's the same type of issue. I do feel that we shouldn't be making design changes in CAPI dependent upon a version of Kubernetes until that version is n-2/3 versions old.
I'm curious to know your reason for waiting - is it that you want some more soak time, or something else?
Hi @ncdc,
Perhaps I'm confusing something, but I thought the general, community consensus was that we should avoid design dependencies or changes that exclude versions of Kubernetes newer than n-2?
Hmm, I'm not aware of that. We avoid dependencies on alpha features, because they are disabled by default via feature gates, but once they move to beta, they are usually defaulted to on. @timothysc can you weigh in on Andrew's question?
By the way, I love what the proposal provides as a result of CRD v1. I was just raising the issue of when it's okay to require specific versions of Kubernetes newer than X. I don't feel strongly about it, and honestly I was just bringing it up because I thought it was how support/features were generally considered with respect to versions of Kubernetes.
Let me put it another way -- I in no way would attempt to block this or raise an issue with it if you decided to move ahead. If the group thinks it's fine, then it's fine with me. I'm certainly far less aware of the general pulse of the K8s community than some of you. :)
> It's scoped for the next version (v0.3) only.
I only read the description and then your ping. I failed to see this clarification. This sounds good to me Andy. Thanks again.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/lifecycle frozen
I'm actually working on this, but I'm getting blocked here and there.
/lifecycle active