How should CRDs that Kubernetes depends on be deployed?
Kubernetes is moving towards a model where new API objects are not added to the core API but are instead defined and installed as CRDs. When core controllers depend on a CRD, some component in core must install and manage the CRD. Having the controller install it and using the addon manager both have drawbacks. This issue tracks coming up with a solution that addresses this.
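To make the trade-off concrete, here is a minimal Go sketch (not upstream code) of the "controller installs it" approach, using the public apiextensions clientset. The CSINodeInfo names are illustrative, and a real CRD would also need a versions list with an OpenAPI schema:

```go
package main

import (
	"context"
	"log"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// ensureCRD creates the CRD if it does not already exist. This is the
// pattern the issue calls out as problematic: every controller replica
// races to create the same cluster-scoped object, the controller needs
// broad RBAC on CRDs, and nothing clearly owns upgrades or deletion.
func ensureCRD(ctx context.Context, cs apiextensionsclient.Interface, crd *apiextensionsv1.CustomResourceDefinition) error {
	_, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // another replica (or an operator) installed it first
	}
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := apiextensionsclient.NewForConfigOrDie(cfg)

	// Hypothetical CRD; real CRDs must also declare spec.Versions with a
	// served/storage version and a validation schema.
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "csinodeinfos.csi.storage.k8s.io"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "csi.storage.k8s.io",
			Scope: apiextensionsv1.ClusterScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "csinodeinfos",
				Kind:   "CSINodeInfo",
			},
		},
	}
	if err := ensureCRD(context.Background(), cs, crd); err != nil {
		log.Fatal(err)
	}
}
```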
Feature info:
@saad-ali we may also want to add more validation through external admission webhooks (preserving set and map properties in CSINodeInfo fields, for example). @msau42 @jsafrane
kubernetes/community#1937 could benefit from such a controller.
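As a rough illustration of the kind of webhook meant here, a hedged Go sketch follows: it rejects a CSINodeInfo-like object whose drivers list violates map semantics (duplicate name keys). The field names, URL path, and cert paths are assumptions for illustration, not the real CSINodeInfo schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical shape of the object being validated.
type driver struct {
	Name string `json:"name"`
}

type csiNodeInfo struct {
	Spec struct {
		Drivers []driver `json:"drivers"`
	} `json:"spec"`
}

func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}

	var obj csiNodeInfo
	if err := json.Unmarshal(review.Request.Object.Raw, &obj); err == nil {
		seen := map[string]bool{}
		for _, d := range obj.Spec.Drivers {
			if seen[d.Name] {
				// Enforce map semantics: "name" must be unique in the list.
				resp.Allowed = false
				resp.Result = &metav1.Status{Message: fmt.Sprintf("duplicate driver name %q", d.Name)}
				break
			}
			seen[d.Name] = true
		}
	}

	// Reuse the decoded review so apiVersion/kind are echoed back.
	review.Response = resp
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", validate)
	// Admission webhooks must serve TLS; cert paths are deployment-specific.
	http.ListenAndServeTLS(":8443", "/certs/tls.crt", "/certs/tls.key", nil)
}
```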
/kind feature
/stage alpha
/assign @saad-ali
Feel free to reassign once an assignee is determined.
cc @kubernetes/api-approvers
Short term mitigation for SIG-Storage CRDs until this feature is ready:
Long term plan:
@saad-ali how much work is left for this to land in Alpha in 1.13, and how confident are you of getting this in for 1.13?
@saad-ali is there an update on how much work is left for making 1.13? Enhancement freeze is tomorrow COB. If there is no communication or update on the PR, this is going to be pulled from the milestone as it doesn't fit with our "stability" theme. If there is no communication after COB tomorrow, an exception will be required to add it back to the milestone. Please let me know where we stand. Thanks!
I sync'd with SIG API Machinery last week; they said to follow up with SIG Cluster Lifecycle. I have an agenda item on the SIG Cluster Lifecycle agenda tomorrow (10/23) to discuss.
My bet is that this item will not get picked up for 1.13. Will update after syncing with SIG Cluster Lifecycle tomorrow.
Regardless, as far as SIG Storage is concerned, we will just use the existing addon manager to install the CRDs we need, so we should be unblocked.
/cc @kubernetes/sig-cluster-lifecycle
/sig cluster-lifecycle
From the SIG Cluster Lifecycle meeting notes it looks like it was agreed to move this out to 1.14. Is that right @saad-ali @roberthbailey @timothysc? If so, should we be tracking any other issues/PRs as a workaround? @saad-ali can you point to any pending work?
I spoke with Cluster Lifecycle. They agreed that the problem needs to be addressed, and will take ownership of the project long term. But they do not have the bandwidth to address it in this release, so we can remove this from the 1.13 milestone.
Short term recommendation was to use the addon manager. I will create a PR for that and link it back to this bug.
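For reference, the addon-manager mitigation boils down to shipping the CRD as a labeled static manifest that kube-addon-manager applies and keeps in sync. The label key/value below is the real addon-manager convention; the CRD name and the small Go program that emits the manifest are just an illustrative sketch:

```go
package main

import (
	"fmt"
	"log"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The CRD is declared like any other addon manifest; kube-addon-manager
	// picks up files from its addons directory and applies them. Mode
	// "Reconcile" means the manager keeps the object in sync and reverts
	// out-of-band edits.
	crd := apiextensionsv1.CustomResourceDefinition{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "apiextensions.k8s.io/v1",
			Kind:       "CustomResourceDefinition",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name: "csidrivers.csi.storage.k8s.io", // illustrative name
			Labels: map[string]string{
				"addonmanager.kubernetes.io/mode": "Reconcile",
			},
		},
		// Spec elided for brevity; same shape as the earlier sketch.
	}
	out, err := yaml.Marshal(crd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // drop the output into the addon manifest directory
}
```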
Removed the following "Requirements" from the description: "... types.go ..., if possible." These are issues that we need to think about when we modify Kubernetes components to use CRDs, but not necessarily requirements for how the CRDs get deployed on k8s.
/milestone clear
@kubernetes/sig-api-machinery-feature-requests @kubernetes/sig-storage-feature-requests @kubernetes/sig-architecture-feature-requests FYI discussed during SIG Cluster Lifecycle today, and they are not planning on prioritizing for this release cycle. Agree it's important, and it feels like it's roughly in the same arena as addon management, but there's no forcing function on their end.
The first step seems to be a proper KEP that is run through SIG Arch.
If you feel this needs to be a priority for this release cycle, please discuss.
I think that before proceeding further with this, and even starting to think about beta/GA for the related CSI CRDs, we need a KEP where we can discuss this in more detail. It's not clear to me how this will work, but I don't think this is a SIG Cluster Lifecycle issue, as it should be handled by core itself, not by the ecosystem.
I'm the Enhancement Lead for 1.15. I don't see a KEP, so it doesn't look like this feature can be tracked.
If something changes please PM me.
With respect to CSI, we unblocked ourselves from this issue by moving back to in-tree APIs.
Do @kubernetes/sig-api-machinery-feature-requests or @kubernetes/sig-cluster-lifecycle still want to keep this issue around?
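For background, "moving back to in-tree APIs" here means the CSINodeInfo data became a built-in API (storage.k8s.io CSINode), so core controllers reach it through the ordinary typed client with no CRD install step. A minimal sketch, assuming a current client-go:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Built-in type: no CRD install step, no discovery race, ordinary
	// versioned clients and informers work out of the box.
	nodes, err := cs.StorageV1().CSINodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name, len(n.Spec.Drivers))
	}
}
```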
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Hi @saad-ali, I'm the 1.16 Enhancement Shadow. Is this feature going to graduate to alpha/beta/stable in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
As a reminder, every enhancement requires a KEP in an implementable state, with Graduation Criteria explaining the requirements for each alpha/beta/stable stage.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
sig-storage doesn't need this feature anymore. We don't have in-tree controllers that depend on CRDs.
/close
@msau42: Closing this issue.
In response to this:
sig-storage doesn't need this feature anymore. We don't have in-tree controllers that depend on CRDs.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.