/sig aws
/assign @leakingtapan @d-nishi @bertinatto
/sig storage
Hi @d-nishi and @leakingtapan, I'm the Enhancements Shadow for 1.13. Could you please update the release team on the progress of this enhancement for the 1.13 release?
Code slush begins on 11/9 and code freeze is 11/15.
Thank you!
@ameukam --
Is this data enough?
/kind feature
@d-nishi - Hello, I'm the enhancements lead for 1.14. It looks like this feature is targeting beta in 1.14 - can you confirm that is correct? We want all enhancements to have a KEP, and it looks like this issue's KEP is here: https://github.com/kubernetes/enhancements/blob/master/keps/sig-aws/20181127-aws-ebs-csi-driver.md - let me know if that is not the correct KEP. Thanks.
That's correct. I will amend the KEP with Beta changes.
@claurence Here is the Beta release KEP.
@claurence @leakingtapan @d-nishi I hate to keep hammering on this, but I don't think CSI drivers should be tracked as part of the Kubernetes core release. The whole point of CSI was to decouple plugin development from the development of core Kubernetes. There are many CSI drivers; none of them ship code as part of k8s 1.14, and there is nothing the release team needs to do for these drivers.
Therefore I suggest closing these issues or moving them out of the enhancements repo to somewhere else:
@saad-ali if I'm understanding correctly are you saying this issue should be removed from the milestone and be closed?
@claurence I will discuss this with @saad-ali and revert.
> @saad-ali if I'm understanding correctly are you saying this issue should be removed from the milestone and be closed?

Yes

> @claurence I will discuss this with @saad-ali and revert.

Happy to chat!
@saad-ali @d-nishi - checking on this issue - was it decided that it no longer needs to be tracked for 1.14?
If it's still being tracked for 1.14 - the KEP is marked as "provisional" - what additional work is needed for this KEP to be "implementable"?
@claurence @saad-ali and I agreed to open this issue up for discussion with SIG Release members.
This has been discussed with @justaugustus and @spiffxp.
@spiffxp will tell us the appropriate approach for the 1.14 release, and we can formalize the approach in 1.15 based on that discussion.
/milestone clear
I agree with @saad-ali the release team shouldn't track this for v1.14. I appreciate @d-nishi erring on the side of transparency here.
I'm of the opinion that release notes for kubernetes v1.14 shouldn't be talking about code that has landed out of tree. At present, a "release" only consists of bits that were cut from kubernetes/kubernetes.
However it would be useful for end users to know what out of tree subprojects / plugins / etc. have been released in the interim. This seems like appropriate content for the kubernetes blog. WDYT @nwoods3 @kbarnard10?
Hello, I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet.
Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Hi @leakingtapan @d-nishi , I'm the 1.16 Enhancement Shadow. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining the requirements for each alpha/beta/stable stage.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/close
since CSI drivers are tracked out of tree by storage providers
@leakingtapan: Closing this issue.
In response to this:

> /close
>
> since CSI drivers are tracked out of tree by storage providers
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.