/kind feature
/milestone v1.13
https://github.com/orgs/kubernetes/projects/5
- [ ] Create a `kind/kep` label for [k/community] and [k/features]
- [ ] In [k/community]: apply `kind/kep` (query: `org:kubernetes label:kind/kep`), so we can identify active PRs to k/community and reroute the PR authors to k/enhancements (depending on the state) -- see the search sketch below
- [ ] In k/enhancements (fka k/features): apply `kind/kep` to KEP PRs, differentiating them from `kind/feature`
- [ ] Remove `kind/kep` from [k/community] once KEP migration is complete
- [ ] Directory layout:
  - `keps/` (KEPs)
  - `design-proposals/` (historical design proposals from https://git.k8s.io/community/contributors/design-proposals)
  - `arch[itecture]|design/` (design principles of Kubernetes, derived from reorganizing https://git.k8s.io/community/contributors/devel, mentioned here)

/assign @justaugustus @calebamiles @jdumars
/stage beta
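For the rerouting item above, a minimal sketch of pulling the labeled PR list via the GitHub Search API (Go; a hypothetical one-off rather than existing tooling, and unauthenticated -- a real run would add an auth token to avoid rate limits):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Same query as in the checklist: open PRs across the kubernetes org
	// labeled kind/kep.
	q := url.QueryEscape("org:kubernetes label:kind/kep is:pr is:open")
	resp, err := http.Get("https://api.github.com/search/issues?q=" + q)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the fields we need from the search response.
	var result struct {
		TotalCount int `json:"total_count"`
		Items      []struct {
			Title   string `json:"title"`
			HTMLURL string `json:"html_url"`
		} `json:"items"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}

	fmt.Printf("%d open kind/kep PRs to reroute:\n", result.TotalCount)
	for _, item := range result.Items {
		fmt.Printf("- %s (%s)\n", item.Title, item.HTMLURL)
	}
}
```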
...because the label exists now:
/kind kep
:)
Why not go all the way and move them to a database? That's pretty clearly where we're eventually going. And it would solve the autonumbering problem.
This falls in line with trying to use more tooling to interact with the KEPs programmatically, but there are many small inconsistencies in the current KEPs beyond multiple KEPs using the same number: some omit attributes when they're empty, others put 'N/A' or 'TBD', and others follow their own schema entirely.
For tooling (and the eventual KEP website), having consistently formatted data will make it much easier to display everything correctly. Some of this is enumerated and spelled out in 0001a-meta-kep-implementation.md, but I wanted to include some examples of the inconsistencies encountered so far in attempting to list and sort the KEPs for the WIP contributor site.
Having a clearly defined schema and cleaning up the current data will make working with the information much easier.
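To make the schema point concrete, here's a minimal sketch of what a strict front-matter parser could look like. Go with `gopkg.in/yaml.v2` is assumed, the field names are borrowed from the KEP template, and the validation just rejects the placeholder patterns called out above -- an illustration, not a settled schema:

```go
package kepmeta

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// Metadata models KEP front matter. The field set here mirrors the KEP
// template (title, owning-sig, status, ...) and is a sketch of the kind
// of schema being proposed, not a canonical definition.
type Metadata struct {
	Title        string   `yaml:"title"`
	Authors      []string `yaml:"authors"`
	OwningSIG    string   `yaml:"owning-sig"`
	Reviewers    []string `yaml:"reviewers"`
	Approvers    []string `yaml:"approvers"`
	CreationDate string   `yaml:"creation-date"`
	Status       string   `yaml:"status"`
}

// Parse unmarshals front matter and rejects the inconsistencies noted
// above: empty required fields and placeholders like "N/A" or "TBD".
func Parse(frontMatter []byte) (*Metadata, error) {
	var m Metadata
	if err := yaml.Unmarshal(frontMatter, &m); err != nil {
		return nil, err
	}
	required := map[string]string{
		"title":      m.Title,
		"owning-sig": m.OwningSIG,
		"status":     m.Status,
	}
	for field, val := range required {
		if val == "" || val == "N/A" || val == "TBD" {
			return nil, fmt.Errorf("invalid %s: %q", field, val)
		}
	}
	return &m, nil
}
```

Running something like this in CI would surface the 'N/A'/'TBD' cases before they ever reach the contributor site.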
/milestone clear
I'd like for us to consider milestoning KEPs into something other than v1.14, just so I can keep that milestone focused on enhancements that will land as part of Kubernetes. i.e., totally fine with a "kep" milestone, a "keps-q1" milestone, etc.
We need to think about project capacity and backlog management, also. The reality is that we don't have the capacity to do all worthwhile things at the same time.
@bgrant0607 -- I created a project tracking board for this some time ago: https://github.com/orgs/kubernetes/projects/5
We've also identified several PMs in this week's SIG PM call interested in assisting with this effort.
@spiffxp -- created `keps-beta` and `keps-ga` milestones.
/milestone keps-beta
@justaugustus I was referring to the backlog of KEPs in flight (per SIG or per subproject) rather than the backlog for implementing the KEP process.
SIG Arch was maintaining a review backlog manually:
https://github.com/kubernetes-sigs/architecture-tracking/projects/2
I just left a comment on https://github.com/kubernetes/enhancements/issues/619 about this
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Enhancement issues opened in kubernetes/enhancements should never be marked as frozen.
Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.
/remove-lifecycle frozen
Hi @justaugustus, @jdumars, & @calebamiles -- I'm the 1.16 Enhancement Shadow. Is this feature going to be graduating to an alpha/beta/stable stage in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
Hey @rbitia! No code is landing in k/k. It's tracking a process change, so no need to track this one.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Why is this still open? Is there more work?
Looking over the points, it looks like much of this is done. (Perhaps checkboxes could be clearer)
Would this graduate the KEP process from _beta_?
Hey @vbatts! :)
We've still got some work to do here before I'd consider the process stable.
That said, you're right, we should get this issue updated! Will handle.
/unassign @calebamiles @jdumars
/assign @mrbobbytables @jeremyrickard @johnbelamaric
Right on!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale