@jbeda I noticed that the feature has been opened after the agreed submission deadline (Oct 10). Is there any reason why this feature has to be added to 1.5 milestone?
@idvoretskyi I screwed up and I apologize. We were tracking all of the work in the lifecycle SIG and met on Tuesday morning. Missed it by one day. Hopefully this isn't a blocker?
meta: I don't understand the 1.5 milestone deadline. If a feature gets dreamed up after the deadline, and its alpha is implemented and merged before we cut 1.5, is it still a feature? I see the purpose of this repo as providing visibility into progress on large features. I don't see any purpose to a submission deadline. There's also no documentation of a deadline in this repo (when or why).
@mikedanese the purpose of the repo is to provide visibility and tracking for product features, you are right. At the same time, during the release development process, we'd like to see an accurate picture of the features that are going to land in the product.
The feature submission deadline is not a deadline for developing the feature; it's a deadline for declaring your intent to develop functionality that you'd like to see in the released product. We expect all contributors to follow the release roadmap carefully to avoid unnecessary planning chaos.
PS. Thank you for pointing out that the deadline isn't reflected in the documentation; I've submitted a PR to add it to the release roadmap: https://github.com/kubernetes/features/pull/133.
@jbeda as we have already agreed in the mailing thread [0], this is not a blocker.
cc @kubernetes/sig-cluster-lifecycle
Hmm, this didn't make v1.5, did it?
@jbeda @luxas any final agreement on this?
Pushing to 1.6
@jbeda can you confirm that this feature targets alpha for 1.6?
Should we add the alpha-in-1.6 label to this?
@apsinha I think so, updated.
I'm not sure whether we'll call it alpha or beta; it might be beta if we count the kube-discovery method currently used in kubeadm as the alpha implementation.
cc @jbeda @mikedanese
@luxas marked as alpha following the previous discussions. Let's discuss the appropriate feature stage if there are other suggestions.
I believe @mikedanese labeled the api objects as beta for 1.6.
@jbeda Are we planning to automatically enable the BootstrapSigner and the TokenCleaner in v1.6?
@luxas I don't think that we can as this is alpha. I'm happy turning this on by default for kubeadm clusters with the appropriate command line flags. After we show it works well with kubeadm we can turn them on for everyone.
@jbeda I'm ok with that, and that's what I assumed as well. Anyway, didn't we consider the kube-discovery version of this work the alpha part and this version beta?
@jbeda please provide the release notes and documentation PR (or links) at https://docs.google.com/spreadsheets/d/1nspIeRVNjAQHRslHQD1-6gPv99OcYZLMezrBe3Pfhhg/edit#gid=0
@lukemarsden Is pulling together release notes. I'm working on doc PR now.
We want to graduate Bootstrap Tokens to beta in v1.7.
Our TODO list for v1.7 is:
- Scope auto-approval from the `system:bootstrappers:` prefix to a dedicated `system:bootstrappers:tls-bootstrap` (exact name TBD) group. The `kubeadm init` token will have this group attached to it, but users that create bootstrap tokens for other purposes must explicitly set the token in that group in order to be able to use auto-approval.

cc @jbeda @mikedanese @liggitt @deads2k ^
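For illustration, here is a hedged Python sketch of the kind of Secret a user would create for such a token. The field names follow the bootstrap-token Secret convention (type `bootstrap.kubernetes.io/token` in the kube-system namespace), and `system:bootstrappers:tls-bootstrap` is used as the still-TBD group-name placeholder from the comment above:

```python
import re

# Bootstrap tokens take the form "<token-id>.<token-secret>".
TOKEN_RE = re.compile(r"^([a-z0-9]{6})\.([a-z0-9]{16})$")

def bootstrap_token_secret(token: str, extra_groups: str) -> dict:
    """Sketch of the Secret a user would create for a custom bootstrap
    token; the real object lives in kube-system as a Secret of type
    bootstrap.kubernetes.io/token."""
    m = TOKEN_RE.match(token)
    if not m:
        raise ValueError("token must match [a-z0-9]{6}.[a-z0-9]{16}")
    token_id, token_secret = m.groups()
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {
            "name": f"bootstrap-token-{token_id}",
            "namespace": "kube-system",
        },
        "type": "bootstrap.kubernetes.io/token",
        "stringData": {
            "token-id": token_id,
            "token-secret": token_secret,
            "usage-bootstrap-authentication": "true",
            "usage-bootstrap-signing": "true",
            # The group that opts the token in to auto-approval; the
            # exact name was still TBD at this point in the thread.
            "auth-extra-groups": extra_groups,
        },
    }

secret = bootstrap_token_secret("abcdef.0123456789abcdef",
                                "system:bootstrappers:tls-bootstrap")
```

The point of the `auth-extra-groups` field is exactly the opt-in described above: a token without the group authenticates but does not get auto-approval.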
Based on some conversations with some security folks, it looks like we might want to enhance the way we do tokens to make them asymmetric. Right now, if a token leaks, it allows an attacker not just to join the cluster but also to MITM the bootstrap and MITM the API server. Scenario-wise, this becomes more of a problem the longer tokens last.
Some context: https://kubernetes.slack.com/archives/C0EN96KUY/p1497967620498941
This will require a new design doc for the new scheme along with implementation in 1.8. It'll require updates to the bootstrap signer, cleaner and authorizer along with updates to kubeadm. Corresponding docs will need to be updated too.
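To make the MITM concern concrete, here is a minimal Python sketch (an illustration, not the actual signer code; the key and payloads are made up). In a symmetric scheme the cluster-info payload is authenticated with an HMAC keyed by the shared token secret, so anyone who obtains the token can forge as well as verify:

```python
import hashlib
import hmac

def sign_cluster_info(payload: bytes, token_secret: str) -> str:
    # Symmetric scheme: the signer and every verifier use the same
    # shared token secret as the MAC key.
    return hmac.new(token_secret.encode(), payload, hashlib.sha256).hexdigest()

token_secret = "0123456789abcdef"  # hypothetical leaked token secret
genuine = b'{"server": "https://10.0.0.1:6443"}'
tampered = b'{"server": "https://attacker:6443"}'

real_sig = sign_cluster_info(genuine, token_secret)
# Anyone holding the token can produce an equally valid signature over
# a payload that points the joining node at an attacker's API server.
forged_sig = sign_cluster_info(tampered, token_secret)
```

An asymmetric scheme would instead let joining nodes verify with a key they cannot sign with, which is the direction proposed above.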
/sig auth
This is targeted at v1.8; @jbeda, @mattmoyer, myself, and some people from sig-auth are going to work on getting this to beta.
In flight for v1.8 to get this feature to beta:
- Update `kubeadm join` to make the flow asymmetric: https://github.com/kubernetes/kubernetes/pull/49520 (related gdoc proposal) (assignee: @mattmoyer)
- `kubeadm init`: https://github.com/kubernetes/kubeadm/issues/343 (assignee: @mattmoyer)
- Rename `--experimental-bootstrap-token-auth` to `--enable-bootstrap-token-auth`: https://github.com/kubernetes/kubernetes/issues/50613 (assignee: @mattmoyer)
- Move to the `--enable-bootstrap-token-auth` flag where supported: https://github.com/kubernetes/kubeadm/issues/414 (assignee: @luxas)
- Enable the BootstrapSigner, TokenCleaner and BootstrapTokenAuthenticator by default

@jbeda @ericchiang @mattmoyer Please fill in here if there was something I forgot.
I would not expect the BootstrapTokenAuthenticator enabled by default... new authn/authz modes are opt-in
@liggitt I'm ok with that, just thought that the procedure was to enable authn/authz modules by default as well as the controllers.
Reflected the statement above, thanks!
@jbeda @luxas please update the feature description with the new template: https://github.com/kubernetes/features/blob/master/ISSUE_TEMPLATE.md
@idvoretskyi Updated with the new release note template
@jbeda PTAL and see if that looks good to you as well
All the checkboxes above are checked :sunglasses:, ready for beta!
Thanks @mattmoyer @wanghaoran1988!
What do we need to get bootstrap tokens to stable/GA in 1.9?
I'm ready to declare victory now. Are there any outstanding issues?
On Thu, Oct 12, 2017 at 8:53 AM Matt Moyer notifications@github.com wrote:
What do we need to get bootstrap tokens to stable/GA in 1.9?
@jbeda Rate-limiting the public configmap :smile:
Things we can do to further lock this down:
I thought I saw someplace that there was a global rate limit for anonymous access but I can't find it now.
2 and 3 sound good, but oftentimes folks will use an LB that rewrites the source address, which will make this less than ideal.
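The rate-limiting idea could be as simple as a per-source token bucket in front of the anonymous cluster-info endpoint. Below is a generic sketch (not tied to any real apiserver code; the rate and burst numbers are arbitrary) that also shows why a source-rewriting LB is a problem, since all clients then collapse into one bucket:

```python
import time

class TokenBucket:
    """Per-source token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate, burst):
        self.rate = rate      # tokens refilled per second
        self.burst = burst    # bucket capacity
        self.buckets = {}     # source -> (tokens, last_seen)

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(source, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[source] = (tokens, now)
            return False
        self.buckets[source] = (tokens - 1, now)
        return True

limiter = TokenBucket(rate=1.0, burst=3)
# Four rapid requests from one source: the burst admits three,
# the fourth is rejected until the bucket refills.
results = [limiter.allow("10.0.0.1", now=100.0) for _ in range(4)]
# If an LB rewrites every client's source address to its own,
# all clients share a single bucket and starve each other.
```

Here `results` is `[True, True, True, False]`, while a distinct source at the same instant would still be admitted.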
@luxas @jbeda stable for 1.9, right?
The official label is beta in v1.9 (but it's kinda stable :smile:)
@jbeda :wave: Please indicate in the 1.9 feature tracking board whether this feature needs documentation. If yes, please open a PR and add a link to the tracking spreadsheet. Thanks in advance!
@jbeda @mattmoyer I guess we could remove the v1.9 milestone here as nothing changed between v1.8 and v1.9 for this feature. We'll target GA in a future release.
@zacharysarah no new docs needed
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
@luxas @jbeda
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate:
stage/{alpha,beta,stable}
sig/*
kind/feature
cc @idvoretskyi
Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
This feature currently has no milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information:
Set the following:
Once this feature is appropriately updated, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
P.S. This was sent via automation
Hi
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet
Thanks!
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Enhancement issues opened in kubernetes/enhancements should never be marked as frozen.
Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.
/remove-lifecycle frozen
@luxas Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and I want to remind you that all enhancements must have a KEP.
Hello @luxas, I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet. This will also require a KEP to be included. If this feature has been abandoned please let us know.
Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
Hi @luxas @roberthbailey, I'm the 1.16 Enhancement Lead/Shadow. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining the requirements for each alpha/beta/stable stage.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
Hey there @roberthbailey @luxas -- 1.17 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha/beta/stable in 1.17?
The current release schedule is:
If you do, I'll add it to the 1.17 tracking sheet (https://bit.ly/k8s117-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
We'll also need to convert the design proposal into a KEP. To be accepted in the release, all enhancements MUST have a KEP, the KEP MUST be merged, in an implementable state, and have both graduation criteria/test plan.
Thanks!
Please check with @timothysc / @justinsb.
Thanks @roberthbailey! @timothysc @justinsb ping on this? Enhancement freeze will be Tuesday, October 15, EOD PST. Thanks!
/close
After the discussion in the kubeadm office hours with @timothysc, we decided to close this ticket as out of date.
@neolit123: Closing this issue.
In response to this:
/close
after the discussion in the kubeadm office hours with @timothysc we decided to close this ticket as out of date.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.