@calebamiles, this one is also targeted at 1.8; the tasks are tracked in kubernetes/kubernetes#42001.
@k82cn the feature submission deadline has passed (Aug 1). Please submit a feature exception (https://github.com/kubernetes/features/blob/master/EXCEPTIONS.md) to have this feature present in the 1.8 release.
@idvoretskyi - the exception template isn't appropriate here. The feature will be ready in time, and filing an exception for a late feature description isn't covered there.
@k82cn @gmarek it appears that this feature has been in progress for some time, but the feature repo issue was just filed late. Is that correct? In any case, we need you to complete the one-line description of the feature and add it to the release notes draft so the user community can appreciate the great work you've done.
@jdumars - sure, this makes sense. Where is the release notes draft that we should add it to?
@gmarek this file https://github.com/kubernetes/features/blob/master/release-1.8/release_notes_draft.md
Thanks so much!
thanks; I'll follow up to draft a release note.
Should we add the 1.8 milestone?
@k82cn alpha or beta targeted for v1.8?
The alpha release target is 1.8 :).
Perfect, thank you!
Is the work for this to go to alpha finished? I couldn't quite figure it out from reading https://github.com/kubernetes/kubernetes/issues/42001
@davidopp, all the functional code is done; I'll add e2e tests early this week, and gmarek@ has created a PR for the docs. The task list in kubernetes/kubernetes#42001 shows what has been done.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/remove-lifecycle stale
We're going to track alpha -> beta in this issue.
Beta release target: 1.11
/sig node
@k82cn
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate:
stage/{alpha,beta,stable}
sig/*
kind/feature
cc @idvoretskyi
/stage beta
/sig node
/kind feature
Thanks for the update!
@k82cn please fill out the appropriate line item of the 1.11 feature tracking spreadsheet and open a placeholder docs PR against the release-1.11 branch by 5/25/2018 (tomorrow as I write this) if new docs or docs changes are needed and a relevant PR has not yet been opened.
@k82cn Looks like we still need some docs to get this feature ready for release
https://github.com/kubernetes/features/issues/382
Could I please have your help with that? If there's anything I can do to assist, please let me know.
/milestone v1.12
@mistyhacks, @zparnold, after going through the status, there are still 4 PRs under review; it's hard to make 1.11, so I'd like to move this to the next release.
@justaugustus @idvoretskyi FYI
@k82cn @kubernetes/sig-node-feature-requests @kubernetes/sig-scheduling-feature-requests --
This feature was removed from the previous milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information:
Set the following:
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
/cc @justaugustus @kacole2 @robertsandoval @rajendar38
@justaugustus I'm helping @k82cn to promote this feature to beta in 1.12
Here is the info:
Upgrade TaintNodesByCondition to Beta #62109 records everything related to the code changes and website changes.
Website doc update: https://github.com/kubernetes/website/pull/9626
Thanks for the update! This has been added to the 1.12 Tracking sheet.
Hey there! @k82cn I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect it?
@zparnold here is the one kubernetes/website#9626
Thank you!
Hi folks,
Kubernetes 1.13 is going to be a 'stable' release since the cycle is only 10 weeks. We encourage no big alpha features and will only consider adding this feature if you have a high level of confidence it will make code slush by 11/09. Are there plans for this enhancement to graduate to beta/stable within the 1.13 release cycle? If not, can you please remove it from the 1.12 milestone or add it to 1.13?
We are also now encouraging every new enhancement to align with a KEP. If a KEP has been created, please link to it in the original post. Please take the opportunity to develop a KEP.
@ameukam this feature was promoted to beta in 1.12, and we don't plan to promote it to GA in 1.13.
So removing the milestone label is fine.
/milestone clear
@Huang-Wei: You must be a member of the kubernetes/kubernetes-milestone-maintainers github team to set the milestone.
In response to this:
/milestone clear
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@kacole2 could you help to clear the milestone label? Thanks.
/milestone clear
Could you please change the design proposal link?
A working one:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/taint-node-by-condition.md
@gliush, thanks; updated :)
Will this be graduating to stable in v1.14?
Considering 1.13's short release cycle, I'd like to graduate this feature to GA in 1.15 :)
Do you mean "GA in 1.14"?
No, I'd like to hold off on GA for this feature because of https://github.com/kubernetes/kubernetes/issues/72129 :)
@k82cn Hello, enhancement lead for 1.14 here - based on the previous comments it sounds like we don't want to track this feature for 1.14, but I wanted to confirm - thanks!
Hello @k82cn , I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet. This will also require a KEP for implementation.
Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
This feature is currently in Beta.
@k82cn, I think we should consider graduating this feature to GA in 1.15. Is there any blocking item?
There are no blocking items; I agree to graduate this to GA in 1.15.
/milestone v1.15
/stage stable
@k82cn @bsalamat I've added this to the tracking sheet. However, I don't see a KEP within the repo. Is there one that is currently in progress? The design proposal process has been phased out.
@k82cn Could you please write a graduation KEP for this feature?
My understanding is that there are still scenarios where the node controller, scheduler, and kubelet could race, especially during updates. Also, we don't have a test case that actually simulates all three (node controller, scheduler, and kubelet) up and running. I believe we have to address these before graduating this feature to GA.
@ravisantoshgudimetla That's correct. I remember that we wanted to apply the taint at updates as well, but we decided not to do it (at least in the first version). The race condition on update exists even without this feature. Node conditions (the feature that predates this one) have the same race condition. It is the kind of race condition that is inevitable in large distributed systems. The general solution for such race conditions is to tolerate and resolve them, as opposed to avoiding them. K8s already has a mechanism to tolerate and resolve the race on node updates. We should decide whether further action is needed before graduating the feature.
I think before this feature, we were using the Node conditions in the scheduler, whereas now we skip those conditions and let taints be used for scheduling. You're right that they would eventually be handled: for example, a pod might be scheduled to a node before the NotReady taint has been applied by the node controller, and eventually the node controller might evict that pod because it doesn't have a toleration for the NotReady taint.
I believe @k82cn had some reservations with the above approach of eventual consistency and wanted taints to be available in node.Status or some other place, to ensure that node conditions are atomically updated along with taints on the node (https://github.com/kubernetes/kubernetes/pull/72548#discussion_r245534752). Please correct me if I am wrong.
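To make that eventual-consistency behavior concrete, here is a minimal sketch (an illustrative example, not code from this feature or thread) of a PodSpec that tolerates the node.kubernetes.io/not-ready NoExecute taint for a bounded time, so a pod caught in the window around the taint being applied is not evicted immediately; the 300-second grace period and image name are assumed values.

```go
// Minimal sketch, assuming the k8s.io/api module is available. Illustrative
// example only (not code from this issue): a PodSpec that tolerates the
// NoExecute taint set for the NotReady condition, so the pod rides out the
// transient window instead of being evicted immediately.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	grace := int64(300) // hypothetical grace period in seconds; tune per workload

	spec := corev1.PodSpec{
		Containers: []corev1.Container{
			{Name: "app", Image: "example.com/app:latest"}, // placeholder image
		},
		Tolerations: []corev1.Toleration{
			{
				// Taint key applied by the node lifecycle controller for Ready=False.
				Key:               "node.kubernetes.io/not-ready",
				Operator:          corev1.TolerationOpExists,
				Effect:            corev1.TaintEffectNoExecute,
				TolerationSeconds: &grace, // evict only if the taint persists longer than this
			},
		},
	}

	fmt.Printf("tolerations: %+v\n", spec.Tolerations)
}
```

The same toleration can of course be written directly in pod YAML; the point is only that eviction for the not-ready taint becomes bounded by tolerationSeconds rather than immediate.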
taints to be available in node.Status or some other place to ensure that node conditions are atomically updated along with taints on the node.
Yes, I used to have such a proposal, and I think that's a long-term solution in which we may put some other "items" in node.Status. But for TaintNodeByCondition, Bobby & Liggit's solution at https://github.com/kubernetes/kubernetes/issues/72129#issuecomment-455657542 is good enough. My suggestion is to graduate this feature to GA for its target use case, and start the node.Status discussion as a separate thread, since that would be a fundamental change to tolerations/taints or node conditions; that may be a long, long discussion :).
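For anyone catching up on what the condition-vs-taint discussion above refers to, here is a rough, assumed sketch of the mapping TaintNodesByCondition is about: the node lifecycle controller mirrors node conditions as taints, and the scheduler consumes the taints instead of inspecting the raw conditions. The authoritative mapping lives in kubernetes/kubernetes; the keys below are the ones documented for these conditions, and the map itself is illustrative.

```go
// Illustrative only: an assumed approximation of the condition-to-taint
// relationship behind TaintNodesByCondition. The real mapping is maintained
// in the node lifecycle controller in kubernetes/kubernetes.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// conditionTaintKey maps a node condition type to the taint key documented
// for it. Ready is special-cased in practice: Ready=False corresponds to
// node.kubernetes.io/not-ready, while Ready=Unknown corresponds to
// node.kubernetes.io/unreachable; spec.unschedulable (not a condition)
// corresponds to node.kubernetes.io/unschedulable.
var conditionTaintKey = map[corev1.NodeConditionType]string{
	corev1.NodeReady:              "node.kubernetes.io/not-ready",
	corev1.NodeMemoryPressure:     "node.kubernetes.io/memory-pressure",
	corev1.NodeDiskPressure:       "node.kubernetes.io/disk-pressure",
	corev1.NodePIDPressure:        "node.kubernetes.io/pid-pressure",
	corev1.NodeNetworkUnavailable: "node.kubernetes.io/network-unavailable",
}

func main() {
	for cond, taint := range conditionTaintKey {
		fmt.Printf("%s -> %s\n", cond, taint)
	}
}
```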
@bsalamat @k82cn , Kubernetes 1.15 Enhancement Freeze is 4/30/2019. To be included in the Kubernetes 1.15 milestone, KEPs are required to be merged and in an "Implementable" state with proper test plans and graduation criteria. Please submit any PRs needed to make this KEP adhere to inclusion criteria. If this will slip from the 1.15 milestone, please let us know so we can make appropriate tracking changes.
@k82cn Do you think you will have the time to have a KEP for the graduation and work on this feature for 1.15?
@bsalamat @k82cn , Enhancement Freeze for Kubernetes 1.15 has passed and this did not meet the deadline. This is now being removed from the 1.15 milestone and the tracking sheet. If there is a need for this to be in 1.15, please file an Enhancement Exception. Thank you.
/milestone clear
Hi @k82cn @bsalamat, I'm a 1.16 Enhancement Shadow. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining the requirements for each alpha/beta/stable stage. You can see the graduation criteria, which is required information, in the KEP Template.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
We'd like to promote this feature to GA in 1.17 :)
Hey there @k82cn -- 1.17 Enhancements lead here. I wanted to check in and see if you think this Enhancement will be graduating to stable in 1.17?
The current release schedule is:
If you do, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly.
Thanks!
Hi @k82cn -- 1.17 enhancements shadow here.
Just a friendly reminder. We are just 5 days away from the Enhancements Freeze.
For the KEP to be considered for 1.17, it needs to be merged and in an implementable state.
ack :)
It looks like a KEP was merged yesterday, and it references previous tests. Is there any way it can be updated to link to those or possibly a testgrid dashboard?
Also, is there anything else that is going to be done to bump it to GA? Or is it more "it's been stable enough that we have enough signal to promote to GA"?
Thanks!
Hi there, release 1.17 lead here. :)
I would like to second the concern about missing visibility into a testgrid dashboard for this. Can you please update the KEP with a link?
I'll send the PR to update the KEP, thanks for the reminder.
Hello @k82cn I'm one of the v1.17 docs shadows.
Does this enhancement (or the work planned for v1.17) require any new docs (or modifications to existing docs)? If not, can you please update the 1.17 Enhancement Tracker Sheet (or let me know and I'll do so)?
If so, just a friendly reminder that we're looking for a PR against k/website (branch dev-1.17) due by Friday, November 8th; it can just be a placeholder PR at this time. Let me know if you have any questions!
I created https://github.com/kubernetes/website/pull/17073 to update the documents, but I'm not sure whether it covers all the documents we need to update.
I'm not sure about that either. cc @k82cn - in any case, you know this taint-nodes-by-condition feature well.
@draveness @k82cn Can you confirm if this is the final doc? - https://github.com/kubernetes/website/pull/17073
Hey there! Jeremy from the 1.17 enhancements team here. Code freeze is coming up quickly (Nov 14) and I wanted to touch base to see how this was going. Are there any in-progress k/k PRs that we can track for the work getting this to GA? I see docs PRs associated with this; are there any other PRs we should be tracking for the release?
Here is the PR for the code change, https://github.com/kubernetes/kubernetes/pull/82703, and the related issue, https://github.com/kubernetes/kubernetes/issues/82635.
Hey @draveness @k82cn, Happy New Year! 1.18 Enhancements lead here. Thanks for getting this across the line in 1.17!!
I'm going through and doing some cleanup for the milestone and checking on things that graduated in the last release. Since this graduated to GA in 1.17, I'd like to close this issue out, but the KEP is still marked as implementable. Could you submit a PR to update the KEP to implemented, and then we can close this issue out?
Thanks so much!
This feature has already graduated to GA; I think we can close it now.
/close
@draveness: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@draveness yes, I was just asking if you could update the status of the KEP :) It's still marked as implementable, but should be updated to say implemented... since it was :)
Thanks for the reminder; #1431 has been merged.