This feature extends the current pod specification with support for namespaced kernel parameters (sysctls) set per pod.
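As a sketch of the shape this proposal eventually took (the fields shown match the beta API that landed in v1.11; the pod name and image are illustrative only):

```yaml
# Sketch: namespaced sysctls set per pod via the pod-level securityContext.
# Pod name and image are illustrative, not from this issue.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced    # in the default safe set
      value: "1"
  containers:
  - name: app
    image: busybox
```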
_FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers._
FEATURE_STATUS: IN_DEVELOPMENT
More advice: Design, Coding, Docs.
@kubernetes/docs here are the sysctl docs: https://github.com/kubernetes/kubernetes.github.io/pull/1126
/cc @kubernetes/feature-reviewers
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
There are a number of people using sysctls now. I have not heard any issues with them.
I suggest promoting the current API (transformed into native fields in the PSP and on pods) to beta for 1.11.
@jeremyeder @vishh @derekwaynecarr @php-coder
@kubernetes/sig-node-api-reviews
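A hedged sketch of what "native fields in the PSP" could look like (field names as they later landed in the policy API; the policy name is made up, and the usual required PSP fields are elided):

```yaml
# Sketch: sysctl control surface on a PodSecurityPolicy (illustrative only).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sysctl-psp             # hypothetical name
spec:
  allowedUnsafeSysctls:
  - "net.core.somaxconn"
  forbiddenSysctls:
  - "kernel.*"
  # ...plus the usual required PSP fields (seLinux, runAsUser, fsGroup, etc.)
```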
Thanks, @sttts!
@sttts it needs a feature gate.
From the node side, it would be @sjenning who could help push this in sig-node. Will sync w/ @dchen1107 next week. We discussed this briefly in last week's sig-node.
@derekwaynecarr in the kubelet not much would change code-wise. But of course we need a "go" from the node team that they think using sysctls is safe enough for beta. Note that graduation to beta does not say anything about extending the list of safe sysctls.
It's already feature gated. As beta we would switch the default to true.

Doesn't look like we had a feature gate after all: https://github.com/sjenning/kubernetes/commit/f4f722011d98de77a36cfd0b38c6dd03f421e08a
@sttts
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate labels:
stage/{alpha,beta,stable}
sig/*
kind/feature
cc @idvoretskyi
@sttts Do we need to wait until pod annotations become fields or it doesn't block us from graduating it to beta?
yes, they need to become fields
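For context, a hedged sketch of the alpha shape being discussed here, where sysctls were carried as pod annotations rather than fields (annotation keys and the pod name are from memory and illustrative):

```yaml
# Alpha shape sketch: sysctls as pod annotations (comma-separated name=value).
# Annotation keys as recalled from the alpha implementation; pod details are
# illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
  annotations:
    security.alpha.kubernetes.io/sysctls: "kernel.shm_rmid_forced=1"
    security.alpha.kubernetes.io/unsafe-sysctls: "net.core.somaxconn=1024"
spec:
  containers:
  - name: app
    image: busybox
```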
@php-coder @liggitt so just to clarify, no work planned for 1.11?
Also, would you mind updating the description to fit the new feature description template?
@justaugustus promotion to beta is discussed in sig-node /cc @derekwaynecarr
/remove-lifecycle stale
@justaugustus - per sig-node planning, goal is to promote to beta.
I have updated assignees with those doing development and review.
@derekwaynecarr thanks for the update!
Working on the KEP for the graduation here: https://github.com/kubernetes/community/pull/2093
Are there also plans to include more sysctls in the safe set as part of this? My company would definitely make use of the ability to set net.ipv4.tcp_keepalive_time, tcp_keepalive_intvl, and tcp_keepalive_probes on a per-pod basis.
Example use: Java applications that depend on TCP keepalive, but which rely on the standard Socket class, can turn keepalive on with that class, but can't set those three parameters.
There are also 2 open PRs for adding more safe sysctls: https://github.com/kubernetes/kubernetes/pull/54896 and https://github.com/kubernetes/kubernetes/pull/55011
@twilfong see my comment https://github.com/kubernetes/kubernetes/pull/54896#issuecomment-344541244. We are open to adding more sysctls to the safe set, but we need a kernel-source analysis of why it is safe. Note that unsafe sysctls can also be used, but they must be whitelisted in the kubelet.
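For illustration, whitelisting unsafe sysctls on a kubelet can be sketched via its configuration file (field name `allowedUnsafeSysctls` as in the v1beta1 KubeletConfiguration; an equivalent `--allowed-unsafe-sysctls` command-line flag exists as well — the specific entries below are examples, not recommendations):

```yaml
# Sketch: allow specific unsafe (but namespaced) sysctls on this kubelet.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
allowedUnsafeSysctls:
- "net.core.somaxconn"
- "kernel.msg*"          # glob patterns cover whole sysctl groups
```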
Thanks @php-coder and @sttts.
@sttts: I've read your comment and read through https://github.com/kubernetes/community/pull/700/files#diff-0e864ea85fc8d72b3bd0b0f39c34d143R342 and understand the basic requirements for whitelisting.
I have verified that the three net.ipv4.tcp_keepalive_* parameters are namespaced in the net namespace, but have not done an analysis to determine whether any memory use influenced by the sysctl is accounted to the associated cgroup.
My guess is that this should meet the bar of not causing harm to the node or other containers on the same node where the pod with changed kernel parameter is run, since the keepalive parameters only control the timing of keepalive probes and when the socket is closed. (e.g. there should be no difference in memory allocation for any given socket, regardless of how these parameters are set.)
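For what it's worth, a quick first-pass check of these parameters on a Linux host can look like the sketch below (this is not a substitute for a kernel-source analysis; the `unshare` step that actually demonstrates per-netns copies needs privileges, so it is only shown in a comment):

```shell
# Print current values of the three keepalive parameters from the
# per-network-namespace /proc/sys/net tree.
for p in tcp_keepalive_time tcp_keepalive_intvl tcp_keepalive_probes; do
  printf '%s = %s\n' "$p" "$(cat /proc/sys/net/ipv4/$p)"
done
# With privileges, a fresh netns exposes its own independent copies:
#   sudo unshare --net sysctl net.ipv4.tcp_keepalive_time
```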
What is the recommended way to move forward with this? Should my team do a deeper analysis and then submit a pull request touching pkg/kubelet/sysctl/whitelist.go and pkg/kubelet/sysctl/whitelist_test.go? Or is there a different (better) recommended way to go about this?
@twilfong I would suggest adding a convincing discussion to the proposal in the community repo for documentation, plus a counterpart PR in k/k against the whitelist. @sjenning @derekwaynecarr @vishh are the ones who can review this.
Promotion of annotations to API fields PR: https://github.com/kubernetes/kubernetes/pull/63717
@sttts please fill out the appropriate line item of the 1.11 feature tracking spreadsheet and open a placeholder docs PR against the release-1.11 branch by 5/25/2018 (tomorrow as I write this) if new docs or docs changes are needed and a relevant PR has not yet been opened.
@ingvagabund ^^
Feature issues opened in kubernetes/features should never be marked as frozen. Feature Owners can ensure that features stay fresh by consistently updating their states across release cycles.
/remove-lifecycle frozen
@sttts This feature was worked on in the previous milestone, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.12 since there is nothing in the original post.
If there are any updates, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
@kacole2 there is nothing planned to my knowledge in 1.12 about this feature. /cc @derekwaynecarr @ingvagabund @sjenning
Thanks for the update, @sttts!
Can you modify this issue description to match the issue template?
@justaugustus @derekwaynecarr we need an owner of this feature. Is it sig-node?
@sttts -- Based on the comment history, looks like this belongs to SIG Node & @derekwaynecarr.
Happy to chase people down if that isn't sufficient though.
Hi
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet
Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Hello @sttts @krmayankk , I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet. As usual, a formal KEP will need to be merged for this to be included in 1.15. The KEP that @ingvagabund created at kubernetes/community#2093 needs to be migrated.
Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
Nothing planned here as far as I know.
Hi @sttts @krmayankk , I'm the 1.16 Enhancement Lead. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining each alpha/beta/stable stages requirements.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
Hello @sttts @sjenning @derekwaynecarr @ingvagabund, 1.17 Enhancement Shadow here! 🙂
I wanted to reach out to see if this enhancement will be graduating to alpha/beta/stable in 1.17?
Please let me know so that this enhancement can be added to 1.17 tracking sheet.
Thank you!
🔔Friendly Reminder
A Kubernetes Enhancement Proposal (KEP) must meet the following criteria before Enhancement Freeze to be accepted into the release:
The KEP must be merged in an implementable state
All relevant k/k PRs should be listed in this issue
I am not aware of a graduation.
@sttts Thank you for letting me know, I will remove this from v1.17 release 👍
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hey there @sttts -- 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?
In order to have this be part of the release:
The current release schedule is:
If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
Thanks!
Hi there @sttts , @derekwaynecarr ,
Kind reminder about my question above.
Regards,
Mirek
Hey @sttts @derekwaynecarr , Enhancement shadow for the v1.19 release cycle here. Just following up on my earlier update to inform you of the upcoming Enhancement Freeze scheduled on Tuesday, May 19.
Regards,
Mirek
@sttts @derekwaynecarr -- Unfortunately the deadline for the 1.19 Enhancement freeze has passed. For now this is being removed from the milestone and 1.19 tracking sheet. If there is a need to get this in, please file an enhancement exception.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hi @sttts @derekwaynecarr
Enhancements Lead here. Any plans to graduate this in 1.20?
Thanks!
Kirsten
Nothing planned afaik.