PRs
This work is being done by @pweil- and reviewed by @derekwaynecarr; it is sponsored by @kubernetes/sig-node.
@derekwaynecarr Could you help create a user story card for this feature?
@derekwaynecarr can you confirm that this feature targets alpha for 1.5?
> @derekwaynecarr can you confirm that this feature targets alpha for 1.5?
Yes, this feature is experimental only so it would be considered alpha.
@derekwaynecarr @pweil- can you confirm that this item targets beta in 1.6?
@derekwaynecarr, the proposal https://github.com/kubernetes/kubernetes/pull/34569 was closed by the bot due to inactivity.
@pweil-, in https://github.com/kubernetes/kubernetes/pull/34569#issuecomment-273531120 you've proposed the approach https://github.com/pweil-/kubernetes/commit/16f29ebb076dfa3c44c7c4669d55fe21c206e149, which changes the group of `/var/lib/kubelet/pods` to the remapped root group. Do I understand it correctly that this is currently not tracked in any pull request?
@pweil-, I also wonder if, similar to docker's `/var/lib/docker/<uid>.<gid>` approach when `--userns-remap` is used, it might make sense to use `/var/lib/kubelet/pods-<uid>.<gid>` and just chown/chgrp everything in those subdirectories to the remapped `<uid>.<gid>`. Why did you opt for just the chgrp and not the full chown?
@adelton in the end, I think having this be transparent to Kubernetes is the right approach. Whether that be something like shiftfs or implementation in the CRI (https://github.com/moby/moby/issues/28593). You are correct that my existing proposal is not currently tracked in an open PR anymore.
The reasoning behind using the chgrp was to follow our `fsGroup` strategy, where we just ensure group access instead of uid access.
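For reference, the `fsGroup` mechanism referred to here is the standard pod-level `securityContext` field; a minimal sketch (names are the standard Kubernetes API, values are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    # Volumes that support ownership management are made group-owned
    # by GID 2000 and group-writable, so the container gets access via
    # group membership rather than a particular uid.
    fsGroup: 2000
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id && ls -ld /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
```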
Thanks @pweil-.
When you say transparent, do you mean that nothing would need to be added to the code or configuration on Kubernetes' side to allow running under docker with `userns-remap`?

As for the `fsGroup` strategy, do you mean https://kubernetes.io/docs/concepts/policy/pod-security-policy/#fsgroup or some generic methodology within Kubernetes?
I have now filed https://github.com/kubernetes/kubernetes/pull/55707 as an alternative approach where I make the remapped uid/gid an explicit option, and use those values to chown/chgrp the necessary directories.
> When you say transparent, do you mean that nothing would need to be added to the code or configuration on Kubernetes' side to allow running under docker with `userns-remap`?

That would be ideal. Whether that is feasible (or, more likely, feasible in an acceptable time frame) is another question :smile:
> As for the `fsGroup` strategy, do you mean https://kubernetes.io/docs/concepts/policy/pod-security-policy/#fsgroup or some generic methodology within Kubernetes?

Yes.
> I have now filed kubernetes/kubernetes#55707 as an alternative approach where I make the remapped uid/gid an explicit option, and use those values to chown/chgrp the necessary directories.

:+1: subscribed
> When you say transparent, do you mean that nothing would need to be added to the code or configuration on Kubernetes' side to allow running under docker with `userns-remap`?
>
> That would be ideal. Whether that is feasible (or, more likely, feasible in an acceptable time frame) is another question

Ideally, the pod would specify how many distinct uids/gids it requires, or the list of uids it wants to see inside its containers, and docker or a different container runtime would set up the user namespace accordingly. But unless docker also changes ownership of the volumes mounted into the containers, Kubernetes will have to do that as part of the setup.
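To make that concrete, a purely hypothetical sketch of what such a pod-level request might look like (the `userNamespace` block and its fields do not exist in the Kubernetes API; the names are invented here for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-example
spec:
  # Hypothetical block: ask the runtime to set up a user namespace
  # with this many distinct host uids/gids mapped into the pod.
  userNamespace:
    idRangeSize: 65536   # invented field: how many uids/gids the pod needs
    insideRootUID: 0     # invented field: the uid the workload sees as root
  containers:
  - name: app
    image: nginx
```

The runtime would pick an unused host ID range for the mapping, and, per the comment above, Kubernetes would still have to chown the pod's volumes into that range.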
@pweil-, what is the best way to get some review and comments on https://github.com/kubernetes/kubernetes/pull/55707, to get it closer to a mergeable state?
@pweil- ^
@adelton I would try to engage the sig-node folks either at their Tuesday meeting or on slack: https://github.com/kubernetes/community/tree/master/sig-node
@derekwaynecarr, could you please bring https://github.com/kubernetes/kubernetes/pull/55707 to sig-node's radar?
@pweil- @derekwaynecarr is any progress expected on this feature?
i will raise this topic in k8s 1.11 planning for sig-node.
Just leaving a note here:
Because most Kubernetes deployments have kube-system services that require root privileges on the host (e.g. overlay networks), we will need to support `--userns=host` and extend PodSecurityPolicies with a permission controlling its use.
Edit: Although I assume that allowing `pid=host` would include allowing `userns=host` too.
Edit: Alright, that is actually exactly https://github.com/kubernetes/kubernetes/pull/31169.
> we will need to support `--userns=host` and extend PodSecurityPolicies with a permission controlling its use

PSP already has such support: see `HostNetwork`.
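For reference, these are the existing PSP fields that gate access to host namespaces; a host-user-namespace permission would presumably sit alongside them (a minimal sketch, not an official example):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  # Existing fields controlling access to host namespaces.
  hostNetwork: false
  hostPID: false
  hostIPC: false
  # Required baseline fields for any PSP.
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```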
@pweil- @derekwaynecarr
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate labels:
- `stage/{alpha,beta,stable}`
- `sig/*`
- `kind/feature`
cc @idvoretskyi
@justaugustus I am not actively working on this, so I'll defer to @adelton. @adelton, can you comment on the roadmap here? Thanks!
We are working on a KEP for sig-node.
@justaugustus
I have done the following:
@derekwaynecarr thanks for the update!
@derekwaynecarr, https://github.com/kubernetes/community/pull/2042 was superseded by https://github.com/kubernetes/community/pull/2067.
Thanks @adelton, I've updated the description to reflect that.
@derekwaynecarr please fill out the appropriate line item of the 1.11 feature tracking spreadsheet and open a placeholder docs PR against the `release-1.11` branch by 5/25/2018 (tomorrow as I write this) if new docs or docs changes are needed and a relevant PR has not yet been opened.
@derekwaynecarr -- What's the current status of this feature?
As we haven't heard from you with regards to some items, this feature has been moved to the *Milestone risks* sheet within the 1.11 Features tracking spreadsheet.
Please update the line item for this feature on the *Milestone risks* sheet ASAP and ping myself and @idvoretskyi, so we can assess the feature status; otherwise we will need to officially remove it from the milestone.
@justaugustus @mistyhacks the PR is almost merged https://github.com/kubernetes/kubernetes/pull/64005
Just needs API approval which we should have before freeze.
@sjenning what's the Docs status? That's the reason the feature is currently listed as *At Risk*.
@justaugustus docs will also be updated in coming couple of days.
@vikaschoudhary16 thanks for the update.
This feature will remain in the *Milestone Risks* sheet until the Docs columns of the Features tracking spreadsheet are updated.
Please ping @idvoretskyi, @mistyhacks, and myself to let us know once this is updated, so we can clear it for the release.
Looks like this is slipping to 1.12 from kubernetes/kubernetes#64005
Moving this out of 1.11 based on this comment.
@derekwaynecarr @pweil- @sjenning @kubernetes/sig-node-feature-requests --
This feature was removed from the previous milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information:
Happy shipping!
/cc @justaugustus @kacole2 @robertsandoval @rajendar38
This feature currently has no milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information:
Set the following:
Once this feature is appropriately updated, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
P.S. This was sent via automation
We are planning for this in 1.12.
Thanks. It's been added to the 1.12 tracking sheet.
updated description to capture that it slipped 1.11 and is being tracked in 1.12 as alpha.
Hey there! @derekwaynecarr I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect it?
The PR is still under review from @vikaschoudhary16, but he should be able to open a placeholder document.
Thanks! @vikaschoudhary16 let me know when the PR is up please. 😄
@vikaschoudhary16 @derekwaynecarr @mrunalp --
Any update on docs status for this feature? Are we still planning to land it for 1.12?
At this point, code freeze is upon us, and docs are due on 9/7 (2 days).
If we don't hear anything back regarding this feature ASAP, we'll need to remove it from the milestone.
cc: @zparnold @jimangel @tfogo
moving to 1.13 milestone.
@derekwaynecarr do we feel confident this will hit the deadlines for 1.13? This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
Thanks!
@derekwaynecarr
Just a friendly reminder about docs and status for 1.13. Thanks!
cc: @vikaschoudhary16
@derekwaynecarr
We operate k8s clusters that allow execution of third-party containers. From a security perspective, this is an important feature we have been waiting on for a while. Please consider this a high-priority security feature and make it available in the v1.13 release.
@derekwaynecarr there has been no communication on the status but I see @spiffxp has attached a current k/k PR. Are we confident this is going to make the v1.13 milestone? Enhancement freeze is tomorrow COB. If there is no communication on this issue or activity on the PR, this is going to be pulled from the milestone as it doesn't fit with our "stability" theme. If there is no communication after COB tomorrow, an exception will be required to add it back to the milestone. Please let me know where we stand. Thanks!
lack of communication so this is being removed from 1.13 tracking.
/milestone clear
@derekwaynecarr this enhancement has been moved out of 1.13 due to lack of clarity on what's pending for it to land. We are officially in Enhancement freeze now. If this is a critical enhancement that you need added back, it will require filing an exception with details as outlined there.
(Some automation I'm testing accidentally sent out a comment, which I've deleted to not make things confusing. Sorry!)
This is still under a longer-than-anticipated discussion and has stalled for the moment. We can hopefully get past that issue in 1.14.
Is there anything we could help with? We really need this feature in gitpod.io to give our users root privileges.
@svenefftinge Please review implementation PR, https://github.com/kubernetes/kubernetes/pull/64005
Hoping to get it merged in 1.14
@derekwaynecarr Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and I want to remind you that all enhancements must have a KEP.
Hi @claurence -- KEP(Proposal) for this was already merged, kubernetes/community#2067. Hoping to get following implementation PR merged in 1.14:
kubernetes/kubernetes#64005
Along with updates to the original design proposal:
https://github.com/kubernetes/community/pull/2595
Thanks @vikaschoudhary16 - will this be implemented as alpha in 1.14 then? Based on the tagging it still has the alpha label but let me know if that is incorrect.
@vikaschoudhary16 Hello - is there a link to the KEP for this enhancement? I see links to the PR merges but I'm having trouble finding the KEP
Additionally for 1.14 are there any open PRs that should be merged for that release? if so let me know so we can add them to our sheet.
Hey @vikaschoudhary16 @derekwaynecarr 👋 I'm the v1.14 docs release lead. Just a friendly reminder we're looking for a PR against k/website (branch dev-1.14) due by Friday, March 1. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!
@claurence @jimangel We are going back and forth between implementation and design. As i mentioned in above comment, kubernetes/community#2595 is the design proposal PR that i am trying to get merged and based on that i will update implementation PR, kubernetes/kubernetes#64005.
@vikaschoudhary16 do you have a link to the KEP? There isn't one in the KEP folder that I can find - https://github.com/kubernetes/enhancements/tree/master/keps
@vikaschoudhary16 The linked proposal is not a KEP. It lacks a test plan, it lacks graduation criteria in the form of a checklist the release team can consume, and lacks discussion of upgrade/downgrade considerations. We need a KEP. It can link or reference the original design proposal to fill out some of the wordy bits around motivation, design, etc, but it needs what I just listed spelled out explicitly.
There seems to be continued unresolved discussion on the update to the proposal (ref: https://github.com/kubernetes/community/pull/2595)
I’m inclined to suggest that the release team block anything related to this landing in v1.14 until a KEP exists and is submitted through the exception process
@vikaschoudhary16 any update on a KEP for this issue? Currently this issue is at risk for the 1.14 release because we can't locate a KEP with test plans. Can you please share test plans and graduation criteria for this issue?
Really hoping this makes 1.14. Any plans to support this with Minikube as well?
As there haven't been responses to the above questions in over two weeks, and no responses in Slack when asked about it in the sig-node channel, this item is being removed from the 1.14 milestone.
As a result of the `runc` CVE-2019-5736, this feature now seems extremely relevant; re-building existing images to run as non-root (and addressing all the issues that arise from that change) is _substantially_ more lift than setting a single configuration option to map UID 0 to some other non-privileged user on the Node.
What do we need to do to get the ball rolling again?
I'm the enhancement lead for 1.15. Please let me know if this issue will have any work involved for this release cycle and update the original post to reflect it. It will not be added to the tracking sheet otherwise. Thanks!
Hi @vikaschoudhary16, I'm the 1.16 Enhancement Lead/Shadow. Is this feature going to be graduating through the alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
@kacole2 I do not anticipate this making further progress in 1.16 as we are still stalled on approach.
(reposting here too)
Even if/until full userns mapping is implemented, having a "runInHostUserNS" flag would help, and I would argue it is required for the complete solution anyway.
Having the flag without the userns support would enable us to at least use the existing node-level Docker UID/GID remapping feature (not sure about the other runtimes). Currently the problem is that it is mutually exclusive with running workloads in the host network namespace, but some infra containers need to run there (e.g. kube-proxy).
IMO at this point it should not be a question whether we need userns support or not; the only question is how soon, and in what form :)
https://twitter.com/ChaosDatumz/status/1158556519623024642
Having a Pod-spec-configurable run-in-host-user-ns flag, and using it together with the existing runtime-level feature(s), could serve as a temporary bridge solution until the real deal is introduced, and it would not be wasted effort even in the long run. A sketch of what I mean follows.
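A purely hypothetical sketch of such a bridge flag (the `runInHostUserNS` field is invented in this comment and does not exist in the Pod API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy-like
spec:
  hostNetwork: true       # existing field: infra pods like kube-proxy need this
  runInHostUserNS: true   # hypothetical field: opt this pod out of the
                          # node-level (e.g. Docker --userns-remap) remapping
  containers:
  - name: proxy
    image: k8s.gcr.io/kube-proxy  # illustrative image, tag omitted
```

Ordinary workloads would omit the flag and run remapped, while host-namespace infra pods would keep working unchanged.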
Is there a way to help push this feature forward?
Hey there @derekwaynecarr , 1.17 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha/beta/stable in 1.17?
The current release schedule is:
If you do, I'll add it to the 1.17 tracking sheet (https://bit.ly/k8s117-enhancement-tracking). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
Please note that all enhancements should have a KEP, the KEP PR should be merged, the KEP should be in an implementable state, have a testing plan and graduation criteria.
Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
This looks dead. What's the status?
There have been multiple attempts to get this done, but we are not there yet. Yes, we need this feature. Please do not close.
Hey there @prbinu @derekwaynecarr -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha in 1.18?
The current release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released
To be included in the release, this enhancement must have a merged KEP in the `implementable` status. The KEP must also have graduation criteria and a Test Plan defined.
If you would like to include this enhancement, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
We'll be tracking enhancements here: http://bit.ly/k8s-1-18-enhancements
Thanks!
As a reminder @prbinu @derekwaynecarr,
Tuesday, January 28th EOD PST - Enhancements Freeze
Enhancements Freeze is in 7 days. If you seek inclusion in 1.18 please update as requested above.
Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hey there @vikaschoudhary16 -- 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?
In order to have this be part of the release:
The current release schedule is:
If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
Thanks!
Hi there @derekwaynecarr , @vikaschoudhary16 ,
Kind reminder about my question above.
Regards,
Mirek
@msedzins Hi, I am currently working on this enhancement. It will not be graduating in 1.19 as it is still under development and testing. Thank you.
@mohammad-yazdani thank you for letting me know!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hi @derekwaynecarr @mauriciovasquezbernal @mohamedsgap
Enhancements Lead here. Any plans for this to be alpha/beta/stable in 1.20?
Thanks!
Kirsten
Hello @kikisdeliveryservice, I'll keep you updated once we have further discussion with sig-node.
Thanks,
Mauricio.