Seccomp support provides the ability to define seccomp profiles and to configure pods to run with those profiles. This includes the ability to control usage of the profiles via PSP, as well as retaining the ability to run unconfined or with the default container runtime profile.
KEP: sig-node/20190717-seccomp-ga.md
Latest PR to update the KEP: #1747
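For context, a minimal sketch of what the GA surface looks like from the pod side, written against the k8s.io/api/core/v1 types as they landed in v1.19 (the container name and image are just illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Pod-level seccomp profile via the GA SecurityContext field,
	// replacing the old seccomp.security.alpha.kubernetes.io/pod annotation.
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{
					// RuntimeDefault corresponds to the old
					// "runtime/default" annotation value.
					Type: corev1.SeccompProfileTypeRuntimeDefault,
				},
			},
			Containers: []corev1.Container{{
				Name:  "app",          // illustrative
				Image: "nginx:latest", // illustrative
			}},
		},
	}
	fmt.Println(pod.Spec.SecurityContext.SeccompProfile.Type)
}
```

Unconfined and localhost profiles are set the same way via SeccompProfileTypeUnconfined and SeccompProfileTypeLocalhost (the latter paired with the LocalhostProfile path field).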
- Design proposal: ping @kubernetes/feature-reviewers on this issue to get approval before checking this off
- API changes (/pkg/apis/...): ping @kubernetes/api
- Docs: ping @kubernetes/docs on the docs PR; ping @kubernetes/feature-reviewers on this issue to get approval before checking this off

_FEATURE_STATUS is used for feature tracking and is to be updated by @kubernetes/feature-reviewers._
FEATURE_STATUS: IN_DEVELOPMENT

More advice:
Design: once your design is reviewed by a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.
Coding: ping @kubernetes/feature-reviewers and they will …
Docs: ping @kubernetes/docs.
@derekwaynecarr @sttts @erictune didn't see an issue for this, but it is already in alpha. Creating this as a reminder to push it through to beta and stable.
@sttts could you provide the appropriate links to docs and PRs? I think you are closest to this code.
@pweil- @sttts - per our discussion, this is a feature we would like to sponsor in Kubernetes 1.6 under @kubernetes/sig-node
@pweil- @derekwaynecarr please confirm whether this feature should be set to the 1.6 milestone.
@idvoretskyi we are targeting moving it to beta in 1.6.
@sttts thanks.
Looks like this is still alpha:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/seccomp.md
https://github.com/kubernetes/kubernetes/blob/master/pkg/api/annotation_key_constants.go#L35
And I couldn't find any documentation on kubernetes.io/docs.
@pweil- any updates for 1.8? Is this feature still on track for the release?
@idvoretskyi this was not a priority for 1.8. @php-coder can you add a card to this for our PM planning? We need to stop letting this fall through the cracks and get it moved to beta and GA.
@pweil- if this feature is not planned for 1.8, please update the milestone to "next-milestone" or "1.9".
I'd like to see this get to beta. Priorities (or requirements) for that include:
- Moving the seccomp annotations to fields on the SecurityContext (see https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#alpha-field-in-existing-api-version)
- docker/default should still be allowed if the runtime is docker (for backwards compatibility)

Is anyone interested in driving this work for the 1.9 (or 1.10) milestone? @jessfraz @kubernetes/sig-auth-feature-requests and @kubernetes/sig-node-feature-requests I'm looking at you :wink:
Also relevant: https://github.com/kubernetes/community/pull/660 (do we need to settle the decisions in that PR before proceeding?)
/cc @destijl
If someone has time and wants to do it, they are more than welcome to, and I will help answer any questions.
ok I will update the proposal and start on this tomorrow if no one else will ;)
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Hey @jessfraz not sure if you got anywhere on this - I don't have bandwidth to code it, but happy to help test...
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/reopen
/lifecycle frozen
/remove-lifecycle rotten
@php-coder: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:
/reopen
/lifecycle frozen
/remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/lifecycle frozen
@smarterclayton: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:
/reopen
/lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@idvoretskyi: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Ihor 1, bot 0
@pweil- @php-coder @jessfraz
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate:
- stage/{alpha,beta,stable}
- sig/*
- kind/feature
cc @idvoretskyi
@wangzhen127 is working on it, but can't be assigned as he's not a member yet.
https://github.com/kubernetes/kubernetes/pull/62662
https://github.com/kubernetes/kubernetes/pull/62671
Thanks for the update, Tim!
/remove-lifecycle frozen
@pweil- @tallclair -- We're doing one more sweep of the 1.11 Features tracking spreadsheet.
Would you mind filling in any incomplete / blank fields for this feature's line item?
@pweil- @tallclair -- this feature has been removed from the 1.11 milestone, as there have been no updates w.r.t. progress or docs.
cc: @jberkus
@pweil- @tallclair @kubernetes/sig-auth-feature-requests @kubernetes/sig-node-feature-requests --
This feature was removed from the previous milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.
If so, please ensure that this issue is up-to-date with ALL of the following information:
Happy shipping!
/cc @justaugustus @kacole2 @robertsandoval @rajendar38
No changes planned for 1.12
Thanks for the update, @tallclair!
Is anyone interested in driving this work for the 1.9 (or 1.10) milestone? @jessfraz @kubernetes/sig-auth-feature-requests and @kubernetes/sig-node-feature-requests I'm looking at you :wink:
@tallclair I can try to pick this up now if still desirable
@stlaz: Reiterating the mentions to trigger a notification:
@kubernetes/sig-auth-feature-requests, @kubernetes/sig-node-feature-requests
In response to this:
Is anyone interested in driving this work for the 1.9 (or 1.10) milestone? @jessfraz @kubernetes/sig-auth-feature-requests and @kubernetes/sig-node-feature-requests I'm looking at you :wink:
@tallclair I can try to pick this up now if still desirable
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@stlaz, this feature is still desired. I have spent some time adding seccomp profiles to addons as the first step of #39845, but I haven't had enough time to push the feature further. It would be nice if you'd like to work on this. Any help is welcome! :)
@wangzhen127 Thanks, I was trying to go through the things that were done and the issues opened related to this feature. It would seem that the comment https://github.com/kubernetes/features/issues/135#issuecomment-331592961 still holds and summarizes exactly the work that needs to be done now.
I also noticed you were trying to add a FeatureGate for this but closed the PR. Why was that?
P.S.: Sorry for the late response, I was away for a bit.
It would seem that the comment #135 (comment) still holds and summarizes exactly the work that needs to be done now.
That is right. One more thing I would like to add is a "complain" mode, so users have the choice of getting warnings (in logs) about the use of forbidden syscalls rather than having the process killed. Logging seccomp actions is available with Linux kernel 4.14+ (seccomp doc). It is possible that older kernel versions are still being used, so we need to handle that. This will also need to be added to the OCI spec.
I also noticed you were trying to add a FeatureGate for this but closed the PR. Why was that?
The purpose of that feature gate was to change the seccomp default profile from unconfined to "runtime/default". I got many concerns about backward compatibility, so that seemed unlikely to happen. We currently lack a plan for changing security defaults in general, because those changes are breaking. The best approach I can currently think of is to push seccomp to stable while it remains an opt-in feature rather than opt-out.
Logging seccomp actions is available with Linux kernel 4.14+ (seccomp doc).
I guess since we'll be defining a Kubernetes-specific default seccomp format as part of the second step, we could also have one that logs instead. Is there enough value for it? Could it be used by people to transition from "unconfined" to "kube/default" when the latter fails too much? Would they care enough to switch to it and then switch back?
There are LTS distributions using pre-4.14 Linux kernels (Debian 8, 9; RHEL 6, 7; Ubuntu LTS 14.04, 16.04), so kernel compatibility is definitely something to keep in mind.
I got many concerns about backward compatibility, so that seemed unlikely to happen.
The container runtimes had to go through this change in the past too, when they enabled seccomp for the first time. Right now at least docker ships with a default behavior that is less permissive than "unconfined", which could have broken someone. I don't think we're doing anything wrong if we just follow the behavior of the underlying components (which also provide the choice of turning this behavior off).
Is there enough value for it?
This can be discussed. My original thought was to change the default from unconfined to logging, so we do not have a backward compatibility issue. And if we can somehow collect data and show that in X% of cases nothing is logged, meaning the default profile would not break things, then we can propose changing logging to kill. The data collection part is tricky and can be a lot of work. Even if we don't actually go that route, I think having a logging profile would benefit people who want to try seccomp out but are not confident yet.
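As a sketch of what such a logging profile could look like one layer down, assuming the OCI runtime-spec bindings (specs-go) and their SCMP_ACT_LOG action, which maps to the Linux 4.14+ log behavior discussed above:

```go
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Hypothetical "logging" profile: on a syscall not explicitly allowed,
	// log it (SCMP_ACT_LOG, Linux 4.14+) instead of killing the process.
	profile := specs.LinuxSeccomp{
		DefaultAction: specs.ActLog,
		Syscalls: []specs.LinuxSyscall{{
			Names:  []string{"read", "write", "exit_group"}, // illustrative allow-list
			Action: specs.ActAllow,
		}},
	}
	out, err := json.MarshalIndent(profile, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```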
The container runtimes had to go through this change in the past too, when they enabled seccomp for the first time. Right now at least docker ships with a default behavior that is less permissive than "unconfined", which could have broken someone. I don't think we're doing anything wrong if we just follow the behavior of the underlying components (which also provide the choice of turning this behavior off).
When docker changed the default value, kubernetes explicitly reset that value to unconfined. I reached out to sig-architecture folks offline before, and they are very worried about the backward compatibility issue.
And if we can somehow collect data and show that in X% of cases nothing is logged, meaning the default profile would not break things.
I like this idea. The hard part is of course getting the data; I have no idea how to pull that one off. Also, we'd have to first propose this change to the OCI spec and then probably implement it for at least one container runtime, right? Would that be OK to happen in the Beta part of the lifecycle?
When docker changed the default value, kubernetes explicitly reset that value to unconfined. I reached out to sig-architecture folks offline before, and they are very worried about the backward compatibility issue.
I see. Perhaps we could indeed just go with the "unconfined" profile as the default one (possibly replacing it with something like kube/logging later on). It seems that this might then rather be controlled by PSPs in a deny-rule manner, where we start with the assumption that everything is allowed by default and only cut privileges further from there. Having flag control over this may be useful for cases where PSPs are turned off, so that should go in too; having these two mechanisms used at once would probably be a bit messy, though.
I guess this is a bit of a different direction than originally considered - it goes against the work done in https://github.com/kubernetes/kubernetes/issues/39845 - but if we agree on the above, we should then think about the seccomp profiles we'll eventually ship. So far I'm seeing runtime/default, kube/default, kube/logging, along with the option to set the profile to unconfined. The rest is of course the ability to have localhost/.* profiles, which is already provided by the current implementation.
Would that be OK to happen in the Beta part of the lifecycle?
Sounds good to me. Though I think it helps to start the OCI spec proposal early.
Go with "unconfined" as the default for now sounds good to me. For kubernetes/kubernetes#39845, I added annotations to addons. And I don't think we can go any further.
So far I'm seeing runtime/default, kube/default, kube/logging, along with the option to set the profile to unconfined. The rest is of course the ability to have localhost/.* profiles, which is already provided by the current implementation.
Works for me. For kube/default, we may just start with docker/default.
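To make the naming grammar concrete, here is an illustrative parser for those profile strings; parseSeccompProfile is a hypothetical helper for this discussion, not the actual kubelet code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSeccompProfile is a hypothetical helper (not the real kubelet code)
// that splits the profile names mentioned above.
func parseSeccompProfile(profile string) (kind, path string, err error) {
	switch {
	case profile == "unconfined":
		return "unconfined", "", nil
	case profile == "runtime/default" || profile == "docker/default":
		return "runtime-default", "", nil
	case strings.HasPrefix(profile, "localhost/"):
		// localhost/ paths are resolved relative to the kubelet's
		// seccomp profile root directory.
		return "localhost", strings.TrimPrefix(profile, "localhost/"), nil
	default:
		return "", "", fmt.Errorf("unknown seccomp profile %q", profile)
	}
}

func main() {
	for _, p := range []string{"runtime/default", "localhost/audit.json", "unconfined"} {
		kind, path, _ := parseSeccompProfile(p)
		fmt.Println(p, "->", kind, path)
	}
}
```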
Logging seccomp actions is available with Linux kernel 4.14+ (seccomp doc).
My understanding is that this logs the action with the PID, not necessarily container-related info. So either auditd or some other daemon on the host will need to do a lookup/mapping for the log to be really useful...
And if we can somehow collect data and show that in X% of cases nothing is logged, meaning the default profile would not break things. Then we can propose changing logging to kill. The data collection part is tricky and can be a lot of work.
Didn't @jessfraz already do that when launching the docker default profile for docker? If that isn't a strong enough signal, it's going to be very difficult to collect a stronger one...
@tallclair you're right, I got kind of lost in all the issues' comments. For reference, here's the comment stating the Dockerfiles were checked: https://github.com/kubernetes/community/pull/660#issuecomment-303860107. I guess we could be safe having a "killing" default after all.
Hi
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more "stable" and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet
Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Enhancement issues opened in kubernetes/enhancements should never be marked as frozen.
Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.
/remove-lifecycle frozen
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Hello @stlaz @pweil- , I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet. This will also require a KEP to be included into 1.15
Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
no changes planned for 1.15
Hi @tallclair @pweil- @stlaz , I'm the 1.16 Enhancement Lead/Shadow. Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Once coding begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
As a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining the requirements for each alpha/beta/stable stage.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
I have the beginnings of a plan to bring it to GA, but it might be a stretch to get to it in 1.16. I'll try to get a proposal out by enhancements freeze though.
Hello @tallclair @pweil- , 1.17 Enhancement Shadow here! :wave:
I wanted to reach out to see if this enhancement will be graduating to alpha/beta/stable in 1.17.
Please let me know so that this enhancement can be added to 1.17 tracking sheet.
Thank you!
Friendly Reminder
A Kubernetes Enhancement Proposal (KEP) must meet the following criteria before Enhancement Freeze to be accepted into the release:
- The KEP must be merged in an implementable state
- All relevant k/k PRs should be listed in this issue
Yes, I plan to graduate this to stable in v1.17 - KEP here: https://github.com/kubernetes/enhancements/pull/1148
Hey @tallclair , I will add this enhancement to the tracking sheet to be tracked :+1:
Please see the message above for friendly reminders, and note that the KEP is currently in a provisional state. The KEP must be in an implementable state to be added to the 1.17 release.
/milestone v1.17
/stage stable
Hey @tallclair Could you please post links to the tests in testgrid and keep track of any tests added for this enhancement?
Thank you!
Will do. There are a bunch of seccomp tests already, but I can't find them on any dashboard tabs (is there any way to search across all testgrids for a specific test?)
https://github.com/kubernetes/kubernetes/blob/0956acbed17fb71e76e3fbde89f2da9f8ec8b603/test/e2e/node/security_context.go#L147-L177
@tallclair there isn't a good way to search across all of testgrid =/
My best guess (at least for the 4 you referenced) is that they aren't actually being included. :grimacing:
They look like they should be a part of the node-kubelet-features dashboard, but the job config for ci-kubernetes-node-kubelet-features has this for its test_args:
--test_args=--nodes=8 --focus="\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Serial\]"
The ginkgo tests themselves are tagged with [Feature:Seccomp] and the focus flag wouldn't match (see the quick check below).
I think we should just remove the feature tag once this moves to GA. I think seccomp is standard on Linux, so the [LinuxOnly] tag should be sufficient.
For the general problem of tests not being run, I filed https://github.com/kubernetes/test-infra/issues/14647
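To make the mismatch concrete, a quick check with Go's regexp package (ginkgo's focus matching is regex-based over the test description, so this approximates it; the test name string is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus regex from the ci-kubernetes-node-kubelet-features test_args above.
	focus := regexp.MustCompile(`\[NodeFeature:.+\]`)

	// Seccomp e2e tests are tagged [Feature:Seccomp], not [NodeFeature:...],
	// so the focus regex never selects them.
	testName := "[k8s.io] Security Context [Feature:Seccomp] should support seccomp runtime/default"
	fmt.Println(focus.MatchString(testName)) // prints: false
}
```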
Hey @tallclair , we're only 5 days away from the Enhancements Freeze (Tuesday, October 15, EOD PST). Another friendly reminder that to be able to graduate this in the 1.17 release, the KEP must be merged in and must be in an implementable state. Looks like the KEP is still open and in a provisional state.
Hey @tallclair, unfortunately the deadline for the 1.17 enhancement freeze has passed and it looks like the KEP is still open. I will be removing this enhancement from the 1.17 milestone.
Please note that you can file an enhancement exception if you need to get this in for 1.17
/milestone clear
Yeah, didn't make the cut. Hoping to get it into
/milestone v1.18
That sounds good! I'll mark this as deferred to v1.18 in the enhancement tracking sheet.
Hey :wave:, is there anything we can do to move this one forward? I'd be happy to contribute here as well as for the AppArmor issue.
Hey @tallclair
1.18 Enhancements team checking in! Are you planning on graduating to stable in 1.18? It looks like the KEP is still open.
The release schedule is as follows:
Enhance Freeze: January 28
Code Freeze: March 5
Docs Ready: March 16
v1.18 Release: March 24
As a reminder, the KEP needs to be merged, with the status set to implementable.
Thanks!
@saschagrunert thanks for the offer! I need to take another pass at the KEP to follow up from the API review I had with @liggitt. Once the KEP is approved, I'd welcome your help with the implementation.
I think the biggest open question on the KEP right now is how to handle the localhost profile type. Since we want to deprecate the feature (ideally in favor of something like https://github.com/kubernetes/enhancements/pull/1269, /cc @pjbgf ), I'd like to avoid putting a field for it in the API.
Hey @tallclair, any update on if this will make it into 1.18 or not? It's currently marked in the milestone but you haven't confirmed if we should track this or not.
Thanks!
v1.18 is seeming unlikely for this. I think we can bump to
/milestone v1.19
Great, thanks for the update @tallclair :)
Hi @tallclair -- 1.19 Enhancements Lead here, do you plan to graduate this Enhancement to stable this release?
No plans to graduate in v1.19. I have an open KEP, but won't be working on it this quarter. @pjbgf may pick it up in v1.20
@tallclair -- Thank you for the updates. :slightly_smiling_face:
/milestone v1.20
There was a slight change of plans on this one - as agreed on yesterday's sig-node meeting. This is now planned for:
/milestone v1.19
@pjbgf: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your SIG leads and have them propose you as an additional delegate for this responsibility.
In response to this:
There was a slight change of plans on this one - as agreed on yesterday's sig-node meeting. This is now planned for:
/milestone v1.19
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@palnabarun Do you mind setting this issue to milestone v1.19 please?
/assign pjbgf
/milestone v1.19
Thank you @pjbgf @tallclair for the update. I've updated the tracking sheet according to your plans.
Can you please let me know which graduation stage you are targeting and provide a link to the KEP?
Thank you! Appreciate all the efforts. :slightly_smiling_face:
The current release schedule is:
Targeting GA
KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190717-seccomp-ga.md
The KEP still has unresolved sections, with follow-up PR: https://github.com/kubernetes/enhancements/pull/1747
Hi @tallclair, thank you for the update. :+1:
I have updated the tracking sheet accordingly.
PS: I have updated the issue description with links to the KEP and the latest KEP update PR.
thank you @palnabarun for the update. :+1:
Hi @tallclair :wave: 1.19 docs shadow here! Does the enhancement work planned for 1.19 require new docs or modifications to existing docs?
Friendly reminder that if new docs or modifications are required, a placeholder PR against k/website (branch dev-1.19) is needed by Friday, June 12.
@annajung thanks for the heads up. Yes, there will be some modification to the seccomp docs.
Adding @hasheddan who is picking this up (https://github.com/kubernetes/kubernetes/issues/58211).
Great, thank you for the update! I will modify the tracking sheet accordingly.
Please let us know once a placeholder PR against k/website has been made. Thank you!
@pjbgf -- Can you please link to all the implementation PR's here - k/k or otherwise? :slightly_smiling_face:
The current release schedule is:
@palnabarun here's the current ones:
https://github.com/kubernetes/kubernetes/pull/91381
https://github.com/kubernetes/kubernetes/pull/91408
https://github.com/kubernetes/kubernetes/pull/91182
https://github.com/kubernetes/kubernetes/pull/91442
I also created an umbrella issue that contains all of them.
Hi @pjbgf @hasheddan Just a friendly reminder that a placeholder PR against k/website is due by Friday, June 12. Please let me know once you have made the PR, thank you!
@annajung thanks for the reminder! It will be open soon :+1:
Hi @pjbgf -- Thanks for creating the umbrella issue. :+1:
We are tracking the same. :slightly_smiling_face:
Hi @pjbgf -- just wanted to check in about the progress of the enhancement.
The release timeline has been revised recently, more details of which can be found here.
Please let me know if you have any questions. :slightly_smiling_face:
The revised release schedule is:
Thank you for the update @palnabarun. The code is mostly all done, but we are now waiting for a follow-up review. Overall, we are still looking good. :+1:
Hi @pjbgf @hasheddan , a friendly reminder of the next deadline coming up.
Please remember to populate your placeholder doc PR and get it ready for review by Monday, July 6th.
Hi @pjbgf @hasheddan :wave:, I see that there are still 3 pending action items in https://github.com/kubernetes/kubernetes/issues/91286 for implementation related changes and 1 pending action item for documentation. Do you think they will make it past the code freeze on Thursday, July 9th?
Thank you. :slightly_smiling_face:
Code Freeze begins on Thursday, July 9th EOD PST
@palnabarun docs PR is mostly ready, just adding a specific guide for seccomp. Already have a LGTM from @saschagrunert on the current changes. Thank you for keeping us on track here :)
Hi @hasheddan, thanks for the update above. Just a quick reminder to get your doc PR ready for review (Remove WIP/rebased/all ready to go) by EOD. Thank you!
@annajung will do! Thanks!
@hasheddan -- Thank you for the update. :smile:
@pjbgf -- I saw that in https://github.com/kubernetes/kubernetes/issues/91286 two core action items are yet to be merged and are not in the merge pool either. Do you think they will go in before the Code Freeze?
Thank you. :slightly_smiling_face:
@palnabarun we are trying to get it done before the code freeze; after all, it has been LGTM'd already. We are having some issues with some flaky tests atm.
Thank you for the heads up.
For clarity, the 2 PRs we are waiting to merge are:
https://github.com/kubernetes/kubernetes/pull/91408 and https://github.com/kubernetes/kubernetes/pull/92856
The latter (https://github.com/kubernetes/kubernetes/pull/92856) seems to be failing a verify check. According to https://github.com/kubernetes/kubernetes/pull/92856#issuecomment-655950700 this will require a rebase/repush/rereview before it can merge.
@kikisdeliveryservice thank you for the clarification. We are waiting for the flaky tests on https://github.com/kubernetes/kubernetes/pull/91408 to stop failing, once that is merged we can rebase the second PR that depends on it.
Hi @pjbgf :wave:, We are into the Code Freeze now.
Since https://github.com/kubernetes/kubernetes/pull/91408 is in the merge pool and https://github.com/kubernetes/kubernetes/pull/92856 requires a rebase over https://github.com/kubernetes/kubernetes/pull/91408 according to https://github.com/kubernetes/kubernetes/pull/92856#issuecomment-655950700, we feel that the best action here would be to file an exception request to get additional time to complete the second PR after the merge pool clears.
Removing the enhancement from the milestone for the time being.
Thank you!
Best,
Kubernetes v1.19 Release Enhancements Team
/milestone clear
Since kubernetes/kubernetes#91408 is in the merge pool and kubernetes/kubernetes#92856 requires a rebase over kubernetes/kubernetes#91408 according to kubernetes/kubernetes#92856 (comment), we feel that the best action here would be to file an exception request to get additional time to complete the second PR after the merge pool clears.
A rebase on an approved PR in the merge queue does not require an exception request. The PR was code complete and approved a full day before the deadline.
Hi @liggitt :wave:, thank you for your inputs. :+1:
We are going to include the enhancement back into the cycle. All our apprehensions were regarding the rebase. Since that is sorted, this is good to go. :slightly_smiling_face:
/milestone v1.19
@pjbgf @saschagrunert @hasheddan -- thank you for all your contributions. :100:
Thank you @palnabarun for the detailed tracking of the enhancement. We appreciate it! ๐
@saschagrunert the final PR kubernetes/kubernetes#92856 merged at last. Congrats! I will update the tracking sheet to reflect this.
@tallclair @pjbgf do you think we can close this issue now since seccomp is GA?
@saschagrunert We usually wait for the release to happen, then mark the corresponding KEP as implemented and then close the enhancement issue.
Please feel free to go ahead and stage a change to mark https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190717-seccomp-ga.md as implemented. :slightly_smiling_face:
@saschagrunert We usually wait for the release to happen, then mark the corresponding KEP as implemented and then close the enhancement issue. Please feel free to go ahead and stage a change to mark https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190717-seccomp-ga.md as implemented.
Thank you for the clarification, opened a PR in https://github.com/kubernetes/enhancements/pull/1932
The KEP has been updated to implemented (PR finally merged!)
Please feel free to close this issue @saschagrunert
Thank you all for all of your work!!
/milestone clear
/close
That's done.
@saschagrunert I can't remember if we discussed this before, but is there any plan to eventually clean up the annotations (i.e. remove support)? The primary motivation for doing so is so that 3rd party tools (e.g. gatekeeper, krail) don't need to know to check both the annotation & the field
@saschagrunert I can't remember if we discussed this before, but is there any plan to eventually clean up the annotations (i.e. remove support)? The primary motivation for doing so is so that 3rd party tools (e.g. gatekeeper, krail) don't need to know to check both the annotation & the field
Yes, this is planned for v1.23. This ties in with the warning mechanism (not yet done), which can be done after the necessary utility functions exist (ref https://github.com/kubernetes/kubernetes/issues/94626).
From the KEP:
To raise awareness of annotation usage (in case of old automation), a warning mechanism will be used to highlight that support will be dropped in v1.23. The mechanisms being considered are audit annotations, annotations on the object, events, or a warning as described in KEP #1693.
…
Since we support up to 2 minor releases of version skew between the master and node, annotations must continue to be supported and backfilled for at least 2 versions past the initial implementation. However, we can decide to extend support farther to reduce breakage. If this feature is implemented in v1.19, I propose v1.23 as a target for removal of the old behavior.
Do you prefer to reopen this issue until those bits are implemented as well?
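To illustrate the double-lookup burden for third-party tools, a hedged sketch in Go: effectiveSeccompProfile is hypothetical, and the precedence shown (field over legacy annotation) follows the KEP's intent rather than quoting the actual kubelet backfill code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// effectiveSeccompProfile is a hypothetical helper showing why tools must
// check two places today: the GA field wins when set, otherwise the
// deprecated annotation is consulted.
func effectiveSeccompProfile(pod *corev1.Pod) string {
	if sc := pod.Spec.SecurityContext; sc != nil && sc.SeccompProfile != nil {
		return string(sc.SeccompProfile.Type)
	}
	// Legacy annotation key, deprecated in favor of the field since v1.19.
	if v, ok := pod.Annotations["seccomp.security.alpha.kubernetes.io/pod"]; ok {
		return v
	}
	return "unconfined"
}

func main() {
	pod := &corev1.Pod{}
	pod.Annotations = map[string]string{
		"seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
	}
	fmt.Println(effectiveSeccompProfile(pod)) // prints: runtime/default
}
```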
Yeah, let's keep this open until the feature is fully wrapped up. Is there a k/k issue filed for the work you described?
Yeah, let's keep this open until the feature is fully wrapped up. Is there a k/k issue filed for the work you described?
Now we have one, there: https://github.com/kubernetes/kubernetes/issues/95171 :)