API PRs:
/cc @tallclair
/sig auth
/milestone v1.12
/kind feature
/stage alpha
/assign @pbarker
Hey there! @pbarker I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect it?
@zparnold will do. To be clear, do I write the docs there, or is that something the docs folks do?
You'll be writing the docs, and we'll be making sure that it matches our style and clarity. (Mostly because we may not understand the feature as well as you do at present.) If you need help, please just let me or @jimangel know
ok here is the starter issue https://github.com/kubernetes/website/pull/9947
Thank you! I'll mark it on the FT spreadsheet!
@pbarker --
Any update on docs status for this feature? Are we still planning to land it for 1.12?
At this point, code freeze is upon us, and docs are due on 9/7 (2 days).
If we don't hear anything back regarding this feature ASAP, we'll need to remove it from the milestone.
cc: @zparnold @jimangel @tfogo
@justaugustus unfortunately this one has gotten caught up in the review process and will not be making 1.12
/milestone v1.13
@justaugustus I don't have the power to change the milestone, but we'll be targeting 1.13
/milestone v1.13
/milestone v1.13
(btw, pretty sure you have milestone powers here, @liggitt, but we hadn't turned the bot on until https://github.com/kubernetes/test-infra/pull/9252 merged)
@pbarker how confident are we of this going to Alpha in 1.13? Are the above 2 PRs the only pending ones?
Hey @AishSundar, there are 3 open PRs:
https://github.com/kubernetes/kubernetes/pull/67547: the API; about done, but hung up on one last question.
https://github.com/kubernetes/kubernetes/pull/67257: the implementation; mostly waiting on the API to finish.
https://github.com/kubernetes/kubernetes/pull/69902: a small one for an integration test.
This slipped in 1.12, and we certainly intend to make 1.13. If there is anything you all can do to help wrangle people toward the goal, that would be very appreciated.
Thanks @pbarker we will track the PRs listed above. Is there a doc PR associated with this feature? @tfogo as FYI
Thank you @AishSundar docs PR is here https://github.com/kubernetes/website/pull/9947. Also a quick update, the implementation got split into two pieces https://github.com/kubernetes/kubernetes/pull/67257 and https://github.com/kubernetes/kubernetes/pull/70021 to make it more digestible.
@tallclair is the main reviewer on this, but he will be at KubeCon next week, so I wanted to give a quick update on where we are. The plugins PR kubernetes/kubernetes#70021 is very close to merging; we should have that done by EOW. Changes made to the plugins piece have made it possible to simplify the handlers (kubernetes/kubernetes#67257); those changes should be out today. Finally, there is an integration test https://github.com/kubernetes/kubernetes/pull/69902. We are coordinating to have a reviewer fill in for @tallclair while he's out for the last two PRs.
Hi @tallclair @pbarker, I'm an enhancements shadow checking in on how this issue is tracking. The last comment stated it was close to merging. Code slush is on 11/9 and code freeze is coming up on 11/15; do you still feel confident this will make those dates?
@pbarker we are already in slush and fast approaching code freeze for 1.13 this Friday, 11/16. Do you think this feature is still on track for 1.13?
Are kubernetes/kubernetes#67257 and kubernetes/kubernetes#69902 the only PRs we are tracking? I am more concerned about the latter merging soon. We would like all pending work to be completed early this week, giving us at least a couple of days to watch and stabilize CI. Do you think this is feasible?
Hey @AishSundar and @claurence, we just merged the big piece on Friday. The next PR, https://github.com/kubernetes/kubernetes/pull/67257, isn't that large, and I've already had a number of people look it over. This should go quickly; then we just need the integration test https://github.com/kubernetes/kubernetes/pull/69902, which should be very easy as well. I think we're on track, but I would be happy to take any help keeping these moving. Thank you!
Thanks @pbarker, I added some missing labels to the PRs (we need /sig, kind, priority, and milestone labels to merge). Let me check back on Wednesday for the latest status and Go/No-Go for 1.13. Thanks!
@pbarker I see kubernetes/kubernetes#69902 is still in active review. Any ETA for getting this in by tomorrow?
Hey @AishSundar, @liggitt has it in his queue, but he doesn't see it as a blocker for 1.13. I think we can stop tracking that here.
Just to clarify: this enhancement is still on for 1.13, and it's just kubernetes/kubernetes#69902 that's dropped from having to go in for 1.13?
So except for docs this is complete?
correct
Hi there, I am a bit confused about part of this feature; if someone could please help clarify, that would be great!
I saw that there are 2 types of Policy objects: one for the static backend and another defined in the scope of this feature. Although they have some similarities, they are still different. I also noticed this was mentioned in the docs:
The AuditSink policy differs from the legacy audit runtime policy.
This is because the API object serves different use cases.
The policy will continue to evolve to serve more use cases.
I thought that the goal of this feature was to define the entire backend (including the policy) in a dynamic manner, so if the static policy and the dynamic one are different, how can this be achieved?
Are they planned on being identical at some point?
Or if they actually do serve different purposes - what are those purposes?
I hope my question was clear enough, please let me know if I need to clarify further.
Thanks!
Hey @omri86, the original goal was just to take the static definitions and make them available through the API. In the API review https://github.com/kubernetes/kubernetes/pull/67547, we decided it would be a good time to rethink how policy was handled. The outcome of those talks is https://github.com/kubernetes/kubernetes/pull/71230, which is a more composable way of handling policy; I'm in the middle of making an out-of-tree implementation here: https://github.com/pbarker/audit-lab/tree/policy. There is further discussion needed on the matter that will likely happen after winter break. Let me know if there's anything else I can clarify!
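For anyone following along, here is a minimal sketch of what registering a dynamic sink looks like with the alpha auditregistration.k8s.io/v1alpha1 API that shipped in 1.13. The sink name and webhook URL are placeholders, and the policy section is the part being rethought in https://github.com/kubernetes/kubernetes/pull/71230:

```yaml
# Minimal AuditSink sketch (auditregistration.k8s.io/v1alpha1, alpha in 1.13).
# The sink name and webhook URL below are placeholders.
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: example-sink
spec:
  policy:
    # The alpha policy is intentionally simpler than the static audit policy file:
    # a single audit level plus the stages at which events are recorded.
    level: Metadata
    stages:
    - ResponseComplete
  webhook:
    throttle:
      qps: 10
      burst: 15
    clientConfig:
      url: "https://audit.example.com/events"
```

As I understand it, trying this against a 1.13 cluster also means enabling the DynamicAuditing feature gate, passing --audit-dynamic-configuration to the apiserver, and turning the API group on with --runtime-config=auditregistration.k8s.io/v1alpha1=true.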
@pbarker Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and I want to remind you that all enhancements for 1.14 need to have a KEP. It looks like the KEP for this enhancement is here: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/0014-dynamic-audit-configuration.md - let me know if that is not the correct KEP for this enhancement.
@pbarker 1.14 release enhancements shadow here. We are less than a week out from 1.14 enhancements freeze. Friendly ping on @claurence comment above.
Hey, sorry @claurence, I somehow missed your ping. This feature is in alpha in 1.13; I didn't know if this issue should be kept open as we track it toward beta, or if we should open a separate one. WDYT?
@pbarker I think it gets left open until it graduates to stable based on what other issues do - is it targeting beta for 1.14 or should it be removed from the 1.14 milestone? Thanks
ok cool thanks @claurence we will be targeting 1.14 for beta
@pbarker looking over the KEP for this enhancement I don't see any testing plans - can someone help PR in testing plans for this enhancement? This information is helpful for knowing readiness of this feature for the release and is specifically useful for CI Signal.
If we don't have testing plans this enhancement will be at risk for being included in the 1.14 release
@claurence what do you mean by a testing plan? We have an integration test and an e2e test in the works; is there a document I'm missing?
@pbarker yes e2e test and integration tests are what we are looking for - when you have those can you PR them into the KEP?
will do
Hi All,
We've entered code freeze and it looks like the two PRs y'all have in the description are still open: kubernetes/kubernetes#73981
kubernetes/kubernetes#73547
Because they are open and don't have "LGTM" and "Approve" labels, we are removing this enhancement from the 1.14 release. If you feel this was done in error, please talk with your relevant SIG or come to the sig-release Slack channel to discuss.
/milestone v1.15
@pbarker we're doing a KEP review for enhancements to be included in the Kubernetes v1.15 milestone. After reviewing your KEP, it's currently missing test plans which is required information per the KEP Template. This was also noted by @claurence above during the 1.14 release cycle. Please update the KEP to include the required information before the Kubernetes 1.15 Enhancement Freeze date of 4/30/2019.
@kacole2 this KEP was made before those new requirements, and the integration and e2e tests are already merged to master; is it necessary to retroactively add them here? I understood the KEP to be a design document, not an ongoing sort of thing (unless that's changed)?
I'll defer to @claurence on what she wants to see since she is the lead for this release. Thank you for the quick response.
Hey @pbarker Just a friendly reminder we're looking for a PR against k/website (branch dev-1.15) due by Thursday, May 30. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!
@pbarker The placeholder PR against k8s.io dev-1.15 is due Thursday May 30th.
Hi
Should we be tracking more than:
kubernetes/kubernetes#73981
kubernetes/kubernetes#73547
If you know this will slip, please reply back and let us know. Thanks!
Hi @pbarker, today is code freeze for the 1.15 release cycle. The k/k PRs have not yet been merged (https://github.com/kubernetes/kubernetes/pull/73547 https://github.com/kubernetes/kubernetes/pull/73981). It's now being marked as At Risk in the 1.15 Enhancement Tracking Sheet. Is there a high confidence these will be merged by EOD PST today? After this point, only release-blocking issues and PRs will be allowed in the milestone with an exception.
Yeah, this is unfortunately going to slip another release; we'll get it done in the next one. Thanks @kacole2
We would like to see this issue make progress in the 1.16 cycle. Are there any blockers beyond review bandwidth?
Hi @pbarker I'm the 1.16 Enhancement Shadow. Is this feature going to be graduating to alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet. If it's not graduating, I will remove it from the milestone and change the tracked label.
Milestone dates are Enhancement Freeze 7/30 and Code Freeze 8/29.
Thank you.
@pbarker there wasn't any notification if this was going to make it in 1.16. Can you let me know if this should be tracked or not?
Hi @pbarker, I'm the v1.16 docs release shadow.
Does this enhancement require any new docs (or modifications)?
Just a friendly reminder we're looking for a PR against k/website (branch dev-1.16) due by Friday, August 23rd. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!
Thanks!
@pbarker @tamalsaha
Enhancement Freeze has passed for 1.16. I never received a response on whether this will be graduating a stage in 1.16 or whether work is being done. This is being removed from the 1.16 milestone. If you would like it re-added, please file an exception; it will require approval from the release lead.
/milestone clear
@kacole2, I believe @liggitt is the right person to contact on this feature. Please see his recent response: https://github.com/kubernetes/kubernetes/pull/71230#issuecomment-511535527
My reading of @liggitt's comment is that this feature is back to the KEP phase.
Hey there @liggitt @pbarker -- 1.17 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to beta or whether new features will land in 1.17?
The current release schedule is:
If you do, I'll add it to the 1.17 tracking sheet (https://bit.ly/k8s117-enhancements). Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
Thanks!
@pbarker @tallclair @liggitt what is the status of this enhancement? Reading the previous comments makes me wonder whether this feature is going to beta at all. I see this feature as very useful and would like to see it in beta. Could we push this forward and get it into 1.18?
There was already a PR to move this enhancement to beta in 1.15, but it was closed. Could we just reopen it (and rebase)?
I agree this feature needs a roadmap. I would say that at the moment, its future is unknown. Can you share a little more about why you think it's useful, and how you plan to use it?
VMware bought Heptio and this got put on the back burner as we integrated. We are now coming back around to it with more resources and will probably begin work on it early next year.
I have been quietly following and waiting for this feature (well, I pinged pbarker a few times since he was working on it earlier this year). We build a few k8s operators, and we need this feature to do reliable and trusted usage-based billing.
@tallclair , I thought you were working on it https://groups.google.com/forum/#!msg/kubernetes-sig-auth/Ha0C4cladCQ/JhAaOKBhFAAJ .
We don't have the resources to contribute code upstream, but I would be more than happy to contribute by reading design docs / KEPs. How can I do that?
More eyes on designs & KEPs are definitely helpful. Watching this issue, attending sig-auth, and following the mailing list are probably the best ways to keep up.
Unfortunately I don't have time to be actively working on this outside of design reviews.
My interest in seeing this go to beta is to get parity with the ability to update the config dynamically, as is supported today for admission webhooks (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
For conformance, we are currently unable to dynamically audit which components/tests are hitting which API endpoints without working around Kubernetes, or setting the cluster up that way by default (which we're unable to do for hosted providers who don't expose alpha options). The goal would be to install a pod which registers itself as an audit webhook automagically, if installed by a user with sufficient privileges.
It's unclear to me whether PRs/KEPs like https://github.com/kubernetes/enhancements/pull/1259 imply there is more work to be done refining the config before enabling dynamic configuration.
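For the conformance use case above, the self-registration step would presumably just be the in-cluster pod creating an AuditSink that points back at its own service. A hedged sketch, assuming the alpha clientConfig supports a service reference the way admission webhooks do; every name here (conformance-audit, sonobuoy, /events, the caBundle) is a placeholder, not something that exists today:

```yaml
# Hypothetical sketch: an AuditSink a privileged conformance pod could create
# to receive audit events in-cluster. All names below are placeholders.
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: conformance-audit
spec:
  policy:
    level: Metadata
    stages:
    - ResponseComplete
  webhook:
    clientConfig:
      # Route events to a Service fronting the conformance pod instead of an
      # external URL; the caBundle must match the service's serving certificate.
      service:
        namespace: sonobuoy
        name: conformance-audit
        path: /events
      caBundle: "<base64-encoded CA bundle for the serving certificate>"
```

This only works if the cluster has the alpha API enabled, which is exactly the limitation called out above for hosted providers.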
@pbarker Enhancements shadow for 1.18 here. Are you targeting anything for this in 1.18? We need to track it if so. The release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released
Nothing planned for 1.18 per @liggitt
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
FYI, we'll be further discussing 2 potential paths forward for this feature at the next sig-auth meeting (2020-04-29):
Hey there @pbarker, 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?
In order to have this be part of the release:
The current release schedule is:
If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
Thanks!
We're currently discussing the future of this feature in sig-auth, but no changes are planned for v1.19.
@tallclair Thank you very much for following up. I've updated the tracking sheet accordingly.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
This feature was removed in 1.19 (details in https://groups.google.com/g/kubernetes-sig-auth/c/aV_nXpa5uWU)