_Please keep this description up to date. This will help the Enhancement Team efficiently track the evolution of the enhancement._
/sig storage
/sig node
/sig scalability
@saad-ali
/stage alpha
@palnabarun - KEP has just been approved as implementable; can we start tracking it?
Hi @wojtek-t , the 1.18 Enhancements team will reach out when the release cycle for 1.18 begins to flip the tracked status and set the milestone.
Thank you for the updates on this enhancement.
@wojtek-t Thanks for the update. We'll track this for 1.18.
For your awareness, the release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released
Please make sure all the k/k PRs link here so we can track them.
/milestone v1.18
Hello, @wojtek-t I'm 1.18 docs lead.
Does this enhancement work planned for 1.18 require any new docs (or modifications to existing docs)? If not, can you please update the 1.18 Enhancement Tracker Sheet (or let me know and I'll do so)?
If so, just a friendly reminder that we're looking for a PR against k/website (branch dev-1.18) due by Friday, Feb 28th; it can just be a placeholder PR at this time. Let me know if you have any questions!
Yes, we will add some docs - I will open the PR by the deadline.
Hey @wojtek-t, code freeze is March 5. Please link any PRs that are needed to complete this for 1.18, so we can track them in the release team. Thanks!
We seem to be code-complete, unless we find some issue. So only docs are missing.
Hello @wojtek-t
We are close to the docs placeholder PR deadline, i.e. less than a week left for the docs placeholder PR against the dev-1.18 branch. Having a placeholder PR in place will definitely help us track enhancements much better.
Thanks! :)
I opened https://github.com/kubernetes/website/pull/19297 - it's the only remaining thing for Alpha
We seem to be complete for Alpha.
I didn't see it discussed in alternatives, but I still would like this supported for all resource types:
https://github.com/kubernetes/kubernetes/issues/10179
> I didn't see it discussed in alternatives, but I still would like this supported for all resource types:
> kubernetes/kubernetes#10179
We've been thinking about generalizing it too (@thockin explicitly suggested it), but the conclusion was that the particular semantic we need isn't particularly useful for other resources. The problem is that to achieve the second goal (the scalability aspects), we need to ensure that the contents will never change (otherwise we would still need to track the object).
For preventing accidental modification/deletion, I think we want to allow people to do both, as long as they explicitly change the "protection" bit (especially for deletion). I agree that would also be useful, but I think it's complementary to this more-targeted feature.
I agree that this should be mentioned in the KEP - will add it in the next couple days.
> If set, the machinery in apiserver will reject any updates of the object trying to change anything different than ObjectMetadata.

and

> We will only reject requests that are explicitly changing keys and/or values stored in Secrets and/or ConfigMaps.
don't seem to say exactly the same thing. My concern is that it's not super clear whether the `Immutable` field itself is immutable.
> My concern being that it's not super clear if the Immutable field itself is immutable.
Will make it more explicit in the KEP.
The `Immutable` field:

> So you may always mark a mutable Secret/ConfigMap as immutable, but you can't revert that decision.
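The one-way transition described above can be illustrated with a minimal manifest (the name and data here are illustrative, not from the KEP):

```yaml
# Illustrative ConfigMap marked as immutable.
# Once immutable is set to true, the apiserver rejects any update that
# changes data/binaryData, and the field itself cannot be set back to false.
# A mutable ConfigMap (immutable unset or false) may later be marked
# immutable, but not the other way around.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # illustrative name
data:
  key: "value"
immutable: true
```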
@palnabarun - can you please add it to tracking - we're going to promote it to Beta in 1.19.
@wojtek-t Thank you for the update. I have updated the details on the tracking sheet. :+1:
/stage beta
/milestone v1.19
Hello, @wojtek-t ! :wave: I'm one of the v1.19 Docs shadows. Does this enhancement work planned for v1.19 require any new docs (or modifications to existing docs)?
I saw that this was merged in March: https://github.com/kubernetes/website/pull/19297. I guess that this is the docs needed for this enhancement.
Regards,
Mikael.
@mikejoh - opened https://github.com/kubernetes/website/pull/21189
@wojtek-t Awesome, thanks!
Hi @wojtek-t !
As a reminder that the Code Freeze is June 25th. Can you please link all the k/k PRs or any other PRs that should be tracked for this enhancement?
Thanks!
The 1.19 Enhancements Team
kubernetes/kubernetes#89594 + kubernetes/perf-tests#1146 (already merged and linked above) are all we needed - so we're already code complete
Hi @wojtek-t just FYI - the description contains a link to the KEP which 404s - https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20191117-immutable-secrets-configmaps.md
I think you may need to change that URL to https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1412-immutable-secrets-and-configmaps/README.md
@alexellis - done
Thank you
Hi @wojtek-t !
This isn't super applicable to you as you are already code complete, but to follow-up on the email sent to k-dev today, I wanted to let you know that Code Freeze has been extended to Thursday, July 9th. You can see the revised schedule here: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.19
Please let me know if you have any questions. :smile:
Best,
Kirsten
/milestone clear
(removing this enhancement issue from the v1.19 milestone as the milestone is complete)
Hi @wojtek-t
Enhancements Lead here. Just confirming that this KEP does not have intended work for 1.20? I see that you are aiming at 1.21 for stable - can you confirm?
Thanks!
Kirsten
Yes - no plans for 1.20.
@wojtek-t If you have a cluster with a large number of big unconsumed Secrets, this won't help with the initial list that kubelet does to start its watches, right?
I was looking at large memory spikes when heavy users of helm restart many kubelets at once (https://github.com/helm/helm/issues/8977) and was hoping that making the releases helm stores immutable would help, but since nobody watches them, it looks like it would only affect correctness, not apiserver load.
> @wojtek-t If you have a cluster with a large count of big unconsumed secrets this won't help the initial list that kubelet does to start it's watches right?
I think I'm not fully following. When the kubelet observes a new pod (or an update of an existing one), it does a get (technically a list with a field selector on metadata.name, which translates to a get) for each not-yet-watched Secret. Each of these is pretty cheap on its own, as it basically returns a single item.
The kubelet is NOT listing/watching anything that none of its own pods mount.
I didn't read the whole of helm/helm#8977, but the initial analysis doesn't seem correct because of the above. You would need to look at what kinds of requests are hitting kube-apiserver to understand better.
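To make the access pattern above concrete, a sketch of what the kubelet's per-Secret request looks like at the API level (namespace and name are illustrative; the exact query parameters may differ by version):

```
# "list" narrowed to a single object via a field selector - served like a get:
GET /api/v1/namespaces/default/secrets?fieldSelector=metadata.name%3Dmy-secret

# followed by a watch scoped to that same single object:
GET /api/v1/namespaces/default/secrets?fieldSelector=metadata.name%3Dmy-secret&watch=true
```

Because each request is scoped to one object, the cost scales with the number of Secrets actually mounted by that node's pods, not with the total number of Secrets in the cluster.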
Hmm, my assumption was that the kubelet was doing a list/watch on all Secrets at startup so it could update any mounted Secret for its pods. Let me confirm what we're seeing in clusters with heavy large-Secret usage at kubelet startup.
Yeah - that's not true. The kubelet is not listing everything - it's getting/watching them one-by-one, and only those that are used by the pods it's running.