EDIT: this issue is tracking the potential to persist configuration options from the old coredns deployment after upgrade.
this PR made it possible to persist the replica count:
https://github.com/kubernetes/kubernetes/pull/85837
FEATURE REQUEST

kubeadm version (use kubeadm version):
Environment:
- Kubernetes version (use kubectl version): 1.16.3
- OS / kernel (uname -a): n/a

After cluster creation with the default replica count set, scaling the coredns replicas up and then performing an upgrade caused the replicas to be reset to the default.
What happened?
dns replicas set back to kubeadm default

After a default kubeadm init, scaling the replicas up with the following command

kubectl -n kube-system scale --replicas=15 deployment.apps/coredns

and then performing an upgrade with

kubeadm upgrade apply

resets the replicas back to the default. Along with persisting this setting during upgrades, it would also be nice to have it configurable during init.
What you expected to happen?
dns replicas set back to kubeadm default
IIUC, you expect the opposite: dns replicas _not_ set back to kubeadm default.
Regarding persistence through upgrades: I _think_ that if the coredns deployment yaml used by kubeadm did not contain a replica count, the number of replicas would not be reset when upgrading (assuming kubeadm does the equivalent of kubectl apply).
@pickledrick as i've mentioned on the PR, v1beta2 is locked for new features.
this is the tracking issue for v1beta3 https://github.com/kubernetes/kubeadm/issues/1796
also @ereslibre is working on a document related to how to handle coredns Deployment issues.
instead of modifying the API i proposed to @pickledrick to allow upgrades to keep the existing replica count instead of redeploying with 2.
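in practice that could look roughly like the sketch below (hypothetical helper and package name, not the code that ended up in the PR): read the replica count of the coredns Deployment that is already running and reuse it instead of the hard-coded 2.

```go
// Hypothetical sketch, not the actual kubeadm implementation: look up the
// replica count of the CoreDNS Deployment that is already running so an
// upgrade can reuse it instead of the hard-coded default.
package dnsaddon // assumed package name

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func currentCoreDNSReplicas(client kubernetes.Interface, defaultReplicas int32) (int32, error) {
	d, err := client.AppsV1().Deployments(metav1.NamespaceSystem).Get(
		context.TODO(), "coredns", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// nothing deployed yet, fall back to the kubeadm default
		return defaultReplicas, nil
	}
	if err != nil {
		return 0, err
	}
	if d.Spec.Replicas != nil {
		return *d.Spec.Replicas, nil
	}
	return defaultReplicas, nil
}
```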
cc @rajansandeep @chrisohaver WDYT?
... allow upgrades to keep the existing replica count instead of redeploying with 2
@neolit123, sounds fine to me. One very simple way of doing this could be to remove the replica count line from the coredns Deployment yaml.
Commenting here instead of the PR. I am a bit opposed to having a replica count field in the kubeadm config. This is specific to the DNS deployment and not to kubeadm itself.
Also, users who are used to using kubectl to scale the deployment will have to update the ClusterConfiguration too.
What we should do instead is to base our updated deployment on the one currently in use in the cluster (in effect, patch it).
That way we won't touch the replicas, annotations, labels or anything else that users have modified in their DNS deployments.
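A rough illustration of that idea, with assumed names and with the image as the only field the upgrade changes (a simplification, a real upgrade touches more than that): start from the live Deployment, modify only what kubeadm owns, and update that object, so replicas, labels and annotations added by users ride along untouched.

```go
// Rough sketch of "base the updated deployment on the one currently in use":
// fetch the live object and change only what the upgrade has to change
// (just the image here, as a simplified example), leaving user-tuned fields alone.
package dnsaddon // assumed package name

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func upgradeCoreDNSInPlace(client kubernetes.Interface, newImage string) error {
	deployments := client.AppsV1().Deployments(metav1.NamespaceSystem)
	current, err := deployments.Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	updated := current.DeepCopy()
	for i := range updated.Spec.Template.Spec.Containers {
		// only the pieces kubeadm owns get replaced; replicas, labels and
		// annotations set by the user stay exactly as they are
		updated.Spec.Template.Spec.Containers[i].Image = newImage
	}
	_, err = deployments.Update(context.TODO(), updated, metav1.UpdateOptions{})
	return err
}
```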
I am a bit opposed to having a replica count field in the kubeadm config. This is specific to the DNS deployment and not to kubeadm itself.
I also think we should not include the replica count in the kubeadm config.
What we should do instead is to base our updated deployment on the one currently in use in the cluster (in effect, patch it).
That way we won't touch the replicas, annotations, labels or anything else that users have modified in their DNS deployments.
Yes, and I'll follow up with the document about that. There are slightly different approaches, but all of them should patch and not override existing deployment settings. I didn't have the time yet, but we should be able to do something along these lines.
Hi Everyone,
Thanks for the comments; I have taken another go at this given the feedback on the related PR.
The idea is to check whether a DNS deployment already exists and, if so, leave it untouched (unless switching between the kube-dns and CoreDNS types), without making changes to the API.
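A hedged sketch of that check (assumed names, not the code from the PR) could look like this:

```go
// Hypothetical sketch of the check described above: only (re)deploy the DNS
// addon when a Deployment of the desired type is not already present.
package dnsaddon // assumed package name

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func shouldDeployDNS(client kubernetes.Interface, wantCoreDNS bool) (bool, error) {
	name := "kube-dns"
	if wantCoreDNS {
		name = "coredns"
	}
	_, err := client.AppsV1().Deployments(metav1.NamespaceSystem).Get(
		context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// either nothing is deployed yet or the cluster is switching DNS
		// types, so deploy (or redeploy) the addon
		return true, nil
	}
	if err != nil {
		return false, err
	}
	// a deployment of the same type already exists: leave it untouched
	return false, nil
}
```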
keeping this issue open to track the potential to persist more fields than just the replica count:
https://github.com/kubernetes/kubeadm/issues/1943#issuecomment-563157198
/assign @rajansandeep @ereslibre
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
let's log separate tickets for extra options that should persist.
/close
@neolit123: Closing this issue.
In response to this:
let's log separate tickets for extra options that should persist.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.