The current implementation of kubeadm upgrade relies on the kubeadm-config configMap created at kubeadm init time.
This configMap, which is the serialization of the master configuration file, contains two kinds of information: settings shared by the whole cluster, and settings specific to a single node.
To make kubeadm upgrade work in an HA scenario with more than one master, the management of the second group of information should be improved by adding the capability to track information specific to each master node (e.g. more than one nodeName).
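For illustration, a minimal client-go sketch of reading that configMap; this is not kubeadm's actual upgrade code, and the admin.conf path and the MasterConfiguration data key reflect the kubeadm versions under discussion here:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the admin kubeconfig that kubeadm init writes.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Newer client-go versions also take a context.Context as the first Get argument.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get("kubeadm-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The serialized master configuration that upgrades depend on.
	fmt.Println(cm.Data["MasterConfiguration"])
}
```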
The current issue is described by this comment:
https://github.com/kubernetes/kubeadm/issues/546#issuecomment-365063404
@fabriziopandini I'm interested in picking this up but I'm a bit confused about how it would work / what exactly is needed.
NodeName is the only node-specific master attribute that I can identify - at least unless we wanted to support asymmetric configs across masters for some reason. What am I missing?
My understanding is that for kubeadm upgrade, NodeName is only used to find the control plane static pods belonging to the current node. How will we identify which master/node in the MasterConfiguration is the current node that kubeadm is running on? We can't assume that NodeName == hostname (which is the default if NodeName isn't provided) as that breaks things.
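To make that NodeName usage concrete: the kubelet mirrors each control plane static pod into the API as `<component>-<nodeName>` in kube-system, which is what an upgrade has to look up. A hedged sketch, where the helper name and error handling are illustrative (imports and the client-go Get signature as in the previous snippet):

```go
package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// findControlPlanePods shows how a NodeName maps to the mirror pods of the
// node's control plane static pods in the kube-system namespace.
func findControlPlanePods(client kubernetes.Interface, nodeName string) {
	for _, component := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		podName := fmt.Sprintf("%s-%s", component, nodeName)
		if _, err := client.CoreV1().Pods("kube-system").Get(podName, metav1.GetOptions{}); err != nil {
			// If nodeName doesn't match what the kubelet registered
			// (e.g. a NodeName that isn't the hostname), this lookup fails.
			fmt.Printf("static pod %s not found: %v\n", podName, err)
		}
	}
}
```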
@mattkelly thanks for helping on this issue!
I'm happy to share my personal opinion on how it would work / what exactly is needed, but please consider that it is necessary to get wider consensus before starting to write a PR for this issue.
The key elements of my idea are:
- kubeadm join --master as a way for adding new masters; this gives us the opportunity to have strict control on node-specific parameters (and implicitly ensure consistency among all the other settings)
- when kubeadm init or kubeadm join --master are executed, kubeadm should identify some kind of machine UID, and then store node-specific data in a dedicated configMap named kubeadm-config-machineUID (the same information will be stripped from the shared kubeadm-config configMap)
- at kubeadm upgrade, as a first step, kubeadm should retrieve and merge the kubeadm-config configMap and the kubeadm-config-machineUID configMap for the current machine (a rough sketch of this merge follows below)

Details are still TBD but let's discuss them in the breakout session or in slack if there is consensus on the approach 😉
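A hypothetical sketch of that upgrade-time merge step; the kubeadm-config-machineUID name and the override order are only the naming floated in this proposal, nothing here is implemented:

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// loadMergedConfig fetches the shared and per-machine configMaps proposed
// above and merges them, letting node-specific keys override shared ones.
func loadMergedConfig(client kubernetes.Interface, machineUID string) (map[string]string, error) {
	shared, err := client.CoreV1().ConfigMaps("kube-system").Get("kubeadm-config", metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	perNode, err := client.CoreV1().ConfigMaps("kube-system").Get("kubeadm-config-"+machineUID, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	merged := make(map[string]string, len(shared.Data)+len(perNode.Data))
	for k, v := range shared.Data {
		merged[k] = v
	}
	for k, v := range perNode.Data {
		merged[k] = v // node-specific data wins over shared data
	}
	return merged, nil
}
```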
I'm not yet qualified to really comment on whether that general approach would be acceptable, but it does seem reasonable to me. We already require a unique MAC address and product_uuid for each node, so we do have potential sources for UIDs on each master.
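As a sketch of those two UID sources: the DMI path below is the same one kubeadm's preflight checks read for product_uuid, but the fallback order and the helper name are assumptions, not an agreed design:

```go
package sketch

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// machineUID tries product_uuid first, then falls back to the first
// non-loopback MAC address. Reading product_uuid usually requires root.
func machineUID() (string, error) {
	if b, err := os.ReadFile("/sys/class/dmi/id/product_uuid"); err == nil {
		return strings.TrimSpace(string(b)), nil
	}
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", err
	}
	for _, ifc := range ifaces {
		if ifc.Flags&net.FlagLoopback == 0 && len(ifc.HardwareAddr) > 0 {
			return ifc.HardwareAddr.String(), nil
		}
	}
	return "", fmt.Errorf("no machine UID source available")
}
```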
I agree, let's discuss more at the next breakout session (and people can continue to comment here) before I go off and start implementing.
@mattkelly feel free to add a new section for next week with an agenda item 🙂
https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.48xxo9690nfd
> When kubeadm init or kubeadm join --master are executed, kubeadm should identify some kind of machine UID, and then store node-specific data in a dedicated configMap named kubeadm-config-machineUID (the same information will be stripped from the shared kubeadm-config configMap)
Is there any overlap between this and dynamic kubelet config?
@kargakis I don't think there is an overlap. Kubeadm applies the same dynamic kubelet config to all nodes, so IMO for the scope of this discussion the dynamic kubelet config is not a node-specific configuration.
We had a long discussion on this during last week's call. I think the path forward was a simple proposal, which can be linked here, as well as a prototype which could help determine whether a single or multiple configMaps makes more sense.
@timothysc yup, sounds good to me. I wasn't sure if you would have further comments after reviewing the ticket more in-depth. I'll have something out for review within a few days.
/cc @liztio
This will be added as one of the requirements on the config KEP. It's also listed in the kubeadm office hours notes for 2018-04-18.
/assign @detiber @chuckha @rdodev
We need to go through an update to the docs to use the control-plane join flow.
@timothysc IMO this issue should be closed as soon as https://github.com/kubernetes/kubernetes/pull/67944 merges
pinging @fabriziopandini and @timothysc for status.