@luxas is present in the todo list
Yes! We'd love to get this working, I've just prioritized it lower than getting the other functionality implemented really well first. I've laid a lot of the groundwork in how the nodes are handled currently.
@fabriziopandini was going to sync up with me next week about up-streaming a working multi-node implementation :-)
/kind enhancement
/priority important-longterm
Hmm it seems you're not a member of kubernetes-sigs yet @fabriziopandini, if you join I'm happy to assign you as well :-)
Roughly, we discussed the need to:
Fabrizio demoed a very nice local "HA" cluster setup with his current patches, this should bring some very cool opportunities to cheaply test interesting cluster topologies when it's done 😄
If anyone else has thoughts or requests for this, I'd love to hear them!
that's great news.
i wish i was part of your meeting to get better context.
add support for tracking / recording the statuses of nodes (my initial thought: place a status record file in each node container, but please suggest options!)
adding state on disk is usually not the best of options, but it depends on what type of node status we want to store. we also have https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#ClusterStatus
hash out how flags / cli / configuration should look for this further, discuss this with you all and others
we can definitely share our thoughts after working on the kubeadm ComponentConfig for so many releases.
i wish i was part of your meeting to get better context.
yes, it was fairly unplanned, in the future I'd love to meet with you and others about this.
adding state on disk is usually not the best of options, ...
well, it's only "on disk" within the node containers, so it will be cleaned up with them and associated with them; it's not so different from, say, the logs.
but it depends on what type of node status we want to store. we also have https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#ClusterStatus
It needs to have the status of the node even before we've run kubeadm, from my understanding of @fabriziopandini's intent there.
we can definitely share our thoughts after working on the kubeadm ComponentConfig for so many releases.
I hoped so! I really want to ensure that it's easy for someone who "just needs kubernetes for testing" to use and understand and set the number of nodes etc., while allowing someone like you or Fabrizio to have full control over every step on every node.
well, it's only "on disk" within the node containers, so it will be cleaned up with them and associated with them; it's not so different from, say, the logs.
so it will also be read the same way as the logs, i guess.
It needs to have the status of the node even before we've run kubeadm, from my understanding of @fabriziopandini's intent there.
interested to find out what status such files would hold.
I hoped so! I really want to ensure that it's easy for someone that "just needs kubernetes for testing" to use and understand and set the number of nodes etc, while allowing someone like you or Fabrizio to have full control over every step on every node.
:+1:
xref this on ComponentConfig for configuring all this topology stuff ;) https://docs.google.com/document/d/1nZnzJD9dC0xrtEla2Xa-J6zobbC9oltdHIJ3KKSSIhk/edit
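As a rough illustration of the "easy by default, full control when needed" goal discussed above, a declarative cluster config might let users list node roles directly. This is a hypothetical sketch only; the kind, field names, and apiVersion are assumptions for discussion, not a settled format.

```yaml
# Hypothetical multi-node cluster config sketch (names illustrative).
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha1
nodes:
- role: control-plane
- role: worker
- role: worker
```

Someone who "just needs kubernetes for testing" could omit the file entirely for a single-node default, while power users tune each node's entry.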
Overall the direction of the thread sounds great to me!
/assign
I will follow up with a dedicated issue for each topic ASAP; this issue remains as an umbrella for general discussion of multi-node.
/lifecycle active
We have this now ref #164, thanks to @fabriziopandini :-)