Kind: Keep track of cluster status

Created on 23 Nov 2018 · 7 comments · Source: kubernetes-sigs/kind

Enabling multi-node support in kind and introducing actions creates the need to know not only the "cluster spec" but also the "cluster status", because, for example (a sketch of retrieving one such value follows the list):

  • it is necessary to keep track of the IP address assigned to the container created for the control-plane in order to join nodes
  • it is necessary to keep track of the IP address assigned to the container created for the external etcd in order to create a working kubeadm-config on control-plane nodes
  • it is necessary to keep track of the IP address assigned to the container created for the external load balancer in order to create a working kubeadm-config on control-plane nodes
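
For instance, the control-plane IP could be read back from Docker right after the container is created. A minimal sketch that shells out to docker inspect (the helper name and the hard-coded container name are illustrative, not kind's actual API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIP asks Docker for the IP address it assigned to a container,
// e.g. the control-plane node, so that other nodes can be joined to it.
// (illustrative helper; kind's real node abstraction may differ)
func containerIP(name string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f",
		"{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
		name).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := containerIP("kind-control-plane")
	if err != nil {
		panic(err)
	}
	fmt.Println("control-plane IP:", ip)
}
```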

The need for a form of "cluster status" becomes even more relevant when enabling actions deferred until after cluster creation (e.g. kubeadm upgrade, see https://github.com/kubernetes-sigs/kind/issues/131).

In terms of implementation, possible approaches are:

  1. use the host filesystem
  2. use the container filesystem (already in use, e.g. for KubeVersion; see the sketch after this list)
  3. use other Docker capabilities (docker inspect for ports/IPs, labels for other metadata; already in use for the cluster label)
  4. a mix of the above
  5. others?
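
As a concrete illustration of option 2, the KubeVersion case mentioned above amounts to reading a file baked into the node container. A sketch, assuming the version lives at /kind/version (the path is an assumption; verify it against the kind source):

```go
package status

import (
	"os/exec"
	"strings"
)

// kubeVersion reads the Kubernetes version stored in the node image,
// an existing example of keeping state in the container filesystem.
// (the /kind/version path is an assumption; check the kind source)
func kubeVersion(node string) (string, error) {
	out, err := exec.Command("docker", "exec", node,
		"cat", "/kind/version").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
```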

opinions?


All 7 comments

/assign

+1 for

use container filesystem (already in use e.g. for KubeVersion)

+1 for 2):
1) the host is something we have less control over, and we want to avoid leaving anything behind on it if we can
2) we already use 2) for tracking pretty much everything else related to the node, except for what 3) covers
3) I don't think anything suitable exists in 3) for this: labels, ports, etc. are all fixed at container creation time.

@BenTheElder @neolit123 quick update after some progress on multi-node

There are two types of info which should be part of the status:

  • runtime info: IPs and the random ports reserved for containers.
    The current approach is to get runtime info from docker inspect (instead of extending the config).
  • config: node role and control-plane hooks should be available after create.
    Currently config.Node is saved into the nodes (using the container filesystem).

The current approach is to get runtime info from docker inspect (instead of extending the config)

we might have to extend the config actually. kind has plans to be non-Docker-centric, and I have no idea what the state of other CRIs is for the same use case.
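
If non-Docker providers are a goal, the runtime-info lookup could be hidden behind a small interface, so that docker inspect becomes just one implementation. A hypothetical sketch (not kind's actual API):

```go
package status

// NodeRuntime abstracts how per-node runtime info is obtained, so a
// non-Docker provider can plug in its own implementation instead of
// docker inspect. (hypothetical interface, not kind's actual API)
type NodeRuntime interface {
	// IP returns the address assigned to the named node container.
	IP(name string) (string, error)
	// HostPorts returns the host ports reserved for the named node.
	HostPorts(name string) ([]int, error)
}

// dockerRuntime would satisfy NodeRuntime via docker inspect; a CRI-based
// provider would supply its own type implementing the same interface.
type dockerRuntime struct{}
```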

config: node role and control-plane hooks should be available after create.
Currently config.Node is saved into the nodes (using the container filesystem).

so I wonder: what file format are we going to save to the FS?
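
One possibility would be to marshal the node's config to a structured format and write it to a well-known path inside the container. A sketch using JSON from the standard library (YAML would work equally well with an extra dependency); NodeConfig, the path, and the helper are all hypothetical stand-ins for config.Node:

```go
package status

import (
	"bytes"
	"encoding/json"
	"os/exec"
)

// NodeConfig is a hypothetical stand-in for kind's config.Node.
type NodeConfig struct {
	Role  string `json:"role"`
	Image string `json:"image"`
}

// saveNodeConfig serializes the config and writes it into the node
// container's filesystem at an illustrative well-known path.
func saveNodeConfig(node string, cfg NodeConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	cmd := exec.Command("docker", "exec", "-i", node,
		"sh", "-c", "cat > /kind/node-config.json")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}
```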

@neolit123 Considering how multi-node shaped up, I think we can greatly simplify this:

  1. The cluster topology can be made discoverable by adding a node-role label at node creation
  2. All the ports internal to the cluster are now pinned to well-known values, so they are already easily discoverable today
  3. All the Kubernetes settings are reflected in the kubeadm-config file, which is already stored in a well-known place and so easily discoverable

So only 1. is missing; ref PR https://github.com/kubernetes-sigs/kind/pull/248
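
With that label in place, topology discovery becomes a single label-filtered listing. A sketch, assuming a hypothetical label key (the real key is whatever the PR above defines):

```go
package status

import (
	"os/exec"
	"strings"
)

// nodesWithRole lists container IDs whose node-role label matches,
// using only metadata attached at `docker run` time.
// (the label key below is a guess; the real one is set by the PR above)
func nodesWithRole(role string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-q",
		"--filter", "label=io.k8s.sigs.kind.role="+role).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}
```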

thanks! :-)
