Kind: External ETCD

Created on 23 Nov 2018 · 15 comments · Source: kubernetes-sigs/kind

Multi node might include scenarios with an external etcd.

A possible solution is to create an etcd container in addition to the cluster nodes.
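For illustration only, a rough sketch of what that could look like (nothing below is implemented in kind; the etcd image tag, the hostname resolution between containers, and the kubeadm config API version are placeholders that would have to match what the node image actually ships):

```bash
# Sketch only: run etcd as a sibling container next to the node containers.
# Image tag and container-name resolution are assumptions.
docker run -d --name kind-external-etcd \
  k8s.gcr.io/etcd:3.2.24 \
  etcd --listen-client-urls http://0.0.0.0:2379 \
       --advertise-client-urls http://kind-external-etcd:2379

# kubeadm on the control-plane node would then point at it via the
# external etcd section of its config (API version depends on the
# kubeadm version in the node image):
cat <<'EOF' > kubeadm-external-etcd.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - http://kind-external-etcd:2379
EOF
```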

This raises the same questions/options discussed in https://github.com/kubernetes-sigs/kind/issues/134

opinions?
/assign

kind/design kind/feature lifecycle/rotten priority/backlog

All 15 comments

I've previously mentioned this to Fabrizio, but I'd like to maintain the status quo that all artifacts necessary to create a cluster are contained in a node-image snapshot. External etcd could be compatible with this by grabbing the etcd image from the node, but it might be a bit ugly.
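For example, "grabbing the etcd image from the node" could look roughly like this (sketch only; it assumes the node still runs an inner Docker daemon with the etcd image pre-loaded, and the node name here is just an example):

```bash
# Sketch: find the etcd image pre-loaded in a node container and copy it
# out to the host, so nothing extra has to be pulled from a registry.
NODE=kind-1-control-plane  # example node name
ETCD_IMAGE=$(docker exec "${NODE}" \
  docker images --format '{{.Repository}}:{{.Tag}}' | grep etcd | head -n1)
docker exec "${NODE}" docker save "${ETCD_IMAGE}" | docker load
```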

I think this should probably be optional, with the default not being external etcd.

I think this should probably be optional, with the default not being external etcd.

yes, definitely. this is just a nice-to-have.
the goal is to be able to test external etcds, as this is a common scenario too.
currently working on a survey that has a local vs external etcd question for kubeadm.

I've previously mentioned this to Fabrizio, but I'd like to maintain the status quo that all artifacts necessary to create a cluster are contained in a node-image snapshot. External etcd could be compatible with this by grabbing the etcd image from the node, but it might be a bit ugly.

hm, so an external etcd node is a node where:

  • a slightly different unit file for the kubelet is used compared to the default one for kubeadm
  • kubeadm init phase commands are run instead of the kubeadm init parent command
  • after writing the etcd static pod manifest the kubelet picks it up
  • health check is performed using etcdctl + docker.

to my understanding this is already part of the node-image.

if we have access to a node that is in a blank state with kubeadm and kubelet installed, plus a list of IPs it has to know about, we can run "some scripts" on it:
https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/
(it's slightly outdated)

and we can create an external etcd node.
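roughly, the "some scripts" from that page boil down to something like this on the etcd node (just a sketch; the config path and image tag are placeholders, and the command spellings below are the kubeadm >=1.13 phase names plus etcdctl v3, so exact flags may differ per version):

```bash
# 1. generate the etcd CA and serving/peer/client certs from a kubeadm config
kubeadm init phase certs etcd-ca
kubeadm init phase certs etcd-server --config=/tmp/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/kubeadmcfg.yaml

# 2. write the etcd static pod manifest; the (tweaked) kubelet picks it up
kubeadm init phase etcd local --config=/tmp/kubeadmcfg.yaml

# 3. health check using etcdctl + docker
docker run --rm --net host -e ETCDCTL_API=3 \
  -v /etc/kubernetes:/etc/kubernetes \
  k8s.gcr.io/etcd:3.2.24 \
  etcdctl --endpoints https://127.0.0.1:2379 \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
    endpoint health
```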

i think the question would be who/what would spawn such a node and run the scripts. to me it seems like there has to be some sort of a controller for performing arbitrary actions.

but this is starting to feel like doing cluster api as kind being the provider. :)

well the CLI already performs actions over the nodes via the exec tooling, we just need to properly select them by node type and manage them a little better, ref #131

if we have access to a node that is in a blank state with kubeadm and kubelet installed

kind pretty much does this during part of bring-up currently, so this should be doable.

but this is starting to feel like doing cluster api as kind being the provider. :)

Funny, I did originally consider kind being a cluster API implementation, but it seemed goofy that all the existing tooling would require first creating a minikube cluster at some fixed version before creating a kind cluster at some other version... If not for that, I might have actually done that, with some much simpler config / APIs as an option for the average user.

Funny, I did originally consider kind being a cluster API implementation, but it seemed goofy that all the existing tooling would require first creating a minikube cluster at some fixed version before creating a kind cluster at some other version... If not for that, I might have actually done that, with some much simpler config / APIs as an option for the average user.

i think they removed minikube being mandatory as a pre-step, so using the clusterctl CLI tool it is now possible to just use an existing config:
https://github.com/kubernetes-sigs/cluster-api/blob/master/cmd/clusterctl/cmd/create_cluster.go#L133

Interesting. We'd still need like, a second kind cluster or something as far as I can tell... I don't think any local cluster implementation should depend on another cluster first :^)

so the alternative route there, i see as kind being a provider implementation of the cluster api, with the CLI being something like clusterctl. so the controllers and machine actuators would need to feed into docker nodes, somehow.

but i'm having difficulties envisioning this properly.
maybe @justinsb and @roberthbailey have already thought about this. :)

and...i mean, the cluster api running its machines under docker is a pretty amazing concept.
this would supposedly be like the pinnacle of cluster testing... x)

Right now there are two ways to create a cluster using clusterctl and the cluster API: 1) use a bootstrap cluster and pivot management of the new cluster into the cluster itself, or 2) use a "management" cluster and leave management of the target cluster in place there.

For the first option, the default bootstrap cluster is minikube, but you can point clusterctl at any existing cluster and use that instead.

We've also discussed doing a "one-shot" type of deployment (no bootstrapping cluster) and a couple of other alternatives during the bootstrapping part. Part of the reason to break clusterctl into phases was to enable some experimentation around this part of the process.

Thanks for the input @roberthbailey

It sounds to me like, for the moment, it's most reasonable that kind might be experimented with as a bootstrap cluster, but bringing kind itself up with the cluster API probably is not, which is pretty much what I expected.

kind sits closer to the minikube space, and we're not bringing that up with the cluster API either.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
