Filed on behalf of a Rancher customer:
I am trying to build an AWS environment for K3s and want to set it up so there is no downtime.
My concept is to fire up a master using a Launch Configuration/Auto Scaling group and keep one always running.
Then have another launch configuration / Auto Scaling group to start multiple nodes.
Every time the master terminates and spins up again, it has a new token that the nodes will need in order to connect.
I tried to use a MySQL DB as the back-end to control the tokens, but the master still generates a new token every time it terminates and starts again.
How do I have the master spin up with either a pre-defined token, or save the token for reuse when it starts again?
I am trying to do exactly what the customer is describing.
Right now my temporary solution is to mount an EFS volume at /mnt/efs automatically in my cloud-init script, then (also in cloud-init) start the k3s server with something like --data-dir /mnt/efs/var/lib/rancher/k3s.
So when the server instance dies, the ASG spins a new instance up, and the _new_ k3s server sees its _old_ state.
It seems that k3s supports this, as far as I can tell right now.
I am open to hearing a better solution though.
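For reference, a minimal cloud-init user-data sketch of that EFS approach. The EFS filesystem ID, region, and mount options are assumptions/placeholders, not taken from this thread:

```shell
#!/bin/bash
# Hypothetical cloud-init sketch: persist k3s server state on EFS so a
# replacement instance spun up by the ASG finds the old state (and token).
set -euo pipefail

# Placeholder EFS mount target -- substitute your own filesystem ID and region.
EFS_DNS="fs-12345678.efs.us-east-1.amazonaws.com"
MOUNT_POINT="/mnt/efs"

mkdir -p "${MOUNT_POINT}"
# NFSv4.1 mount options commonly recommended for EFS.
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  "${EFS_DNS}:/" "${MOUNT_POINT}"

# Start the k3s server with its data directory on the shared volume.
curl -sfL https://get.k3s.io | sh -s - server \
  --data-dir "${MOUNT_POINT}/var/lib/rancher/k3s"
```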
K3S_TOKEN is autogenerated every time, as you mentioned. But there is also a K3S_CLUSTER_SECRET variable which can be defined by the user (and works the same way as the token). So instead of using the token, you can specify K3S_CLUSTER_SECRET=somepassword when starting the server, and then use the same K3S_CLUSTER_SECRET env variable when starting the nodes to join the server.
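A short sketch of that approach, using the k3s install script; the secret value and server address are placeholders:

```shell
# On the server: supply a user-chosen cluster secret instead of relying on
# the autogenerated token.
curl -sfL https://get.k3s.io | K3S_CLUSTER_SECRET=somepassword sh -s - server

# On each node: the same secret (plus the server URL) joins it to the cluster.
curl -sfL https://get.k3s.io | K3S_CLUSTER_SECRET=somepassword \
  K3S_URL=https://<server-address>:6443 sh -s - agent
```

Because the secret is user-defined, a replacement master launched by the ASG with the same K3S_CLUSTER_SECRET will accept the existing nodes without any token handoff.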