Hi,
so I have a misunderstanding of the "Getting Started" guide.
Let's say I have this inventory:
[all]
k8s-m1 ansible_host=<IP> ip=<IP>
k8s-m2 ansible_host=<IP> ip=<IP>
k8s-m3 ansible_host=<IP> ip=<IP>
k8s-w1 ansible_host=<IP> ip=<IP>
k8s-w2 ansible_host=<IP> ip=<IP>
k8s-w3 ansible_host=<IP> ip=<IP>
[kube-master]
k8s-m1
k8s-m2
k8s-m3
[etcd]
k8s-m1
k8s-m2
k8s-m3
[kube-node]
k8s-w1
k8s-w2
k8s-w3
[k8s-cluster:children]
kube-master
kube-node
Now let's say I want to add k8s-w4.
If I want to scale my cluster, should I delete the other nodes from the inventory and only add the new node?
And if I just want to add a new worker node (WITHOUT DATA LOSS), should I use scale.yaml or cluster.yaml?
If I add the new node to the existing inventory file and use scale.yaml, there won't be any data loss or a broken cluster or anything?
Thanks.
If you are adding a new node you only need to run the scale.yaml playbook, and it won't re-walk through setting up all of the master components. Just add the new host to your inventory and run scale.yaml without removing anything; it won't affect the running nodes.
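As a concrete sketch of the steps above: append k8s-w4 to the existing inventory and re-run only the scale playbook. The inventory path `inventory/mycluster/hosts.ini` is a placeholder, and flags like `-b` (become root) depend on your setup, so adjust for your environment:

```shell
# 1. Add the new worker to the EXISTING inventory (do not remove the old hosts):
#
#    [all]
#    ...
#    k8s-w4 ansible_host=<IP> ip=<IP>
#
#    [kube-node]
#    ...
#    k8s-w4
#
# 2. Run only the scale playbook against the full inventory
#    (hypothetical inventory path; use your own):
ansible-playbook -i inventory/mycluster/hosts.ini scale.yaml -b
```

Running scale.yaml rather than cluster.yaml is what avoids re-configuring the existing masters and workers.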
To remove a node, run playbooks/remove-node.yml --extra-vars "node=NODENAME". The removal will drain the node before removing it, then remove it from the inventory file afterwards. If you don't specify node=, it will assume you mean ALL nodes; I found that one out the hard way in production.
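The removal described above would look something like this. Again a sketch: the playbook path and node name follow the comment above and may differ between Kubespray versions, and the inventory path is a placeholder:

```shell
# Drain and remove exactly ONE node.
# Always pass node= explicitly -- omitting it targets ALL nodes.
ansible-playbook -i inventory/mycluster/hosts.ini playbooks/remove-node.yml -b \
  --extra-vars "node=k8s-w4"
```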
Thanks a lot, that helped me a lot <3 @bwunderlich824
@bwunderlich824 Please report this as a bug; this is utterly unacceptable.
To remove a node, run playbooks/remove-node.yml --extra-vars "node=NODENAME". The removal will drain the node before removing it, then remove it from the inventory file afterwards. If you don't specify node=, it will assume you mean ALL nodes; I found that one out the hard way in production.
@bwunderlich824 Yikes! Assuming ALL nodes when --extra-vars "node=NODENAME" is not specified is scary. The default should be NONE, with a message saying something like "Node to remove not specified."
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.