As a user/operator, I would like the option to specify a timeout for how long the machine controller will wait for a node to drain before forcibly removing the machine.
Today, if a user has specified a pod disruption budget that cannot be fulfilled, CAPI could wait forever on a machine/node that is running a pod whose budget cannot be met and that will therefore never successfully drain. I would like the option to say ...
This would prevent CAPI from getting stuck in an infinite reconcile loop under certain cluster conditions.
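As a rough sketch of the requested behavior (the helper and its arguments are hypothetical names, not existing CAPI code):

```go
package machine

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// drainTimedOut is a hypothetical helper: once the configured timeout has
// elapsed since the drain began, the controller would give up retrying the
// drain and forcibly remove the machine. A nil timeout preserves today's
// behavior of waiting indefinitely.
func drainTimedOut(drainStartTime metav1.Time, nodeDrainTimeout *metav1.Duration) bool {
	if nodeDrainTimeout == nil {
		return false // no timeout configured: keep waiting for the drain
	}
	return time.Since(drainStartTime.Time) > nodeDrainTimeout.Duration
}
```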
/kind feature
/help
/milestone v0.3.0
/priority backlog
@vincepri:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
/milestone v0.3.0
/priority backlog
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc
/milestone Next
@williamsandrew does https://github.com/kubernetes-sigs/cluster-api/pull/2165 solve this for you?
Oh wait, that PR is only for unreachable/unready nodes
Correct. My use case would include nodes that are accessible but are still unable to drain within a certain period of time.
/assign
Investigating what needs to be done here.
Just wanted to get some clarification on expected behavior.
In the Machine Controller, when we drain a node, the default behavior is to evict the pods and then wait for them to be deleted.
In this eviction path we wait forever for the pods to be deleted: the timeout property is set to MaxInt64 in evictPods().
https://github.com/kubernetes-sigs/cluster-api/blob/5f0a49e0fcf2a1cb90b54553e7836600e571aae6/third_party/kubernetes-drain/drain.go#L261-L273
However, if we delete the pods directly by calling deletePods(), we only wait 20s for the deletion.
Here timeout is globalTimeout, which is 20s by default.
https://github.com/kubernetes-sigs/cluster-api/blob/5f0a49e0fcf2a1cb90b54553e7836600e571aae6/third_party/kubernetes-drain/drain.go#L315-L327
It seems that the Machine controller always calls drainNode with DisableEviction: false (the default value for kubedrain.Helper):
https://github.com/kubernetes-sigs/cluster-api/blob/5f0a49e0fcf2a1cb90b54553e7836600e571aae6/controllers/machine_controller.go#L389-L426
So we are going to wait forever when trying to delete pods after eviction. My question is: why are we waiting forever in one case but only 20s in the other?
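For illustration, here is a minimal sketch, assuming the upstream k8s.io/kubectl/pkg/drain API (which third_party/kubernetes-drain is derived from), of how a single caller-supplied timeout could bound both paths instead of MaxInt64 for eviction and 20s for direct deletion:

```go
package machine

import (
	"context"
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/kubectl/pkg/drain"
)

// drainNodeWithTimeout sketches the proposal: one timeout bounds the drain
// whether pods are evicted (DisableEviction: false, the default, which
// respects PodDisruptionBudgets) or deleted directly.
func drainNodeWithTimeout(ctx context.Context, client kubernetes.Interface, nodeName string, timeout time.Duration) error {
	drainer := &drain.Helper{
		Ctx:                 ctx,
		Client:              client,
		Force:               true,    // also remove pods not managed by a controller
		IgnoreAllDaemonSets: true,    // DaemonSet pods would be recreated anyway
		GracePeriodSeconds:  -1,      // use each pod's own termination grace period
		Timeout:             timeout, // one bound for both the evict and delete paths
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}
	if err := drain.RunNodeDrain(drainer, nodeName); err != nil {
		return fmt.Errorf("node %q did not drain within %s: %w", nodeName, timeout, err)
	}
	return nil
}
```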
Feel free to reach out on Slack to discuss more. 🙂
Other thoughts:
Where does the Timeout: 20s come from?
Thanks for the great write-up @wfernandes, let's sync up at the community meeting. +1 to using a different default where we're currently using MaxInt64; that seems like a rather large timeout. What would be a good default we could get into v0.3.7, @detiber?
@vincepri I added a topic under the General Questions section for the July 8th meeting.
If I had to hazard a guess, the idea was to err on the side of workload safety, since pod deletion should be subject to any pod disruption budgets that are configured; if we force early deletion we could be violating those budgets.
It does seem that MaxInt64 is a tad excessive, but I would think that 20s is probably a bit short if we want to avoid evicting or deleting user workloads in a way that could violate disruption budgets.
That said, if we want to respect PDBs, then we also need to ensure we are calling the eviction APIs by default: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work
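As a sketch of that distinction using plain client-go (not CAPI code): deleting a pod outright bypasses PDBs, whereas the eviction subresource fails with HTTP 429 when a budget would be violated:

```go
package machine

import (
	"context"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// evictPod uses the eviction subresource, so the API server rejects the
// request with 429 TooManyRequests if it would violate a
// PodDisruptionBudget; a plain Pods().Delete() call has no such check.
func evictPod(ctx context.Context, c kubernetes.Interface, namespace, name string) (blockedByPDB bool, err error) {
	err = c.PolicyV1beta1().Evictions(namespace).Evict(ctx, &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
	})
	if apierrors.IsTooManyRequests(err) {
		return true, nil // a PDB currently blocks this eviction; retry later
	}
	return false, err
}
```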
Hi all, is anyone working on this issue? If not, I'd like to take care of it. My plan is:
First, add a NodeDrainTimeout parameter to KCP and MachineDeployment, and have these objects pass the parameter down to the corresponding Machines.
Hey @namnx228, this issue is all yours. Feel free to /assign and mark it as /lifecycle active.
I believe this issue was specifically for worker machines and not the control plane nodes. However, if there is a use case for the control plane then I think this can be done all at once.
I don't have much context for this specifically but maybe @detiber and/or @vincepri can offer some insight. Else bring it up as a topic during this week's CAPI meeting.
/unassign
/assign
/lifecycle active
@wfernandes that is great, thanks!
Yes, there is a use case where we need to run workloads on the control plane on bare metal, so it would be good to take care of KCP as well.
I will ask on Slack or in the meeting if I need more explanation.
@namnx228 thanks for working on this!
First, add a NodeDrainTimeout parameter to KCP and MachineDeployment, and have these objects pass the parameter down to the corresponding Machines.
I think it makes sense to add this to the KCP spec, but the MachineDeployment already has the MachineSpec, so I don't know that you'd need to add anything to the MachineDeployment (since you'll be adding a field to MachineSpec). WDYT?
@ncdc yes, it is a good idea to make use of the MachineSpec instead of making changes to the MachineDeployment. Thanks for your suggestion.
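For reference, here is a sketch of the API change being discussed; the exact field name, type, and comment are assumptions based on this thread rather than a merged implementation:

```go
package v1alpha3

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// MachineSpec gains the new field; KCP's spec would carry the same field
// and copy it down to the Machines it manages, while MachineDeployment
// users set it via spec.template.spec, which is already a MachineSpec.
type MachineSpec struct {
	// NodeDrainTimeout is the total amount of time the controller spends
	// draining the node before forcibly deleting the Machine. A nil value
	// means "retry forever", preserving today's behavior.
	// +optional
	NodeDrainTimeout *metav1.Duration `json:"nodeDrainTimeout,omitempty"`
}
```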