Both Azure and Google Cloud allow for an in place resizing of node groups (if it's an autoscaling node group, then you can change the min and max size of the node group). I have not found any way for eksctl to replicate this functionality, and it is quite inconvenient. For example, I have predictable spikes in usage, and I would like to change the min number of nodes in an autoscaling pool in order to anticipate these spikes in usage, so I don't have the delay of spinning up new nodes. However, I don't want to delete and recreate the node group because there will be existing pods in the old node group that cannot be migrated.
Is this a feature on the roadmap (I have not been able to find any documentation on it)? Or is there an existing workaround that isn't documented?
Hi @albertmichaelj :wave:
Have you tried the command eksctl scale nodegroup --cluster <cluster name> --name <nodegroup name> --nodes <new desired capacity>?
Example:
$ eksctl scale nodegroup --cluster test-1 --name ng-1 --nodes 4
[ℹ]  scaling nodegroup stack "eksctl-test-1-nodegroup-ng-1" in cluster eksctl-test-1-cluster
[ℹ]  scaling nodegroup, desired capacity from 1 to 4, max size from 2 to 4
Is this what you are looking for?
@martina-if I think he wants to change the minimum size of the ASG after creation, rather than just the desired size. I am guessing he wants to use the cluster-autoscaler but when he sees load coming he wants to boost the minimum ASG size to prevent the cluster-autoscaler scaling down below that.
@albertmichaelj I agree there should be a way to change the min/max too with `scale nodegroup`. For the cluster-autoscaler, you can configure it to provision different amounts of reserve capacity, so it may be easier to do this in-cluster by reconfiguring the cluster-autoscaler when you see load coming.
@whereisaaron Yeah, I don't think that changing desired capacity is quite the same thing. This seems to be a feature offered by the CLI interfaces of other cloud platforms (Google Cloud and Azure). I also think you can do this from the AWS console. It would be nice to have an easy way to do it with eksctl. I'm not sure how I would configure the cluster autoscaler. I am using the autodiscovery config for the cluster autoscaler, so it's not as simple as changing the number of nodes in the autoscaler config. Anyway, competing tools do this, so it would probably be good to have it in eksctl.
Cluster autoscaler over-provisioning is a cluster-internal technique, so it works the same on any cloud. Auto-discovery doesn't affect over-provisioning, so you can use both at once.
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler
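For reference, the technique in that FAQ boils down to a low-priority placeholder Deployment. A minimal sketch (the names, replica count, and resource sizes here are illustrative, not prescriptive):

```yaml
# Illustrative only: a negative-priority class plus "pause" pods that
# reserve capacity. The cluster-autoscaler adds nodes to fit them, and
# real workloads preempt them when they arrive.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10
globalDefault: false
description: "Priority class for placeholder pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2                  # amount of reserve capacity, adjust to taste
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: reserve
          image: registry.k8s.io/pause
          resources:
            requests:
              cpu: "1"         # size of each reserved slot
              memory: 1Gi
```

Scaling the `overprovisioning` Deployment up or down changes how much headroom the autoscaler keeps warm.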
@whereisaaron I have looked at cluster overprovisioning in this way. In fact, I am actually running a JupyterHub server on EKS for a university setting, and there is overprovisioning through "dummy" pods built in. However, this doesn't really solve the problem. What I would like to do is right before class starts, I scale up the min size of the cluster to the point where all students can start a Jupyter notebook server. I don't want any additional nodes running, but I still want to have capacity in the cluster since there may be students in other sections who decide to log in to do their homework. I am okay if it takes a moment for them to spin up a pod since the cluster might have to add new nodes.
However, with the overprovisioning, what would happen is that the "dummy" pod deployment would add more nodes to accommodate the dummy pods up to the maximum number. So, I would be paying for effectively empty nodes since I know how many nodes I need for the students to use them in class. Does this distinction make sense as to why I want to reset the minimum size directly? As far as I know, this is not possible with the cluster autoscaler (at least in autodiscovery mode). If it is, I would love advice on how to do this.
If just changing min-size for a while is what you want to do, I think you could just use the AWS console/API/CLI to change it directly. Just note that any CFN / eksctl change will probably set it back to the original setting.
Bigger picture, I think maybe you are trying to do manual and automatic scaling at the same time in the same node pool. Perhaps keep your current node pool and add a second node pool with a 0 to 100 or whatever range that is manually scaled with eksctl scale. Then you can scale it to 0 most of the time, and then add the specific number of extra nodes you are after right before class?
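To sketch what that second pool might look like in an eksctl config (the cluster name, region, instance type, and sizes below are made up for illustration):

```yaml
# Illustrative eksctl config: a normally-empty "burst" pool alongside the
# autoscaled one. It stays at zero nodes until scaled up manually.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # assumed cluster name
  region: us-east-2
nodeGroups:
  - name: ng-burst
    instanceType: m5.large
    minSize: 0
    maxSize: 100
    desiredCapacity: 0    # empty until scaled manually
```

Then something like `eksctl scale nodegroup --cluster=my-cluster --name=ng-burst --nodes=20` right before class, and back to `--nodes=0` afterwards.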
I agree that I can change it in the AWS console, and I also agree that I can achieve the same goal by having two different node pools and managing preferred and required affinities. However, it would be much simpler to run a single eksctl resize-like command that changes the min number of nodes.
This is a feature request, and I understand there are other ways to achieve my goals. However, this feature would be the best way to achieve it, and it is a feature that competing platform CLIs have. I encourage the project to consider adding it. If there is no intention to add it, please feel free to close this issue.
> @albertmichaelj I agree there should be a way to change the min/max too with `scale nodegroup`
Oh as I said at the outset, I agree eksctl scale nodegroup should be extended to be able to change the min, max, and desired.
Perhaps retitle this to target that feature request, since I think it is a separate issue from autoscaling? It is more about manual scaling, which we should be able to do, e.g. manually increase the max to scale up, or the minimum, for your use case.
Sounds good. I updated the title. I can't figure out how to add a label (maybe I don't have the ability). Let me know if the title is not sufficiently clear.
Thanks!
I can't figure out how to add a label
We don't expose this in eksctl in any way right now, but you should be able to easily do it with kubectl:
kubectl label nodes -l alpha.eksctl.io/nodegroup-name=ng-1 new-label=foo
@errordeveloper When I said "I can't figure out how to add a label" I meant to the issue. I wanted to add "FEATURE REQUEST" as an explicit label for this issue, but I couldn't figure out how to do it. So, I just typed it in. This is just to make it easier to track and organize the issue. If you have the ability, please tag the issue.
@albertmichaelj ah, I see! That's a github limitation, one has to have write access to the repo.
@albertmichaelj - thanks for bringing this up. We are also running JupyterHubs and temporarily need to scale minimum node numbers for workshops (most of the time we want to scale everything to zero).
> Oh as I said at the outset, I agree eksctl scale nodegroup should be extended to be able to change the min, max, and desired.
Just want to echo that this would be useful. Currently, running eksctl scale nodegroup --cluster=my-cluster --nodes=1 --name=my-nodes only has a temporary effect if the nodegroup is set up with a min-size of 0, because the autoscaler will remove the node again.
Our current workaround is to run this before a workshop:
aws autoscaling update-auto-scaling-group --auto-scaling-group-name $ASG_NAME --min-size=20
Isn't this documented here? https://github.com/weaveworks/eksctl/blob/master/examples/05-advanced-nodegroups.yaml#L13-L14
but not in the "official docs", which don't have some other things either
@matti I don't think that is the same thing. That is setting a minimum at nodegroup creation time. What this feature request is about is changing the minimum (and maximum) size of an ASG after creation. This is more about mutability of ASGs than it is about the inability to set something at creation time.
Following @scottyhq suggestion we use something like this as workaround for now:
# Retrieve ASG name in AWS
ASG_NAME=$(aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[?contains(Tags[?Key==`alpha.eksctl.io/nodegroup-name`].Value, `'"$NG_NAME"'`)].AutoScalingGroupName' | jq -r 'first')
# Update it
aws autoscaling update-auto-scaling-group --auto-scaling-group-name $ASG_NAME --min-size=2
aws autoscaling update-auto-scaling-group --auto-scaling-group-name $ASG_NAME --max-size=10
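The two update calls above can be wrapped in one helper so the before/after-workshop steps are a single command each. This is a sketch, not part of eksctl: `update_asg_bounds` is a made-up function name, and the `DRY_RUN` switch just prints the aws-cli command instead of calling AWS.

```shell
#!/usr/bin/env sh
# Hypothetical helper: set an ASG's min and max size in one call.
# Set DRY_RUN=1 to print the aws-cli command instead of executing it.
update_asg_bounds() {
  asg_name=$1
  min=$2
  max=$3
  cmd="aws autoscaling update-auto-scaling-group --auto-scaling-group-name $asg_name --min-size $min --max-size $max"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

# Example: raise the floor before a workshop, drop it back afterwards.
# update_asg_bounds "$ASG_NAME" 20 30   # before class
# update_asg_bounds "$ASG_NAME" 0 30    # after class
```

Remember that a later eksctl/CloudFormation update of the nodegroup may reset these values, as noted earlier in the thread.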
It would be nice indeed to have this feature to update mutable nodegroup parameters such as min or max size; it would make implementing Infrastructure as Code and GitOps principles with eksctl easier.
+1
I had this same issue so I added the --nodes-max parameter to my command and it worked:
$ eksctl scale nodegroup \
--cluster EKS-course-cluster \
--nodes 5 \
--nodes-max 5 \
--name ng-1 \
--profile my-aws-profile \
--region us-east-2
Regards,
I used your command but I am getting this error:
Error: failed to scale nodegroup for cluster "EKS-course-cluster", error the desired nodes 2 is less than current nodes-min/minSize 3
> Hi @albertmichaelj 👋
>
> Have you tried the command `eksctl scale nodegroup --cluster <cluster name> --name <nodegroup name> --nodes <new desired capacity>`? Example:
>
> $ eksctl scale nodegroup --cluster test-1 --name ng-1 --nodes 4
> [ℹ]  scaling nodegroup stack "eksctl-test-1-nodegroup-ng-1" in cluster eksctl-test-1-cluster
> [ℹ]  scaling nodegroup, desired capacity from 1 to 4, max size from 2 to 4
>
> Is this what you are looking for?
I am getting this error:
Error: failed to scale nodegroup for cluster "EKS-course-cluster", error the desired nodes 2 is less than current nodes-min/minSize 3
Hi,
Is there a way to update the max count without touching the desired count?
@shivani-hotstar, yes, this ticket is resolved. You can now use `--nodes-max` and `--nodes-min` with the `scale nodegroup` command.