Eksctl: compatibility with aws autoscaler with nodegroup min=0

Created on 26 Jul 2019 · 3 comments · Source: weaveworks/eksctl

Why do you want this feature?
Currently, eksctl examples using the AWS Kubernetes cluster autoscaler work when at least one node is always running, but we'd like to save costs by scaling from zero nodes. A few extra settings are required for this:
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#scaling-a-node-group-to-0

What feature/behavior/change do you want?
The current workaround is to manually add node labels and taints as tags on the ASGs. For example, given this nodegroup configuration:

  - name: dask-worker
    instanceType: r5.2xlarge
    minSize: 0
    maxSize: 100
    volumeSize: 100
    volumeType: gp2
    labels:
      node-role.kubernetes.io/worker: worker
      k8s.dask.org/node-purpose: worker
    taints:
      k8s.dask.org/dedicated: 'worker:NoSchedule'
    desiredCapacity: 0
    ami: auto
    amiFamily: AmazonLinux2
    iam:
      withAddonPolicies:
        autoScaler: true
        efs: true

We currently have to manually add the following tags to the corresponding ASG:

k8s.io/cluster-autoscaler/node-template/label/k8s.dask.org/node-purpose     worker
k8s.io/cluster-autoscaler/node-template/taint/k8s.dask.org/dedicated     worker:NoSchedule
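As a sketch, the manual workaround can be scripted with the AWS CLI's `aws autoscaling create-or-update-tags` command. The ASG name below is a placeholder; eksctl generates the real name, which you can look up with `aws autoscaling describe-auto-scaling-groups` or in the EC2 console.

```shell
# Hypothetical ASG name -- substitute the one eksctl created for the nodegroup.
ASG_NAME="eksctl-mycluster-nodegroup-dask-worker-NodeGroup"

# Add the node-template label and taint tags the cluster autoscaler needs
# to scale this group up from zero nodes.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=${ASG_NAME},ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/label/k8s.dask.org/node-purpose,Value=worker" \
  "ResourceId=${ASG_NAME},ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/taint/k8s.dask.org/dedicated,Value=worker:NoSchedule"
```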

Perhaps a flag could be added to propagate labels and taints in the config file to ASG tags when running eksctl create nodegroup?

related:
https://github.com/weaveworks/eksctl/issues/1012#issuecomment-511491537
https://github.com/weaveworks/eksctl/issues/170

kind/feature

All 3 comments

@scottyhq, this can be achieved using tags in your nodegroup config:

nodeGroups:
  - name: autoscalingNodegroup
    instanceType: m5.xlarge
    desiredCapacity: 0
    minSize: 0
    maxSize: 10
    tags:
      k8s.io/cluster-autoscaler/node-template/label/k8s.dask.org/node-purpose: worker
      k8s.io/cluster-autoscaler/node-template/taint/k8s.dask.org/dedicated: "worker:NoSchedule"
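For completeness, the autoscaler uses these node-template tags to decide whether scaling the empty group up would make a pending pod schedulable: it matches the pod's nodeSelector and tolerations against the label and taint templates. A pod targeting this nodegroup might look like the following (pod name and image are illustrative, not from the original thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dask-worker-example   # hypothetical name
spec:
  nodeSelector:
    k8s.dask.org/node-purpose: worker   # matches the node-template label tag
  tolerations:
    - key: k8s.dask.org/dedicated       # matches the node-template taint tag
      operator: Equal
      value: worker
      effect: NoSchedule
  containers:
    - name: worker
      image: daskdev/dask   # illustrative image
```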

Thanks @adamjohnson01 - just wanted to confirm that the above config works for on-demand nodes.

If people are using mixed spot instances, scaling from zero requires Kubernetes and cluster-autoscaler 1.14, which is now out: https://github.com/kubernetes/autoscaler/issues/2246#issuecomment-520129217.

So I think this issue can be closed!

I'm having trouble scaling up from 0 with spot instances. Is that feature not available?
