Containers-roadmap: [EKS]: Stop and Start Master Servers

Created on 7 Jun 2019 · 10 comments · Source: aws/containers-roadmap

_NOTE: This issue is a blatant copy of #133, which generated a bit of interest but was closed by the author before it could be reviewed by AWS. Because the author hasn't been able to re-open that issue, I thought it best to re-create it so that it still gets noticed. If preferred, you can close this issue and re-open the original one instead._


Tell us about your request

Allow shutting down and starting up the masters on demand.

Which service(s) is this request for?

EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

There are two outcomes I'm trying to achieve.

  1. I just want to save money. I only need to run my application during the day, not at night, so I'd also like the masters up during the day and shut down at night.

  2. To gain confidence in the system. I already believe that Kubernetes is very robust, but there are certain things out of my control that could still impact it. For example, my company's internal network is not the best, so from time to time there could be network issues. I'd like to be certain, through my own testing, that Kubernetes is able to start up again under those odd conditions.

Are you currently working around this issue?

There is currently no way to shut down the master nodes.

EKS Proposed

All 10 comments

We have a large base of developer systems and environments that we 'auto pause' every night to save the business money. We can do this with nearly every AWS service we use, except the EKS control plane.

Two suggestions:

  • Have it 'zero bill', or charge a lower hourly burn-rate fee, when no associated kubelets are active.
  • Provide an API command that can 'stop' the EKS control plane and another that can 'start' it again, perhaps at a lower 'standby' reservation fee if there are concerns on AWS's side about stopped-but-allocated resources. (Other AWS services already expose this pattern; see the sketch below.)
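For reference, this is what the requested stop/start pattern already looks like in services that support it. A minimal boto3 sketch using RDS, where the instance identifier is a placeholder:

```python
import boto3

rds = boto3.client("rds")

# Pause a dev database for the night. While stopped, RDS bills for storage
# but not for instance hours; that is the kind of 'standby' pricing this
# request asks for on the EKS control plane.
rds.stop_db_instance(DBInstanceIdentifier="dev-db")  # placeholder identifier

# ...and bring it back in the morning.
rds.start_db_instance(DBInstanceIdentifier="dev-db")
```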

We currently use ASG scheduled actions to shut down our non-prod cluster nodes outside working hours (roughly as sketched below), but our EKS control plane just sits there, doing nothing and costing us money. I'd love to see similar schedules for the control plane, or something that tracks the desired instance count of the worker ASG.
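For anyone setting up the same workaround, a minimal boto3 sketch of those worker-node schedules; the ASG name, cron expressions, and sizes are placeholders:

```python
import boto3

asg = boto3.client("autoscaling")
GROUP = "eks-dev-workers"  # placeholder ASG name

# Scale the worker nodes to zero at 19:00 UTC on weekdays...
asg.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="stop-nightly",
    Recurrence="0 19 * * 1-5",  # cron expression, evaluated in UTC
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)

# ...and back up at 07:00 UTC. The control plane keeps billing either way,
# which is the gap this issue asks AWS to close.
asg.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="start-morning",
    Recurrence="0 7 * * 1-5",
    MinSize=2, MaxSize=4, DesiredCapacity=2,
)
```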

Do we have any update on the above request?

Same as the others above: in my organization we need to run several EKS clusters for development purposes, and having a way to spin down the control-plane instances is something we would be interested in. Currently, we have to destroy the clusters every day (roughly as sketched below).
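For context, the destroy-and-recreate cycle teams fall back on looks roughly like this in boto3; the cluster name, role ARN, and subnet IDs are placeholders, and EKS additionally requires node groups and Fargate profiles to be deleted before the cluster itself:

```python
import boto3

eks = boto3.client("eks")

# Evening: tear the dev cluster down entirely, since it cannot be stopped.
eks.delete_cluster(name="dev-cluster")

# Morning: recreate it from scratch (all identifiers are placeholders),
# then reinstall every add-on and redeploy every workload.
eks.create_cluster(
    name="dev-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
)
```

A simple stop/start call would replace all of this and keep cluster state intact.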

This is a highly desirable request. For small development clusters, paying $144 per cluster per month for an idle cluster quickly adds up to a big bill (even if only 10 developers try out EKS). This feature is available in almost all other AWS products!
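The arithmetic behind that figure, assuming the control-plane price at the time of $0.20 per hour:

```python
hourly_rate = 0.20                            # EKS control-plane price at the time, USD/hour
hours_per_month = 24 * 30                     # 720 hours
per_cluster = hourly_rate * hours_per_month   # $144/month per idle cluster
ten_devs = 10 * per_cluster                   # $1,440/month if 10 devs each run one
print(f"${per_cluster:.0f} per cluster, ${ten_devs:,.0f}/month for ten")
```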

Agree with everyone else: not having this feature is a turn-off for small dev teams like mine.

We need this badly. We are a small team and want to reduce our bill by doing so.
It would be great if an option were added to stop/start the Kubernetes engine in EKS.
Any updates on this?

Another possible use for this feature: when you need to do a rolling restart of the k8s control plane after (for example) complications resulting from associating a new CIDR block with your VPC. Currently AWS support has to do this, and I got the impression that they are not eager to do so (they didn't mention that they could, and first suggested upgrading k8s as a workaround).

I think it would make sense to couple this with #724 - where #724 would be a first step towards implementation.
