Tell us about your request
What do you want us to build?
Allow for shutting down and starting up the masters on demand.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
What outcome are you trying to achieve, ultimately, and why is it hard/impossible to do right now? What is the impact of not having this problem solved? The more details you can provide, the better we'll be able to understand and solve the problem.
There are two outcomes I'm trying to achieve.
Are you currently working around this issue?
How are you currently solving this problem?
There is currently no way to shut down the master nodes.
Additional context
Anything else we should know?
Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)
@jason-riddle How many applications are you deploying to Kubernetes? There are other containerized platforms, perhaps ECS or Fargate, that could be triggered by a cron job to spin up/down, which would take down those individual applications instead of a large platform like EKS. Kubernetes isn't really designed to be used in this way.
There are other containerized platforms, perhaps ECS or Fargate, that could be triggered by a cron job to spin up/down, which would take down those individual applications instead of a large platform like EKS.
You make a great point. ECS and Fargate would be a better fit for spinning down the application at night. But I'm more curious about your second point.
Kubernetes isn't really designed to be used in this way.
What makes you say this? Are you suggesting that the scheduler should be up all the time? With GKE, I believe you can completely scale down your cluster with the following command:
gcloud container clusters resize $CLUSTER_NAME --size=0 --zone $ZONE
Also, why should this be treated differently than shutting down any EC2 instance or even an RDS instance?
@jason-riddle I'm not a GKE user, but I believe that command just scales the worker nodes down to 0, which is different from turning off the control plane. You can do this in EKS too, since your worker nodes are just an Auto Scaling group that you control: scale them down and your applications simply won't have anywhere to run. However, the Kubernetes control plane (EKS) will still be running and has to maintain quorum. Turning that off would also mean turning off the etcd cluster and everything else it takes to run the control plane.
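For anyone who wants to try this, here's a rough sketch of scaling the EKS worker nodes to zero with the AWS CLI. The Auto Scaling group name (my-eks-workers) and the scale-up sizes are just placeholders for whatever your node group actually uses:

# Evening: scale the worker-node Auto Scaling group to zero
# (the EKS control plane keeps running, and keeps billing)
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-eks-workers \
  --min-size 0 --desired-capacity 0

# Morning: scale the workers back up
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-eks-workers \
  --min-size 1 --desired-capacity 3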
@jason-riddle I'm in agreement with @micahlmartin. Upon reading your issue, my first thought was to simply scale the worker nodes down to 0. The bulk of the operating costs will be in the EC2 instances, storage, load balancers, etc. Even with the worker nodes stopped, you would still pay for EBS volume storage.
The EKS control plane (which takes the place of the traditional master nodes) only costs $144/month, a fraction of what it would cost to maintain that setup yourself. Rumor has it that AWS may even be reducing that cost.
We run multiple clusters across several accounts at my organization, and they are all set up with the cluster-autoscaler for node scaling and pod autoscaling (HPA, VPA). This lets the clusters scale down to 1 or 0 worker nodes automatically when workloads are low or non-existent.
As for your internal network not being reliable: the EKS control plane is not affected by that, since it runs entirely outside of your network on AWS.
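In case it helps anyone reading along, a minimal sketch of the pod-autoscaling half of that setup (the deployment name web is just an example):

# Create a Horizontal Pod Autoscaler that keeps the deployment
# between 1 and 5 replicas, targeting 70% CPU utilization
kubectl autoscale deployment web --cpu-percent=70 --min=1 --max=5

The cluster-autoscaler then removes worker nodes that end up empty, which is how the node count drifts down to 0 or 1 off-peak.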
All very good points. I'll go ahead and close this, since it doesn't seem very useful in reality.
Hi, what is the state of this issue? Our team also needs to stop/shut down the EKS masters to save money. We are currently testing EKS to deploy our applications. (Coming from Fargate, which is not the same way to deploy apps.)
Thanks.
I doubt it makes much sense for AWS to stop and start the control plane, especially the etcd cluster, which is probably pooled hosting behind the scenes. But you could ask AWS to automatically charge less when zero nodes are registered.
@jason-riddle Would you mind reopening this issue? I understand the points against it but I would still be interested in AWS potentially supporting it if they could - and they haven’t said no yet ;)
Reason being it allows flexibility for smaller dev clusters - even potentially allowing any of our devs to spin up clusters - without costs getting out of control. This isn’t a concern on GKE as the control plane is free.
I agree. I am a consultant, and it would be nice if I could shut things down for the evening and spin back up in the AM, or tear down and rebuild every day, which eats up another 20-30 minutes minimum.
This would be super useful. It was mentioned that a single cluster is around $140/month, which is reasonable, but if I maintain arbitrary clusters in the tens, that adds up to a substantial cost. Currently, I tear down and recreate clusters from scratch, which is extra effort on my side.
GKE doesn't charge for the control plane, just sayin' AWS.
Google also has no SLA on GKE, and the same goes for AKS. So if the master is not available, you won't get any money back. You get what you pay for.
Strongly desire the ability to "stop" a cluster (just like you do with EC2 instances) so that dev/sandbox/experimentation instances can sit overnight/weekends without wasting money.
Because the author of this issue hasn't reopened it, and I don't think AWS have seen it yet, I've re-created this issue fresh at #318.
I can't find any option for shutting down and starting up the masters on demand.
To save money, you can set the desired capacity to 0 on the EC2 Auto Scaling group for the worker nodes.
Does anyone know how to restart the master nodes using kubectl or from the AWS console?
I'm just getting up to speed with EKS through this workshop: https://www.eksworkshop.com/ and thought it would be great for an individual who is just learning EKS to be able to easily deactivate or hibernate it. As mentioned above, the data plane can easily be scaled down via the Auto Scaling group, but unfortunately this is not possible for the control plane. If I wanted to play with EKS for a month just to get better at it on a very small scale, that $144/month isn't low. The alternative is tearing everything down and rebuilding every morning.
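If it helps, that rebuild loop can at least be scripted. A rough sketch with eksctl, where the cluster name dev-sandbox and the node count are just examples:

# Evening: delete the whole cluster, control plane included,
# so nothing bills overnight
eksctl delete cluster --name dev-sandbox

# Morning: recreate it (takes roughly 15-20 minutes)
eksctl create cluster --name dev-sandbox --nodes 2

You lose any in-cluster state, of course, so this only works if everything you need is recreated from manifests.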