Containers-roadmap: [Fargate] [request]: Allow to increase container disk space

Created on 21 Jun 2019 · 16 comments · Source: aws/containers-roadmap

Tell us about your request
It looks like the disk volume one gets in Fargate is ~10 GB. It would be helpful to be able to configure its size in the Task/Container configuration. In our case, we would like to go as high as 150-200 GB.
(link to doc confirming this limit: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_limits.html)

Which service(s) is this request for?
Fargate

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We have a service with a fairly simple flow: it downloads a zip file, extracts useful information from it, and publishes that information to an external database. The problem is that those zip files can be up to 100 GB in size.

The full flow we tested is: messages come into an SQS queue -> a Lambda reads from SQS and invokes a Fargate task -> Fargate does the job

The workload is very spiky - sometimes we have dozens of files coming in at the same time and sometimes we have nothing for a day or two. So the Lambda+Fargate combination works perfectly for us: we process jobs as soon as we get them, without any wait, but at the same time don't pay for any resources when we have nothing to process. And nothing to manage, either!
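The Lambda-to-Fargate hand-off described above can be sketched with boto3. This is a minimal illustration, not the poster's actual code: the cluster name, task definition, container name, environment variable, and subnet ID are all placeholder assumptions.

```python
# Placeholder identifiers -- substitute your own cluster/task definition.
CLUSTER = "zip-processing-cluster"   # assumed name
TASK_DEF = "zip-processor"           # assumed task definition family
SUBNETS = ["subnet-0123456789abcdef0"]  # assumed subnet ID

def build_run_task_args(zip_url: str) -> dict:
    """Build the ecs.run_task arguments for one SQS message (one zip file)."""
    return {
        "cluster": CLUSTER,
        "taskDefinition": TASK_DEF,
        "launchType": "FARGATE",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": SUBNETS,
                "assignPublicIp": "ENABLED",
            }
        },
        "overrides": {
            "containerOverrides": [
                {
                    "name": "worker",  # assumed container name
                    # Pass the file location to the task as an env variable.
                    "environment": [{"name": "ZIP_URL", "value": zip_url}],
                }
            ]
        },
    }

def handler(event, context):
    """Lambda entry point: launch one Fargate task per SQS record."""
    import boto3  # AWS SDK; provided by the Lambda runtime
    ecs = boto3.client("ecs")
    for record in event["Records"]:
        ecs.run_task(**build_run_task_args(record["body"]))
```

Keeping the argument construction in a pure function makes the launch parameters easy to inspect and test without touching AWS.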

Are you currently working around this issue?
The old version of our service runs on multiple EC2 instances. For the new version we're currently evaluating options, and since Fargate cannot be used we're looking into EKS.

Additional context
From other requests in this area it looks like you're already working on the option to attach EBS/EFS volumes, but in our case we don't need persistent storage - we just need more space which is created and destroyed with the container.

Fargate Proposed Work in Progress


All 16 comments

I second that! I really enjoy using ECS with Fargate. Not having to manage the underlying infrastructure is really fantastic.

Tell us about your request
We have worker images that contain large machine learning models. We want to scale up and down based on demand, and it would make our lives so much easier if we could use Fargate for that.
The Docker image is currently over 15 GB, which runs into the container storage limits.

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We want to easily scale up and down based on demand. Currently we prefer to use managed services as much as possible, to minimize the management overhead. While we could use ECS with EC2, Fargate is the preferred option at this point.

Are you currently working around this issue?
Currently evaluating the options - maybe going for ECS with EC2.

Other requested ways to increase Fargate storage include mounting EFS or mounting S3 storage
Ref #53 #412

> Other requested ways to increase Fargate storage include mounting EFS or mounting S3 storage
> Ref #53 #412

Yes, but for our use case the ability to increase disk space directly would be preferred because (a) it means fewer components to provision/manage/delete, and (b) it is unclear what the read/write speed of an attached EFS/S3 volume would look like.

We would also be interested in this. Our use case would be deploying and running a large bundled application (15GB+) via our Jenkins CI on ECS/Fargate. Currently we are too limited on space to achieve this.

Yes, we too have a use case for this: we are currently performing some ingest tasks in Fargate on new media files as they are uploaded to an S3 bucket, and the existing (small) storage limit is restricting the sizes of files we can process.

Our image file is quite small. We basically just need slightly larger (ephemeral) storage to download the file to and output derivative thumbnails to before saving them to S3.

Pretty much exactly the same scenario as the OP (Xantrul).

The ability to choose from predefined storage options (just as CPU and memory are chosen for Fargate) would be a perfect implementation for this. We already choose the task memory allocation based on the input file size, and it would be trivial to also spec the storage allocation in the same logic.
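The tier-selection logic this commenter describes could look like the sketch below. The tier values are illustrative assumptions (the field shape mirrors the `ephemeralStorage` / `sizeInGiB` setting that ECS task definitions eventually gained for Fargate, but nothing here is taken from the original post).

```python
import math

# Hypothetical predefined storage tiers in GiB, analogous to Fargate's
# fixed CPU/memory combinations. These values are illustrative assumptions.
STORAGE_TIERS_GIB = [20, 50, 100, 150, 200]

def pick_storage_gib(input_file_bytes: int, overhead: float = 2.0) -> int:
    """Pick the smallest tier that fits the input file plus working space.

    `overhead` budgets for the downloaded file plus derivatives/temp files.
    """
    needed = math.ceil(input_file_bytes * overhead / 2**30)
    for tier in STORAGE_TIERS_GIB:
        if tier >= needed:
            return tier
    raise ValueError(f"input needs ~{needed} GiB, above the largest tier")

def ephemeral_storage_override(input_file_bytes: int) -> dict:
    """Task-definition fragment shaped like ECS's ephemeralStorage field."""
    return {"ephemeralStorage": {"sizeInGiB": pick_storage_gib(input_file_bytes)}}
```

The same sizing function that already drives the memory choice could simply be called a second time for storage, which is the "same logic" the comment asks for.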

Thumbs up for this request. We are using Fargate and love it.

We would like to be able to increase storage beyond 10 GB, or even map task storage to S3.

I was linked to this issue by AWS' support staff when I asked them how to increase the volume size for a Fargate container. Needless to say, I was equally disappointed that this isn't already an option. In my case I was writing a Step Functions state machine that transforms data through multiple phases (it launches Glue Scripts, Lambda functions, and yes - Fargate containers). I ran this pipeline today and quickly bumped into the disk space limit for one of the Fargate containers.

We have a running EKS cluster for some related work as well, so I could write it as a Kubernetes job and then use a Lambda to launch it, and make use of the Step Functions SDK to monitor progress. Or, I could write a bunch of Terraform to convert my ECS cluster into a managed one instead of relying on the defaults and ease of use that Fargate provides. There are probably other ways to go about it, too.

But regardless of which alternative path I choose, it's adding several days to my development cycle, when I was expecting to just be able to set a property in my task definition and be done with it. I am very surprised this isn't supported.

We are also running into this issue. Very surprised and annoyed that there is a hard limit here.

We have this problem and it is quite blocking. Could you let us know if you're working on it, please?

Same thing here. Fat container image on EKS. Fargate node can't pull.

Same with us: we have containers that need to run machine learning models and would need more than 15 GB of storage. Currently tasks are failing with the error
"CannotPullContainerError: failed to register layer: Error processing tar file(exit status 1): write /app/data/model_title_dmm_concat_1.trainables.syn1neg.npy: no space left on device"
Increasing this limit would be great. Any idea when this is planned to be addressed?

+1

Probably still not enough for most use cases, but Fargate platform version 1.4.0 now offers 20 GB of storage space: https://aws.amazon.com/blogs/containers/aws-fargate-launches-platform-version-1-4/

Can we extend this to the EC2 launch type as well?
It can generally be applied using --storage-opt size=<size> with the docker run command, as stated here

We'd like a configurable amount of disk space above 20gb on Fargate on EKS.

