Containers-roadmap: [Fargate] [Volumes]: Allow at least EFS mounts to Fargate Containers

Created on 13 Dec 2018  ·  154 comments  ·  Source: aws/containers-roadmap

Tell us about your request
Allow mounting of at least EFS volumes (if nothing more generic or extensible) onto Fargate tasks.

Which service(s) is this request for?
Fargate

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We have plans to migrate to Fargate en masse. We're 100% in agreement with having stateless containers talk to stable external storage over the network (S3, DynamoDB). We'll get there sooner or later. You won't lose business without this.

This is an empathetic ask - if we could mount at LEAST EFS volumes to support those external workloads (stuff we don't build, but rather download), it would allow a large lift-and-shift to Fargate, getting rid of Docker for AWS and ECS and giving us one consistent team-wide technology to consume while we factor out those dependencies cleanly.

Are you currently working around this issue?
We use Docker Swarm via the old Docker for AWS (DFA) CloudFormation stack. We looked into ECS before volume plugins were supported, and just the 2-3 levels of steps were awful (create volume, mount on host, remember where it is on the host, launch task, mount directory to volume, mount volume to container).
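
For anyone unfamiliar with that older workaround, here is a minimal sketch of the ECS-on-EC2 approach being described (the filesystem ID and paths are placeholders; the exact steps depend on your AMI):

# EC2 container-instance user data: mount the EFS filesystem on the host
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs

# The task definition then bind-mounts the host path, e.g.
# "volumes": [{ "name": "efs", "host": { "sourcePath": "/mnt/efs" } }]
# and each container adds a mountPoint with "sourceVolume": "efs".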

Labels: ECS, EKS, Fargate, Work in Progress

Most helpful comment

I can confirm that we are working on this feature. When it launches, ECS task definitions will include additional options for volumes. These will be available on both EC2 and Fargate launch types. The snippet below is not necessarily the final design but it is representative of the current thinking - please let us know if you have questions or comments! When we get closer to launch, we will move the github issue to 'coming soon'.

(Note: the transitEncryption and readOnly fields are optional).

{
    "family": "my-task-with-efs",
    "volumes": [
        {
          "name": "myEfsVolume",
          "EFSVolumeConfiguration": {
                "filesystem": "fs-1234",
                "rootDirectory": "/path/to/my/data",
                "transitEncryption": "tls"
            }
        }
    ],
    "containerDefinitions": [
        {
           "name": "container-using-efs",
           "mountPoints": [
               {
                   "sourceVolume": "myEfsVolume",
                   "containerPath": "/mount/efs",
                   "readOnly": true
               }
        ]
    }
   ]
}
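
If the design ships roughly as proposed, registering such a task definition would presumably just be the usual CLI call (the file name here is only an example):

# Hypothetical registration of the snippet above, saved as my-task-with-efs.json
aws ecs register-task-definition --cli-input-json file://my-task-with-efs.json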

All 154 comments

ECS now allows volumes to be mounted at the task level, not just on the host.
Check it out.

@FernandoMiguel : Does that mean that EFS is now supported by Fargate?

Thanks everyone for this request. It would really be awesome if you could give us a little more detail about your need for this feature: For example, which workloads / applications that require EFS would you want to deploy on ECS? Would also love to hear about any potential use-cases or interests in using the newly released FSx file system.

We'd very much like to see support for EFS in Fargate. I imagine there's a multitude of applications - the one we have is that we (Idealstack) are doing website hosting in ECS, and want to support common PHP-based web apps such as WordPress, Drupal, people's custom PHP code, etc. These typically require shared storage if you want to cluster them and autoscale, and don't support S3 in general. So, for instance, the AWS reference architectures for WordPress, Magento, and Drupal all use EFS.

But I would imagine that in any situation where users want persistent storage with Unix filesystem semantics, EFS is going to be helpful, particularly where you are moving existing apps into Fargate. There's a lot of frustration on the internet over the lack of EFS support in Fargate, dating back to when it was first released. FSx filesystems aren't something we use, but they would probably have similar applications, as would EBS support. Something similar to the new volume driver support for (non-Fargate) ECS would be great, even if only certain AWS-managed drivers supporting a few common targets such as EFS, EBS and/or FSx were available.

With EFS in Fargate we could support more effective autoscaling of these kinds of apps compared to EC2-based architectures (since a container can boot in seconds, versus minutes to create an instance and add a container instance in ECS). This would be a killer feature for our product. Lack of EFS support stops us from supporting Fargate at the moment though.

Adding to @jonathonsim - there are quite a few open-source services that are not built cloud-native:

  • jenkins (the primary/master, not nodes)
  • grafana (or many other graphing tools)
  • logstash/beats

Many of those tools just need persistent storage for minimal writes.

As the original author, I'll give you some of my use-cases:

  1. Private Docker registries (unless ECR allows us to host public/private repos for external distribution) like Harbor or Nexus. They can store blobs to S3, but still need a filesystem for state/config.
  2. Other legacy software that reads/writes config/state to/from disk - Java Webservers, etc.
  3. SQL databases. I know, I know, Aurora and RDS, but just trust me on this. We are a cybersecurity company, so we occasionally need to host containerized databases to run WordPress against. EFS would allow the database to be persistent, while Fargate would allow us to test various SQL injection scenarios and mitigations against it.
  4. WordPress plugins generate new PHP code on the fly. This can't go to S3. Can't go to DynamoDB. It needs to be persisted on a filesystem.

The counter-pressure on why NOT to use ECS: that escape hatch becomes an opportunity for hard-work-creep. It opens up new AMIs, custom Linuxes, host drivers, firewalls, authentication, and more. That's too much surface area to open up just to give someone the ability to mount state storage so their WordPress plugin can generate code on the fly.

Our use case would be mounting a volume read-only which contains static data (in our case the reference genome) instead of having to put this data in the image or download it from S3 every time we start a container.

This would be extremely helpful. We have a task that we'd like to run in Fargate that currently involves pulling around 30GB of data in from S3 each time it runs; we can do this in EC2 or on ECS containers, but it would save us a ton of headaches if we could load it directly into a Fargate volume.

Yes, this would be awesome! Even this AWS whitepaper on WordPress "best practices" in the "stateless web tier" references using EFS to store plugin files https://d1.awsstatic.com/whitepapers/wordpress-best-practices-on-aws.pdf

Exactly what we are trying to achieve with Fargate. This would allow [plugin/cms] updates to happen on the fly [by WP admins].

Exactly what we are trying to achieve with Fargate. This would allow updates to happen on the fly.

That is actually something that you don't want, because then you don't have atomic deployments.

But EFS in Fargate does have use cases. For example, a shared file cache for compiled templates.

What we need is for WordPress admins to be able to add plugins on their own. Even the whitepaper suggests this in a "stateless web tier".

+1 here, we've been waiting for this feature since Fargate was launched. We have lots of applications with a high ratio of connections waiting to be migrated into Fargate to make use of the autoscaling feature, but we need to be able to mount the same EFS volume to these services in order to share some information between containers.

ECS now allows volumes to be mounted at the task level, not just on the host.
Check it out.

That feature isn't supported currently by Fargate.

In addition @abby-fuller IMHO it should be labelled as Fargate.

Having EFS + Fargate would allow me to shave 5-6 minutes off my deployments. It would also provide extra flexibility/agility for adjusting configs during triage time.

I could deploy to the EFS mount with the ability to do instant upgrades/rollbacks through something like Deployer, rather than needing to build a new Docker image, deploy that to an ASG/cluster with hardcoded configs baked in, and wait for the replace action to occur in CloudFormation.

Our use cases involve running stateful workloads, such as cluster state (software requires file systems), custom databases, and CloudFoundry migration -- for workloads requiring file systems.

Would love this feature, though we are looking at a more mature EKS as well.

Hi, I'd like to ask if there is an ETA for this. Fargate is one of the platforms that we are looking into for migrating our service and NFS/EFS support is a very important feature that our service uses; and knowing the ETA will be very helpful for planning our schedule. Thanks.

It would be really helpful to know more about EFS/NFS volume mounting on containers running on Fargate, and if this is going to be implemented in the near future. I am currently looking for a solution to connect jupyterhub to fargate. For data scientists to save their notebooks in their jupyter instance (running in the container), we need to mount a volume. So they can continue their work the next time they log in. Other options would need us to keep a large EC2 instance running all the time. This would cost a lot for every new user.

It would be really helpful to know more about EFS/NFS volume mounting on containers running on Fargate, and if this is going to be implemented in the near future. I am currently looking for a solution to connect jupyterhub to fargate. For data scientists to save their notebooks in their jupyter instance (running in the container), we need to mount a volume. So they can continue their work the next time they log in. Other options would need us to keep a large EC2 instance running all the time. This would cost a lot for every new user.

I'm doing something very similar with RStudio...

AWS says they are working on EFS support for ECS with Fargate, if I may believe this post: https://forums.aws.amazon.com/thread.jspa?messageID=816397&tstart=0

This feature would allow for rehosting of existing applications with minimal effort. Desperately required!

I would like to operate a FTPS/SFTP server with scalable/durable storage.

Do we have any ETA for that??

We really need this feature. ECS on EC2 is a headache - autoscaling groups, cloud-init, mount directory to host, mount to task... Awful.

ECS on EC2 is a headache - autoscaling groups, cloud-init, mount directory to host, mount to task... Awful.

@teamfighter awful? It was 4 lines of code for EFS, and another 4 for the task definition.

@FernandoMiguel I didn't say it's impossible. I said it's an uncomfortable solution.

Has anyone tried to mount an s3 bucket inside a container running on fargate with s3fs? This may be a (temporary) solution to persist files to s3. I am currently using s3fs to mount/share files between ec2 instances, and it works like a charm!

Has anyone tried to mount an s3 bucket inside a container running on fargate with s3fs? This may be a (temporary) solution to persist files to s3. I am currently using s3fs to mount/share files between ec2 instances, and it works like a charm!

pretty PLEASE don't use s3fs.... S3 is object storage... trying to treat it as persistent storage is a terrible idea @juultje123

As long as it is only a few hundred MBs, I don't see a problem. It is not the ideal solution, but if you need some kind of persistence this will probably do the trick. I would not trust database storage to it with high read/write volumes.
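
For reference, this is roughly how s3fs is typically used on an EC2 instance (the bucket name and mount point are placeholders). Note that FUSE mounts generally need extra privileges (SYS_ADMIN / access to the FUSE device), which Fargate does not grant, so this is not a drop-in workaround for Fargate tasks:

# install s3fs-fuse first (e.g. from EPEL or source, depending on the distro)
mkdir -p /mnt/s3
# mount an S3 bucket as a filesystem, using the instance's IAM role for credentials
s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other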

Hosting an EFS volume inside a Fargate container would be great. We're using Airflow and I'd like to host the DAGs (task definitions, more or less) in a volume that all of the containers can share. Deploying a new DAG would mean just copying the files into the volume and not having to re-deploy all of our containers.

ECS on EC2 is a headache - autoscaling groups, cloud-init, mount directory to host, mount to task... Awful.

@teamfighter awful? It was 4 lines of code for EFS, and another 4 for the task definition.

ECS is a very leaky abstraction. Why do I have to manage my own VMs and autoscaling when I just want containers? You can make it work but the volume of work required is easily 5-10x what you need for Fargate.

Please add this, we'd like to move our CI to Fargate but it's not possible now. Jenkins uses the filesystem and they won't rewrite the whole architecture to use S3, not going to happen. EFS would solve this issue very cleanly.

We currently use EFS to share web files among multiple web servers running on ec2.

This is the only feature we're missing for us to be able to move our application to Fargate.

@ddiazboxy Exact the same case for us

+1 to this feature.

+1

@chris-ch @chwer @marcossv9 @reidadam

Guys, just 👍 the original post for this issue. +1 comments are just spam, and if they sort issues by popularity to prioritize, your +1s won't make this issue appear in their reports.

Is this actually on the roadmap to be done anytime soon?

@hjames9 you can tell they're working on it by looking at the roadmap:
https://github.com/aws/containers-roadmap/projects/1

It would be possible to mount EFS / NFS volumes if we could run container as privileged or with SYS_ADMIN capability (as one can do in ECS).

One must wonder if mounting NFS really qualifies as an administration task.. perhaps the Docker / Linux view on the matter is a bit too strict..

@dror-g It is not because it is NFS, it is because it is a (kernel) mount, a kernel file system driver (by default), and a privileged service port (by default). If you move/proxy your NFS server to an unprivileged port and access it with a user-space driver/client without mounting it, you can go ahead now. Mounting in user space may be possible one day; there is experimental support for it in Linux.
https://lwn.net/Articles/755593/
https://www.phoronix.com/scan.php?page=news_item&px=Linux-Unprivileged-FUSE-Almost
https://github.com/sahlberg/fuse-nfs

+1 :D

I have another use case for a clustered app that needs to write logs to a central place, from where they get moved to ELK. Seems impossible to do this because of this limitation.

With EBS I cannot share it across Fargate containers. With EFS I cannot mount it within a Fargate Container.

I have another use case for a clustered app that needs to write logs to a central place, from where they get moved to ELK. Seems impossible to do this because of this limitation.

This is one of those use cases that shouldn't use EFS at all. Why don't you write the logs to kinesis firehose or something similar?

I have another use case for a clustered app that needs to write logs to a central place, from where they get moved to ELK. Seems impossible to do this because of this limitation.

This is one of those use cases that shouldn't use EFS at all. Why don't you write the logs to kinesis firehose or something similar?

Fair point except in this case I am tied to workings of an application that is very hard to change.

What is the status of this feature request? Many customers would like to have it, they won't use Fargate without persistent storage.

+1

This feature would help us immensely. Right now we are stuck using EC2 for half of our ECS containers, and it is a pain in the ____.

+1

Will EBS/local persistent volumes be included with this EFS change?

Will EBS/local persistent volumes be included with this EFS change?

EBS is something different from EFS. EBS will never be supported by Fargate.
EFS is a network persistent volume.

+2
I have two customers that are waiting for EFS on Fargate to become available.

EFS + Fargate - that would be great.

I think this would be huge. Currently, if I want to run anything that needs storage (for example, a simple Gerrit or Bitbucket), I need an EC2 instance (ECS). Which means I need to build an AMI to meet security requirements, I need log shipping from the instance, and I need to manage SSH keys. Then I need to run the container on top of it.

If I could mount EFS directly, this would make Fargate a great solution.

"For example, for every person who posts on a forum, generally about 99 other people view that forum but do not post." (via wikipedia)
Source: https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)

Count me and my 99 silent constituents in on the EFS+Fargate request.

K, thanks.

Hi,

I didn't understand from this thread whether this is available or not...

Fargate + EFS would be great

If I could +10 this I would.... glad to see it in the "We're working on it" bucket - many container solutions need persistent file volumes - Fargate needs an answer to this - and mounting EFS locations seems like the peanut butter to Fargate's chocolate...

Huge +1 to this - we need some ability to use larger files

+ a million for AWS to take this public request seriously

Guys, the button for this is at the top of the issue:
[screenshot of the 👍 reaction button at the top of the issue]

Responding when you've got nothing to add just adds noise for everyone subscribed to the issue, and doesn't contribute anything to AWS prioritising it.

I know AWS is data-driven and all, but I can offer an insight into why people comment despite knowing the thumbs-up is available: to express internal priority, and to express the size of that internal priority.

Let's say AWS has 10 issues, and there are 10 total customers who want all of them. This is what AWS sees:
Issue 1: 10 thumbs up
Issue 2: 10 thumbs up
Issue 3: 10 thumbs up

and so on...

AWS is working on every single issue in priority order, and with complete honesty.

This is what each customer sees:
Customer 1: Issue 8 - 80%, Issue 1 - 15%, Issue 3 - 3%, ....
Customer 2: Issue 3 - 20%, Issue 8 - 10%, Issue 6 - 8%, ....

and so on....

So I get that GitHub has a feature, AND AWS is honest in responding to it, and still there can be missing data that people want to add. This would help AWS too - if AWS were building features based on numerical priority alone, all of Twitter's opinions would change AWS's direction. AWS wants to know which feature unblocks the most customer potential, and in turn unblocks the most consumption of their services, and in turn makes $$$.

I would contend this issue would unblock so much usage of Fargate it's not even funny, and thumbs-ups aren't capturing it, and people just want AWS to know that.

@archisgore: I agree that this is an important issue but spamming everyone with +1 doesn’t convey any of that extra value. I don’t mind messages where someone takes the time to describe a use-case which wasn’t previously covered or, much better, includes what they told their account representative. We’ve definitely told ours about some of the specific services (Prometheus, Solr, cache volumes for image services, etc.) we would be able to migrate if support was added for EBS or EFS.

If you would like to do more to support a feature request than thumbs-up the issue (and subscribe), I suggest talking to your AWS account manager, filing a support request for the feature, or posting in the AWS forums to describe your pain-point / use-case. Large AWS customers have priorities too, but my guess is they are not expressing them here in GitHub issues 😄 So please thumbs-up here, in the community, then go also leverage your $$$-relationship with AWS by submitting a feature request via support.

Is that a nice way of saying, "You don't matter enough"? Github peasants should express data, but only through upvotes. If we REALLY had any $$$ to spend, we'd have this thing called an "Account Manager".

@acdha I agree. Perhaps a constructive stock-message would be:

  1. Thumbs-ups are being heard, so if you only want to indicate you'd like it, please add a thumbs-up.
  2. We also recognize that may not cover the full context, so if you do comment, please add an additional use-case, the size of the opportunity, or the blocker. That would help de-conflict two issues with the same thumbs-up priority but a larger impact.

Hi everyone, I am a Principal Product Manager at AWS for container services, including ECS and Fargate. As many of you know already, we do pay close attention to this github roadmap and it is an important source of feedback for us. The number of thumbs up on the issue is one source of data. Additional comments with use cases and constraints are also very helpful and are always welcome. Thank you!

We are heavy users of Kafka and now AWS MSK.

Our apps use the Kafka Streams API, which has various local state stores to hold relevant state for streaming applications. This state can be either in-memory OR use RocksDB to cache the state locally. These state stores can easily exceed 10GB, which is the current Fargate limit.

We don't personally care about retaining the state across restarts; we treat the stream state as temporary, and Kafka Streams will happily rebuild the local state very efficiently between restarts. It really is just a cache for the current container instance, but that state will grow above 10GB.

At present, we have alleviated the need to grow our disk beyond 10GB for our current customers. We are currently onboarding a customer, however, that we expect will have enough transactions to easily exceed our 10GB cache. When we hit that point we need to move those services back to raw ECS.

In terms of EFS/EBS, we probably don't care so much, but RocksDB is reasonably heavy on the IOPS. We were previously provisioning EBS based on IOPS in ECS land.

@ChadHarrisVerr, I'm not familiar with the setup you have, but does it have native support for caching systems such as Memcached or Redis? If so, you can look into Amazon ElastiCache. I've used that in my own Fargate setup for PHP session data.

Thanks for the reply, but sadly no. The Kafka Streams API currently only supports either in-memory or RocksDB state stores.

There was a post regarding non-cloud native developed applications like Drupal and Wordpress. Moodle is another one. We use all of these applications to host multiple different content-management and collaboration applications. It is expensive and time consuming to manage the host fleet for these ecs clusters using autoscaling groups. I don't want to worry about whether my group has enough hosts, or this cluster or that cluster needs adjustment to my autoscaling settings. And I don't think we should have to given the obvious feasibility of Fargate as a service.

All we need is to be able to add a little persistent storage to the mix and our lives become simpler, better, and more profitable to AWS. If we get another application that needs to get hosted, it takes twice as long as it should to put it into a properly scaled and scaling ECS-EC2 host cluster. Fargate+Persistent storage means I spend half the time and can handle twice the new collaborative application requests in idealized circumstances.

Upvote for EFS mounts on Fargate - it would be great to be able to run services like Jenkins and other things that require this functionality.

I have liked the top of this post. I and multiple DevOps engineers at my company ($3.9 billion educational technology firm) have been waiting for this since Fargate came out. There are many use cases for persistent and/or shared volumes, and S3 is just too slow to be a viable solution. We hope for a near-term solution to address the need for a shared volume across multiple applications currently managed in EC2/ECS with Lambdas for graceful auto-scaling needs. We have discussed the need with our TAM and will be escalating this request soon, as we really want to get 20+ applications already in production migrated to Fargate.

Ditto all of the above....AWS Engineers should be able to see the plethora of use-cases for need of Shared Volumes in Fargate.

You can see the current status here: https://github.com/aws/containers-roadmap/projects/1?card_filter_query=53
Currently: We're Working On It

Two of our services need to download files from s3, process them and then return them back to s3.
Those files are large, well beyond Docker / Fargate volume limits.
These are the last two of 30 services that we couldn't Dockerize, the only reason being the lack of persistent storage support in Fargate, and they are holding back the release of the 28 remaining services.
So this feature is essential for us.

I was trying to get Nextcloud working on Fargate and wasn't aware of this limitation. So much wasted time 😢 I bet if I go another route now, this feature will land the next day.

I'm trying to run an open source database on Fargate. The ability to have data stored locally is, obviously, critical. Fargate is dead in the water for me with its 10GB of storage and no upgradeable capacity.

Running a database over NFS might not be the greatest idea.

edit: Well, maybe so. It would solve my use case though. You're right that something like EBS would be better.

Here is a use case where the persistent data _is_ stored in S3: We use a commercial Git LFS application that is distributed as a container and uses S3 as a storage backend. When doing a git lfs commit, the files must first be stored on the container filesystem before being moved to S3. We have single files larger than 10GB, which excludes us from using Fargate. This pattern is fine with a sufficiently large EBS volume on an ECS EC2 instance, but it introduces maintenance and engineering overhead.

We run Atlassian's Jira self-hosted, so I'd prefer to use Fargate so that I don't need to manage the system. I'm currently using EC2 with EFS for that.

I think the original vision for fargate was something a little more on the ephemeral side, but it also seems that the concept of managed host clusters for containers is just too much of a good thing to limit to ephemeral use cases. I'd host a lot more things on fargate if I could. Gitlab installations, Solr servers, you name it. If it has persistent storage needs, I host it somewhere I can do that, if it doesn't, it goes into fargate.

We use ECS and EC2 instances to process large data files (>10GB) as they come in. It would be nice to use Fargate and EBS or EFS volumes to do this without having to worry about EC2 autoscaling during idle periods. Most, if not all, of our tasks are called from SWF and work nicely with the 'task'-based processing.

Right now we can't use Fargate (or Lambda) for processing these large files, and things sit idle when not being used.

The lack of this feature means we will need to go with EC2 for ECS which is a shame as Fargate is exactly what we wanted. But the application is a Drupal CMS, and EFS is a must have.

Does anybody know if this feature is like 2 months out, or 6 months out, etc?
Thank you.

The lack of ECS/Fargate team feedback is a little disheartening. The roadmap board still just says "we're working on it" with no updates or information. The issue is fast approaching a year old, with very little activity or feedback provided from the AWS teams involved. Throw us some bones friends.

@jestrjk, totally agree with you.
At AWS Summit Warsaw (in May) I spoke about this feature with AWS architects and got the answer "very soon".
That "very soon" was almost 5 months ago already, so I would love to hear more, because I believe it is a really important feature (at least for me :D ).

Please update ETA for this feature. It helps us to make some decisions.

I can confirm that we are working on this feature. When it launches, ECS task definitions will include additional options for volumes. These will be available on both EC2 and Fargate launch types. The snippet below is not necessarily the final design but it is representative of the current thinking - please let us know if you have questions or comments! When we get closer to launch, we will move the github issue to 'coming soon'.

(Note: the transitEncryption and readOnly fields are optional).

{
    "family": "my-task-with-efs",
    "volumes": [
        {
          "name": "myEfsVolume",
          "EFSVolumeConfiguration": {
                "filesystem": "fs-1234",
                "rootDirectory": "/path/to/my/data",
                "transitEncryption": "tls"
            }
        }
    ],
    "containerDefinitions": [
        {
           "name": "container-using-efs",
           "mountPoints": [
               {
                   "sourceVolume": "myEfsVolume",
                   "containerPath": "/mount/efs",
                   "readOnly": true
               }
        ]
    }
   ]
}

That is good news. I assume EBS support will be a part of this work?

That is good news. I assume EBS support will be a part of this work?

EBS support is not included in this feature - this is EFS only.

There is another issue for EBS support: https://github.com/aws/containers-roadmap/issues/64

If you are interested in EBS support, please comment on that issue, thanks!

I’m so happy to see this is planned for both EC2 and Fargate tasks!

I assume the rootDirectory is a path inside the EFS volume and not where to mount the volume on the host (since you wouldn’t know where to mount things on Fargate). If so, out of curiosity, where does the EFS volume get mounted on EC2 hosts?

@borgstrom Correct, similar to this:

mount -t efs fs-abcde:${rootDirectory} /path/to/local/mount
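
For hosts without amazon-efs-utils installed, the equivalent plain NFSv4 mount (using the options AWS documents for EFS) would look roughly like this; the region in the DNS name is a placeholder:

# Plain NFS equivalent of the efs mount helper shown above
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-abcde.efs.us-east-1.amazonaws.com:/path/to/my/data /path/to/local/mount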

If so, out of curiosity, where does the EFS volume get mounted on EC2 hosts?

We're still working out the details, but the pattern should feel familiar. We'll create an nfs/efs mount on the host that can then be mounted into the container(s) that use it.

Will EBS/local persistent volumes be included with this EFS change?

EBS is something different from EFS. EBS will never be supported by Fargate.
EFS is a network persistent volume.

@bordeux Looks like you're wrong about this.
https://github.com/aws/containers-roadmap/issues/64

EFS is so slow and limited and expensive. We really need support for a selection of volume plugins such as netshare for SMB shares, EBS, and SSHFS.

EFS is so slow and limited and expensive. We really need support for a selection of volume plugins such as netshare for SMB shares, EBS, and SSHFS.

I thought @melaraj2 was _shitposting_, but we have to be honest; EFS performance is not ideal for many projects:

EFS is up to three orders of magnitude slower than the EBS counterpart from which you want to migrate [Lawrence McDaniel]

Anyway, allowing Fargate containers to mount EFS filesystems is HUGE progress, even just to execute tasks against the EFS filesystem.

Imagine you want to back up the EFS filesystem by syncing it to an S3 bucket (or run any periodic task over the EFS content). Instead of having to work out how to execute that task on one of the hosts that might (or might not) mount the EFS, or firing up an EC2 Auto Scaling group to mount the filesystem, run the task, and terminate, executing this kind of task in containers feels more natural.
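
To make that concrete, a minimal sketch of such a periodic backup task (bucket name and mount path are placeholders, and the image is assumed to have the AWS CLI installed) could be:

#!/bin/sh
# Entrypoint for a scheduled backup task: the task definition mounts the EFS
# volume at /mnt/efs, and the container simply copies it to S3 and exits.
set -e
aws s3 sync /mnt/efs s3://my-backup-bucket/efs-backup/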

Anyway, allowing Fargate containers to mount EFS filesystems is HUGE progress, even just to execute tasks against the EFS filesystem.

Imagine you want to back up the EFS filesystem by syncing it to an S3 bucket (or run any periodic task over the EFS content). Instead of having to work out how to execute that task on one of the hosts that might (or might not) mount the EFS, or firing up an EC2 Auto Scaling group to mount the filesystem, run the task, and terminate, executing this kind of task in containers feels more natural.

We actually tried to use EFS. We have hundreds of thousands of small files of 1 to 5KB each, and the backup process was taking longer than a day. It was impossible to use; even when you increase the provisioned throughput it makes very little difference, and because pricing is set by throughput consumption, it was crazy expensive.

For Fargate to be complete, it needs the ability to do storage plugins, most importantly EBS. And since I am venting: why is EBS still only in one AZ? It's over 10 years old, and AWS has storage technology in Aurora that spans zones, so when is EBS going to cross zones? It would really help deployment of highly available systems like MongoDB.

Allowing Fargate containers to mount EFS filesystems is quite nice for anyone deploying Apache Airflow onto Fargate. You can easily share your DAGs across containers using the same filesystem. Without EFS, doing this on Fargate requires an extra container with mount points to share the files.

@hudsondba - The other option is to use cron to sync the DAGs from S3.

@ericandrewmeadows It's true, but it's not good practice to run cron jobs inside Docker containers. Today I'm solving this problem with another container inside the same task definition, sharing the same volume and syncing from GitHub.
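
A minimal sketch of that sidecar approach, assuming the shared volume is mounted at /dags and using a placeholder repository URL:

#!/bin/sh
# Sidecar container that shares the /dags volume with the Airflow containers:
# clone the DAG repository once, then pull changes periodically.
set -e
if [ ! -d /dags/.git ]; then
    git clone https://github.com/example/airflow-dags.git /dags
fi
while true; do
    git -C /dags pull --ff-only
    sleep 60
done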

We would really love to be able to mount EFS with Fargate for a particular use case -- it would really be ideal. We'll have to jump through hoops to do it a different way.

Is this days/weeks/months out?

any update aws team? maybe there will be some announcement at ReInvent this week? fingers crossed :)

Is there any update on EFS mounting with Fargate?

Any Update ??

@jjrdev @surajtikoo @monaneuro Every time you make a comment, you're sending mail to hundreds of people. If you're just asking for a status update, that doesn't help anyone — it would be much better to contact your AWS account rep instead.

There has not been a single commit since this issue was opened, and it perhaps means it's not even on the Fargate team's radar :disappointed: (I hope I am proven wrong by the team :wink: )

As the issue status says, we are actively working on this feature. We will share updates here when they are available.

As the issue status says, we are actively working on this feature. We will share updates here when they are available.

It would be good to provide some ETA on this, as many have asked before. It's been a year now, and somehow it's expected that everyone should just wait for updates (even an ETA)... indefinitely.

It would be good to provide some ETA on this, as many have asked before. It's been a year now, and somehow it's expected that everyone should just wait for updates (even an ETA)... indefinitely.

When the AWS team says they will do this, they will do it. You do not need to remind them about it every year :)

@bordeux good one :D ahahaha.... let's not remind them every year about that feature guys! Let them work on implementation.

It seems an EFS preview has been released in the latest ECS Agent:
https://github.com/aws/amazon-ecs-agent/pull/2301

So it is enabled for EC2 instances using the latest agent; with some hope, maybe we can expect it soon on Fargate as well 🤞

Update: we plan to support the newly-announced EFS access points and IAM authorization features in ECS.

Update: we plan to support the newly-announced EFS access points and IAM authorization features in ECS.

Does that mean we will finally be able to use efs as a volume in a fargate container?

It would be good to provide some ETA on this, as many have asked before. It's been a year now, and somehow it's expected that everyone should just wait for updates (even an ETA)... indefinitely.

When the AWS team says they will do this, they will do it. You do not need to remind them about it every year :)

Amen!

It would be good to provide some ETA on this, as many have asked before. It's been a year now, and somehow it's expected that everyone should just wait for updates (even an ETA)... indefinitely.

When the AWS team says they will do this, they will do it. You do not need to remind them about it every year :)

Amen!

Thanks for contributing!

Hi all - we have launched PREVIEW support for this feature in ECS with the EC2 launch type. During the preview, only EC2 launch type is supported. However, we are working on adding support for Fargate during this period. Here is the blog post announcing the preview:
https://aws.amazon.com/about-aws/whats-new/2020/01/amazon-ecs-preview-support-for-efs-file-systems-now-available/

When it becomes available for ECS on Fargate, the EFS volume configuration in the task definition will look the same whether you are running tasks on EC2 or Fargate.

This will allow more types of workloads on Fargate. Great feature, highly anticipated!

Is there an ETA on when EFS will be available on Fargate? Highly anticipating this feature as well

@abakonski literally a few posts above yours

[screenshot of the roadmap card showing the current status]

@coultn Hey: could you please consider locking this thread so that the >1000 of us can still get a notification when it's done, but we also don't get e-mailed every time some doofus decides AWS owes them, specifically, a timeline? Otherwise I'm guessing there's a good number of people like myself that are just going to unsubscribe and read the AWS blog instead which kinda defeats the purpose of having this GitHub Issues mechanism.

(For the love of all that is good and holy if you agree please express that with a reaction emoji and not a reply.)

@lvh it can sometimes be frustrating to get notifications with content irrelevant to you, but disrespecting other members in front of >1000 participants is unacceptable.

We are looking forward to this feature.

I already had EFS working on ECS with EC2 instances, via user data to mount the EFS and then using host volumes. What benefit does this new way give me? I was so hoping for a "Developer Preview" to drop for Fargate. Edit: easier config. I'll try out the preview.

The benefits of having it in the task definition are:

  1. No need to modify user data; the EFS volumes will “follow” your tasks around as needed.
  2. We are building in support for EFS IAM Auth and Access Points, which give you additional security controls on a per-task basis; this could not be achieved using instance user data or only managing EFS at the instance level.
  3. When we do launch support for Fargate, you can use the same task definition with EFS volumes and it will run on the EC2 or Fargate launch type (assuming compatibility with Fargate).
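
For reference, the volume configuration that eventually shipped exposes those per-task controls roughly along these lines (IDs are placeholders, and the field names differ slightly from the earlier preview snippet in this thread):

"volumes": [
    {
        "name": "myEfsVolume",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-1234",
            "rootDirectory": "/path/to/my/data",
            "transitEncryption": "ENABLED",
            "authorizationConfig": {
                "accessPointId": "fsap-1234",
                "iam": "ENABLED"
            }
        }
    }
]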

Hi all - we have launched PREVIEW support for this feature in ECS with the EC2 launch type. During the preview, only EC2 launch type is supported. However, we are working on adding support for Fargate during this period. Here is the blog post announcing the preview:
https://aws.amazon.com/about-aws/whats-new/2020/01/amazon-ecs-preview-support-for-efs-file-systems-now-available/

When it becomes available for ECS on Fargate, the EFS volume configuration in the task definition will look the same whether you are running tasks on EC2 or Fargate.

I am a little confused. I understand this is available for the EC2 launch type in preview, but this issue is specific to Fargate. So does it really make sense for this issue to be in the "developer preview" part of the roadmap?

@richardgavel just fully read: https://github.com/aws/containers-roadmap/issues/53#issuecomment-575807680

Hi Team,
It would be great if you can specify any date or timeline for this.

Adding my use case to try and raise the priority on this request.

There are tons of Docker images out on Docker Hub that assume you have part of the filesystem persisting as a mounted volume.

We're currently wanting to use Oathkeeper for one project, and RabbitMQ for another.

But the specifics of the project shouldn't really matter. Using volumes for persistence is a normal thing to do in Docker and would add great value to Fargate.

Finally got around to trying this and I'm getting an error: "EFS Volumes are not supported when networkMode=awsvpc"

Is this only temporary until actual Fargate support is in beta/released? I didn't see this limitation mentioned anywhere for the EC2 launch type.

@hlarsen see #53 (comment)

The EC2 launch type not supporting the awsvpc network mode isn't mentioned anywhere in that comment, nor in the blog post, which is why I asked. I'll go ahead and assume that awsvpc isn't supported with EFS on the EC2 launch type at this time, since it is the only Fargate networking mode and Fargate isn't yet supported.

When will it be available for Fargate?

I need it as soon as possible please

I'm testing a possible workaround.
My idea is to use AWS CodeBuild with a custom image; in CodeBuild it is now possible to attach EFS. If you only need to run background tasks, it is somewhat similar to Fargate.

Thanks everyone for this request. It would really be awesome if you could give us a little more detail about your need for this feature: For example, which workloads / applications that require EFS would you want to deploy on ECS? Would also love to hear about any potential use-cases or interests in using the newly released FSx file system.

In my case, I am trying to set up a pgAdmin4 web server in ECS Fargate, to act as a point of entry to our RDS Aurora Serverless Postgres 10.7 cluster (which cannot be publicly accessible). And the "official" pgAdmin4 image (https://hub.docker.com/r/dpage/pgadmin4) stores all users, user settings, etc. on the filesystem. So, currently, these are lost on every reboot/deploy of the container.

You could sync them to and from S3 in your entrypoint script. Far from ideal, I know, but it would work.

You could sync them to and from S3 in your entrypoint script. Far from ideal, I know, but it would work.

Thank you for making me aware of this possible workaround
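
A rough sketch of that entrypoint workaround, assuming the pgAdmin data lives under /var/lib/pgadmin, with a placeholder bucket name and an assumed path for the image's original entrypoint (and noting it won't save anything if the task is killed abruptly):

#!/bin/sh
# Restore previously saved pgAdmin state from S3 on start-up,
# run the image's normal entrypoint, then push the state back on exit.
aws s3 sync s3://my-config-bucket/pgadmin/ /var/lib/pgadmin/
/entrypoint.sh "$@"
aws s3 sync /var/lib/pgadmin/ s3://my-config-bucket/pgadmin/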

Super excited to announce that this is now generally available, including full support for ECS with both EC2 and Fargate launch types: https://aws.amazon.com/about-aws/whats-new/2020/04/amazon-ecs-aws-fargate-support-amazon-efs-filesystems-generally-available/

Excellent news, thanks for the update.

Amazing work. After a long time. Happy to hear it.

Thank you @coultn! Been waiting for this one for a while.

Great news!
Looks like you have to manually change the platform version on the ECS service to 1.4.0 - it uses 1.3.0 if I keep "LATEST". But I might have just been too fast :)
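
If it helps anyone else, pinning an existing service to the new platform version is a one-liner (cluster and service names are placeholders):

# Move an existing Fargate service to platform version 1.4.0,
# which is what EFS volumes require at the time of writing
aws ecs update-service \
    --cluster my-cluster \
    --service my-service \
    --platform-version 1.4.0 \
    --force-new-deployment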

@coultn After reading the blog post, it suggests that this support is for ECS-only workloads, to cover both EC2 and Fargate launch types. No mention is made of Fargate workloads in EKS. So, I'm assuming that's out of scope for this release? If so, that's fine. Just trying to clarify because I also see the EKS label on this issue and the issue was closed.

Great! Is there a corresponding item to follow for CloudFormation support?

@coultn After reading the blog post, it suggests that this support is for ECS-only workloads, to cover both EC2 and Fargate launch types. No mention is made of Fargate workloads in EKS. So, I'm assuming that's out of scope for this release? If so, that's fine. Just trying to clarify because I also see the EKS label on this issue and the issue was closed.

You are correct in your assumption @mikesir87. We are working to enable this scenario for EKS. Stay tuned.

Opened an issue for CloudFormation support
https://github.com/aws/containers-roadmap/issues/825

We are working to enable this scenario for EKS. Stay tuned.

Is there an issue that we can follow for that support @mreferre?

@mreferre Thank you for delivering this for ECS.

I'm glad to hear EKS support is on the way too; I have created a new issue to track that specifically: https://github.com/aws/containers-roadmap/issues/826

Amazing that EFS got to be supported, what about EBS? 😢 @coultn

Is this possible to configure with CloudFormation?

Is this possible to configure with CloudFormation?

@synth not yet but we are working on getting that support shipped asap. We do know it is in high demand. Stay tuned.

Good news! This is what we're looking for to migrate some of our existing stateful workloads to Fargate.

Is this possible to configure with CloudFormation?

@synth not yet but we are working on getting that support shipped asap. We do know it is in high demand. Stay tuned.

Have been trying to find a way to get this working and only now found this thread. Now it makes sense why it has been failing for me from CloudFormation. Eagerly awaiting this feature in CloudFormation

@mreferre Is there an ETA for Cloudformation support of EFS for Fargate? We are eagerly waiting for that feature

@mreferre Is there an ETA for Cloudformation support of EFS for Fargate? We are eagerly waiting for that feature

@uherberg we are actively working on it. I don't have more details to share at this time. Stay tuned. We will update this post when CloudFormation support is introduced. Thanks for your patience.

Are we able to use EFS mounts from other accounts?

We have a pair of VPCs peered, and mounting the EFS share cross-account works on EC2 (after fixing DNS by specifying the IP of the mount target in /etc/hosts), but for Fargate we're only passing the mount name.

@hlarsen this won't work because of DNS resolution. It would work with a shared VPC among the two accounts though but not with two separate VPCs. Can you open a new GH issue with this specific request so that we can track it? Thanks.
