I'm trying to see if user-defined networks are supported. I've looked at the task definition options and could not find any place to set the network the container should connect to. Is it supported yet?
User-defined networks are not yet supported on the task definition. Can you help us understand what you'd intend to use them for? Are you looking for something like service discovery, security isolation, or something else?
Mostly I'm interested in the automated service discovery part, where I can set up predefined domain names for containers and connect my services via them.
Unfortunately this only works using user-defined networks.
Currently I'm setting up a host DNS server which then scans the running containers and updates the DNS entries manually, which is not ideal.
I am running into wanting this too, for the service discovery aspect. I see it supports container links, but I was of the understanding that those are now deprecated in favor of using networks. Is this something that will be implemented soon?
My use case is basically bidirectional linking (see http://stackoverflow.com/questions/25324860/how-to-create-a-bidirectional-link-between-containers).
My use case: I would like to be able to scale containers in services separately, but still have them communicate with containers in a different service/task definition. If multiple task definitions were able to connect to a user-defined network, all containers across those task definitions would have network connectivity on that network by hostname.
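For anyone who has not used them locally, this is roughly the behavior being asked for, sketched with plain Docker on a single host (all names below are just illustrative):

```bash
# Create a user-defined bridge network; Docker runs an embedded DNS
# server for it, so attached containers resolve each other by name.
docker network create app-net

# A "backend" container attached to that network.
docker run -d --name api --network app-net nginx:alpine

# Any other container on the same network reaches it by name:
# no links, no published ports, no IP addresses to track.
docker run --rm --network app-net alpine:3 wget -qO- http://api/
```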
Would really love to see this. Currently Service Discovery is a huge pain requiring yet another service (which itself is usually cluster-based and self-discovers and then listens for other services). It's a messy solution, not to mention the Lambda "solutions" that are even more obnoxious to implement and maintain.
ECS needs native service discovery support out of the box. Specific example would be clustering services such as RabbitMQ or similar services.
+1 to seeing this in place.
At a minimum, passing through the equivalent of the --network docker run argument would be useful, I think, most likely defined in the container definition.
I believe this needs to be looked into with a higher priority. The legacy links feature is currently deprecated and may be removed. This warning is in place on the documentation for the feature.
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
+1 - Really need this feature to create MySQL replicas without putting them on the same host/task.
+1 Linking is going away, and there are many services which require knowing their 'externally reachable host address (host ip + external port)' at runtime, which theoretically could be solved with user-defined networks
+1 I would very much like to be able to define my own network instead of being forced to use either 'Host', 'Bridge' or 'None'. The agent doesn't even need to create the network, just allow me to put in a network name that's custom and then at runtime see if it fails to start because the network doesn't exist.
I need to route traffic through a container that is running a VPN client. That way the actual containers can be used without modification when they need to use a VPN. Similar to the --net=container:network option that has been removed from Docker.
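For reference, the Docker-level construct being described here is network-namespace sharing via --network container:<name>; a minimal sketch, with the image names as placeholders:

```bash
# Run the VPN client container with the privileges it typically needs.
docker run -d --name vpn --cap-add NET_ADMIN --device /dev/net/tun my-vpn-client

# Share its network namespace, so all of the app's traffic egresses
# through the VPN container without modifying the app image.
docker run -d --name app --network container:vpn my-app-image
```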
+1 In Docker, links are indeed already deprecated.
👍 need this for elasticsearch nodes
👍 need this for hazelcast
:+1: Would be useful for ZooKeeper.
👍 Consul ... service discovery
👍 Use case for us is an nginx reverse proxy container which sits in front of an upstream API service running in another container on the same host. Currently our only option is using the deprecated link feature over the bridge network, or using something like DNS/ELB/Consul. But obviously we'd like to avoid making a network hop to call something that's running on the same host.
A major disappointment I have with most (all?) orchestration tools is the assumption that all containers will be mapped to ports on the host. With overlay networks, this is not necessary. Containers can communicate within the network on ports that are not exposed or mapped to the host. This is clearly preferable as it almost completely eliminates any sort of port management and the possibility for port conflicts.
Start your containers in an overlay network, listen on standard ports (i.e.: 80/443) without worrying about conflicts, and setup a proxy to forward requests to your containers by name. Map your proxy to host port 80/443 and point your ELB at it. Manage it all using your service discovery DNS. This is the most elegant and maintainable solution, yet most orchestration tools will not support it. It's a crying shame. Literally, I am crying over it.
I shudder to think about managing 10,000 containers with port mapping. If each container exposes two ports, that's 20,000 ports I have to manage! Oh, I can make them map to random host ports, but now my proxy logic is so much more complicated, and someday I'll simply run out of ports. The bottom line is that a "scalable" solution that's built on port mapping is not scalable -- because mapping ports is not scalable.
I have modified the ECS agent to support this, and it works perfectly for my needs. However, it's less than ideal, because I lose the regular updates to the agent, unless I continually merge them in, and I have little to no visibility or control into the networks from the console or the CLI.
Guys, let's ditch the port mapping nonsense. It's not necessary with overlay networks.
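For what it's worth, the pattern described above is expressible today with a plain Docker swarm overlay network, which is exactly what ECS lacks; a rough sketch, with the proxy image name as a placeholder:

```bash
# Overlay networks require swarm mode.
docker swarm init
docker network create --driver overlay --attachable app-overlay

# Services on the overlay listen on their standard ports and reach each
# other by name; nothing is mapped to the host.
docker service create --name api --network app-overlay nginx:alpine

# Only the proxy publishes a host port; it forwards to "api" by name.
docker service create --name proxy --network app-overlay \
  --publish 80:80 my-proxy-image
```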
@samuelkarp Is this currently in the works?
For anyone trying to do service discovery, take a look at the following article:
https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/
From what I understand, you can use a single application load balancer to load balance up to 75 services by assigning a unique path prefix for each service, which you can then use to address your services. This doesn't cover all use cases, but should be enough for many applications.
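As a rough sketch of that path-based approach (the ARNs, priority, and path below are placeholders), each service gets its own listener rule and target group:

```bash
# Route /users/* on an existing ALB listener to the target group that the
# "users" ECS service registers into.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c/f2f7dc \
  --priority 10 \
  --conditions Field=path-pattern,Values='/users/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/users/73e2d6
```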
@elasticsearcher We're currently working on the ability to attach an ENI to a task and use native VPC networking. We believe that this will address many of the use-cases described in this issue, as well as provide integration with existing VPC network topology and features that people are using today.
If you're interested in details, check out https://github.com/aws/amazon-ecs-agent/pull/701 (description of how we're planning to do this), the dev branch of amazon-ecs-cni-plugins (where we're working on some of the software behind this), as well as https://github.com/aws/amazon-ecs-agent/pull/777 and https://github.com/aws/amazon-ecs-agent/pull/779 (some of the changes necessary in the agent for this feature).
👍
Simply exposing --net=my-user-defined-network in the container definition, and adding the user-defined network in the task definition, is most appropriate.
My use case assumes certain containers will join user-defined networks and call each other by host name. This setup is meant to run both outside of and inside AWS (through various development phases). No need to reinvent the wheel. Please support what's already there.
We require a number of containers to be bundled together with open communication, only exposing what needs to be consumed by the outside world. Linking is ugly and not scalable, and we need to be able to set the networks within our task definitions. No need to over-engineer what's already available.
Any updates here? - this is a really needed feature
This is a much-needed feature. I don't understand why AWS does not agree with the users. The use case is fairly common: let's say you have a database container (serviceDB) that needs to be connected to by multiple app containers (serviceApp). Putting the database container and app container in one task definition and linking them is not going to work.
Surprised no one's mentioned Weaveworks' integration with ECS, because it does pretty much what everyone here is asking for:
https://www.weave.works/docs/tutorials/old-guides/ecs/
Basically, Weave assigns an IP address to each container and runs an auto-managed DNS service, which lets any container in the same cluster address any other container by its name. The DNS service also automatically load-balances all containers.
I just tried it out and haven't encountered any issues so far. Just had to examine the ECS cluster setup script that they provide in the example to figure out the required SG and IAM configs.
Does anyone have experience with Weave and ECS? Any feedback would be super helpful.
@errordeveloper or @2opremio, would you mind chiming in please? I thought I'd loop you in since Weaveworks' solution seems to perfectly address this long-standing ECS feature request. Are there any limitations/concerns that we should be aware of or it's stable enough to use in production? :)
Yes, Weave Net should be able to solve most (if not all) of the use cases presented above. It's production-ready and we provide AMIs and CloudFormation templates to run it.
See
https://www.weave.works/docs/scope/latest/ami/
https://www.weave.works/docs/tutorials/old-guides/ecs/
https://www.slideshare.net/mobile/weaveworks/weaveworks-at-aws-reinvent-2016-operations-management-with-amazon-ecs
Thanks, @2opremio, that's great to hear! Weave Net makes connecting containerized apps so much easier.
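For anyone who wants a feel for what Weave Net provides before committing to the ECS integration, this is roughly the manual, plain-Docker version of what their AMI automates. The install URL is as documented in Weave's guides at the time, and the peer address and images are placeholders:

```bash
# Install and start Weave Net on each host; peers find each other by
# address and form the overlay network.
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
weave launch <other-host-ip>

# Point the Docker CLI at the Weave proxy so new containers are attached
# to the overlay and registered in weaveDNS automatically.
eval $(weave env)

# Containers on different hosts can now resolve each other by name.
docker run -d --name db redis:5
docker run --rm alpine:3 ping -c 1 db
```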
That looks great in the interim - but it doesn't change the fact that ECS needs overlay networks if it wants to stay relevant.
I agree @jamessewell
We have a similar issue with eJabberd. We ended up deploying Kubernetes onto AWS using KOPS. It makes all of this trivial.
Another example use case. I am hosting a container which runs third-party code where I want to restrict all outgoing routes. I lock down the container with iptables, but let's say the third-party code exploits a vuln and gains privilege escalation. Now they can override the container's iptables and get out to the net. If I had a user-defined network that forced the container to use a restricted gateway, I would not have to worry about the latest exploit taking down my whole stack. Put in the user-defined networks por favor.
+1 I just want to be sure that my ECS task links still work, or have an equivalent, when Docker eventually removes the deprecated legacy links feature.
+1 on this. I'm running a node.js application that uses a RabbitMQ service in a separate container. I can't figure out how to make the two containers talk to each other without knowing the IP addresses in advance; which makes no sense, because the IP addresses are assigned at container creation (both are running on the same host).
This was trivial using docker-compose inside an EC2 instance, but ECS doesn't seem to do it.
+1 as well. Crazy this isn't supported :-O
Has anyone switched to something really similar to ECS like HashiCorp Nomad over this issue? How'd it go?
+1
I'm interested in networks so that I can isolate tasks from each other. Per the docs:
Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.
That's not ideal from a security perspective. By default, I don't want containers to be able to communicate with each other in production.
You might want to have a sidecar container running on the same network as the app container. The app should not be visible to anyone except the sidecar container. User-defined networks are the right choice for that, and Docker Compose / Swarm already support it.
@juliaaano to clarify, we were planning on having the app code stored in a data-only container and then mount it as a volume to another container. Is this what you're referring to or something else?
(looking at these two articles: https://www.voxxed.com/2015/01/use-container-sidecar-microservices/ https://aws.amazon.com/blogs/compute/nginx-reverse-proxy-sidecar-container-on-amazon-ecs/)
@nathanielks that's one valid pattern. In my case I need a sidecar to do the SSL termination.
Task networking was finally released last week. The new awsvpc network mode lets you attach an ENI to each task, which makes them addressable by the DNS name of the ENI.
Thanks @samuelkarp for mentioning this being worked on a while back.
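For anyone trying it out, here is a minimal CLI sketch of the new mode; the cluster name, family, image, subnet, and security group IDs are placeholders:

```bash
# Register a task definition that uses the awsvpc network mode; each task
# then gets its own ENI and private IP in your VPC.
aws ecs register-task-definition \
  --family my-api \
  --network-mode awsvpc \
  --container-definitions '[{"name":"api","image":"nginx:alpine","memory":128,"portMappings":[{"containerPort":80}]}]'

# Run it, telling ECS which subnets and security groups the task ENI
# should be placed in.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-api \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}'
```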
I saw that announcement and thought it was great! My main concern/question is around the number of ENIs that can be attached to a single instance and how that relates to the number of containers you can launch on a single instance.
Let's take a c5.xlarge for example. It has 4 vCPUs and 4 attachable network interfaces. Each instance needs 1 interface to connect to the VPC, so that leaves us with 3 interfaces. Let's also say we're using an nginx container and set worker_processes to 1. In my mind, to take advantage of how nginx handles CPU processing, it would make sense to launch 3 nginx containers on the other 3 network interfaces, with the thought that they would each get their own CPU core as well. This would also leave 1 core available for the system. Great! Everyone gets enough resources. It feels like I'm underutilizing a c5.xlarge, though, to run just 3 nginx containers on a single instance.
Is my thinking flawed? Is there also enough room for other containers to be launched on the instance without bottlenecking the CPU because everyone is vying for resources? I haven't found any good resources on how to size containers, so I'm searching in the dark.
@nathanielks Indeed, the limit on the number of ENIs you can attach to an instance is quite small, I didn't realize this was the case. As per the docs:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. EC2 instances have a limit to the number of elastic network interfaces that can be attached to them, and the primary network interface counts as one. For example, a c4.large instance may have up to three elastic network interfaces attached to it. The primary network adapter for the instance counts as one, so you can attach two more elastic network interfaces to the instance. Because each awsvpc task requires an elastic network interface, you can only run two such tasks on this instance type. For more information about how many elastic network interfaces are supported per instance type, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
Great point, @nathanielks -- unless there's more than meets the eye, the ENI limits are pretty much a non-starter for task networking.
My previous comment was a bit of a rant against port mapping, but my attitude has softened since then.
I've just submitted to the fact that I have to use host port mapping, and I'm fairly satisfied with the solution, given AWS will manage it for me with Target Groups and ELBv2. The issue of intra-host (and even inter-host) container communication is still there, but that can be addressed in a few different ways. This approach is not so bad if you are able to automate everything related to deployment, and the performance is better than you'd get with overlay networks.
My implementation is somewhat similar to this approach:
https://aws.amazon.com/blogs/compute/service-discovery-an-amazon-ecs-reference-architecture/
I'm using ELBv2 to avoid having so many ELBs, and if you use an automation tool (e.g.: Terraform) then the Lambda for service registration is also not necessary. Without these adjustments I wouldn't be fully satisfied, but with them (and automation) it is a totally manageable solution. Complete automation would be more difficult (or even impossible) if you're not using Route53.
In any case, support for overlay networks is still necessary as container links are deprecated. Given the Docker API and the Go docker client already support it, there really shouldn't be much trouble to implement it (barring edge cases, potential security concerns, etc).
FYI, AWS Route 53 just released the Auto Naming API for Service Name Management and Discovery:
https://aws.amazon.com/about-aws/whats-new/2017/12/amazon-route-53-releases-auto-naming-api-name-service-management/
EDIT: the auto naming API requires the use of the awsvpc network mode and thus has the same ENI limitations.
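A rough sketch of how the pieces fit together once wired into an ECS service (this is the integration that later shipped as ECS Service Discovery); all IDs and ARNs below are placeholders, and with A records this path requires awsvpc mode:

```bash
# Create a private DNS namespace and a discovery service within it.
aws servicediscovery create-private-dns-namespace \
  --name internal.local --vpc vpc-0123456789abcdef0

aws servicediscovery create-service \
  --name api \
  --dns-config 'NamespaceId=ns-abcd1234,DnsRecords=[{Type=A,TTL=60}]' \
  --health-check-custom-config FailureThreshold=1

# Point the ECS service at the registry; tasks are then registered and
# deregistered as api.internal.local in Route 53 automatically.
aws ecs create-service \
  --cluster my-cluster \
  --service-name api \
  --task-definition my-api \
  --desired-count 2 \
  --service-registries 'registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-abcd1234'
```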
Just been experimenting with ENI networking for containers. Not wildly impressed; overly complex for my simple use cases. Add me to the list of people who would prefer to use user defined networks.
Yes please. Add me to the list. It is not possible to link/volumes_from in awsvpc and this is a problem for us.
+1 use case: I am trying to start reportportal on AWS (which uses Consul), and without this feature a simple 30-minute task is blowing up badly.
I ended up using the EC2 machine user_data to hack around this somehow... :-(

```bash
# Create the shared network on the instance at boot...
docker network create rp_net

# ...and install a cron job that, every minute, connects every running
# container (except the ECS agent) to that network.
echo -e "* * * * * for i in \`/usr/bin/docker ps | /bin/grep -v CONTAINER | /bin/grep -v ecs-agent | /bin/awk '{print \$1}' \`; do /usr/bin/docker network connect rp_net \$i; done > /tmp/cronout 2>&1" | crontab -
```
+1 for exposing --net=my-user-defined-network in container definition
+1 this can really be useful
Weave panics the 4.14.33-51.34.amzn1.x86_64 kernel (Amazon Linux AMI 2018.03), forcing us to roll back (painfully) to the previous kernel. That prompted me to look at Docker swarm, which works fine on that kernel, but the lack of "--network" flag support in ECS brings us to a decision of whether to dump ECS entirely. The VPC networking solution is interesting from a security perspective for externally exposed ports, but it doesn't adequately provide what most of us are asking for: an overlay network so that containers on different ECS instances can talk to each other. Weave can do that, but it's a bit of a hassle to configure compared to Docker swarm. I wanted a lightweight solution instead of being forced to look at DC/OS or Kubernetes or even PCF, which all look heavy when all I want is an overlay network.
+1
Hi there, is there any update on being able to use user defined networks on ECS?
It seems like this is pretty essential to the service being friendly to use/in line with Docker recommendations for communication across containers.
Please add this functionality. I spent two days trying to get awsvpc to work. I can't believe it's so difficult to get two containers to talk to each other in ECS. As a non-expert in IT/networking, I find the ECS documentation severely lacking in ease of adoption. Additionally, I would rather not be forced to use and pay for VPC/NAT gateways just to get containers to talk. Is there any way around this?!?
Service discovery works for bridged network tasks now I believe. But using load balancers is the best way to handle inter service communication in ECS right now.
Three days spent trying to get awsvpc to work and nothing is any more clear than when I started. It took me all of 10 minutes to run a private user-defined-network on my local machine with Akka remoting, only exposing ssl/websockets to the host, using docker-compose.
As far as I can tell, the AWS docker rabbit-hole: an ECS cluster with awsvpc & service discovery requires...
This doesn't include all the network interfaces, route tables, and the VPC that ties it all together. All this just to find out how TINY a limit there is on the number of elastic network interfaces, and thus containers, I can actually run without buying more unnecessary EC2 capacity from AWS. A c5.xlarge gives you only 4 elastic interfaces?! (one for the host, so 3 containers max)
I don't even know if this really outlines everything required. I can't tell if it's correct, as I wasn't able to get it working. All this seems to do is force me to use more AWS services in a design that's unnecessarily complex. I think I should just manage Docker myself. ECS needs some serious UX love to make things easier.
Just use Weave Net
Is there any update on user defined networking with ECS? With the advent of daemon-style tasks (https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-ecs-adds-daemon-scheduling/) it would be nice to use user-defined networking to allow simple service discovery; you can count on containers being deployed on each instance, and simply communicate with their hostname. This solution is elegant and simple; every other solution is more engineering, more services to monitor and deploy. The next best solution is container linking, a deprecated feature. We have already run into ENI limits on instances. I'm astounded this issue is nearly 2 years old with no update besides "use a third party solution."
I honestly moved to Azure AKS because of this issue. Got it working within 30min - although I had to learn a bit of Kubernetes.
The ENI constraints are absurd. Perhaps try looking into Amazon EKS. Although they charge per-hour for each cluster.
This issue has been open for two years, and is the second most commented open issue. The silence from AWS is deafening. Can we please get some kind of an update?
We use Weave with ECS... but there was recently an issue with a kernel patch on Amazon Linux (right about the time 2018-03 was released) where the kernel patch was broken such that Weave would panic the kernel, resulting in Groundhog Day-style continuous rebooting.
After reverting my instances using snapshots with the older kernel, I had to wait a couple of weeks for Amazon to provide a kernel patch that would work with Weave. In the meantime, I experimented with Docker swarm and was impressed with how well the overlay networks simply work... it just didn't work with ECS. I almost got to the point of experimenting with ECS instances custom-built on Ubuntu Linux.
It sure seems that Amazon is purposely ignoring support for overlay networks in favor of their solution using additional network interfaces on instances and the stuff associated with that (and the unfortunate issue that instances are limited to a rather small number of ENIs). It's really making me want to take a serious look at Kubernetes, see whether EKS is worth learning or not, and move away from ECS issues.
The older --link feature is being removed from Docker in favor of user-defined networks (https://docs.docker.com/network/links/). This is going to hit hard soon.
Better start migrating now (unless someone from AWS wants to chip in?) ...
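For local Docker setups at least, the migration path away from --link is mechanical; a minimal sketch, with the web image name as a placeholder:

```bash
# Before (deprecated): one-way resolution via a legacy link.
#   docker run -d --name db redis:5
#   docker run -d --name web --link db:db my-web-image

# After: both containers join a user-defined network and resolve each
# other by name, in both directions.
docker network create backend
docker run -d --name db --network backend redis:5
docker run -d --name web --network backend my-web-image
```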
Obviously Amazon has a right to pursue whatever commercial strategy they like, but I think you are right about their reasons for ignoring this.
It's a shame. I agree that Weave, EKS, or even the (needlessly complex, IMO) Amazon solution might be the right way to deploy at scale. But for those of us who target smaller-scale systems or just want to migrate a dev or POC system to AWS, there's no question that overlay networks would make it much simpler to port a small-scale solution (...that might already be running on some developer's laptop) to AWS.
I too am on the point of abandoning ECS in favor of Kubernetes... when I have the time [yeah, right]
I've spent a couple of weeks now working with Kubernetes. I'm very happy with Azure's ecosystem: using VS Code for all development, ACR (container registry), and AKS (managed Kubernetes). I use docker-compose for local container testing and kubectl to deploy my Kubernetes YAML configuration to Azure. It's extremely easy with AKS, although documentation could always be better.
I wish I had looked at Amazon's Kubernetes services earlier, as I might not have switched providers; thus I can't give you a well-rounded comparison. It looks like Kubernetes is the future, though. Azure also had something called Container Services, similar to ECS, but they've rebranded as AKS and advocate strongly for Kubernetes over competing orchestration libraries. I believe AWS is doing the same, but I have nothing to back this up officially. (It would be nice to see an official roadmap for ECS vs EKS.)
Regardless of which cloud provider you use, I really recommend spending some time learning Kubernetes. There's a slight learning curve, but 1-2 weeks can get you running. I have an Akka cluster of containers deployed and, conceptually, I could take the same Kubernetes configuration file and apply it to AWS with minor changes, switching container registries. Although I do know AWS has some Kubernetes-specific annotations you need to be aware of.
I've spent the last week banging my head against the wall with this and have yet to find a good solution.
1) As previously mentioned, Docker's --link (and by extension, the task definition "links" property) is deprecated; additionally, for it to work, the two containers must be in the same task definition (so they are placed on the same host and share the same bridge network), so it is neither scalable nor future-proof
2) I've tried using the R53/ECS Service Discovery / Auto naming with no luck - I get the appropriate R53 records created, pointing to the correct container and port, but calling the generated endpoint URL from a container in the same VPC resulted in it not being able to resolve the hostname, as if the container is not aware of the private R53 zone
3) All other solutions I can find feel hacky and non-intuitive, including adding third-party discovery services like HashiCorp Consul
With Amazon's apparent decision to not support overlay networks, lack of adequate documentation and resources on using their Service Discovery Service outside setting it up via the console's GUI, and how comparatively simple networking appears to be via Docker Swarm or Kubernetes, ECS is becoming an increasingly unappealing solution.
@wyqydsyq the ECS Service Discovery works pretty well for me (apart from the fact that the CloudFormation side of it is still somewhat shaky)... are you sure you do not have some custom DNS settings in the host EC2 instance or the container itself?
The problem with ECS SD, as with EKS, is that they are not available in all regions.
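If anyone else hits the resolution problem described above, two things are worth checking before blaming service discovery itself (the VPC ID and namespace name below are placeholders):

```bash
# The VPC must have DNS resolution and DNS hostnames enabled for the
# private hosted zone to be visible to instances in it.
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# From a container or instance inside the VPC, query the record directly;
# it should resolve via the VPC resolver, not an external DNS server.
dig +short api.internal.local
cat /etc/resolv.conf
```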
Anything new on this?
Here's the doc just for reference:
https://docs.docker.com/network/bridge
Differences between user-defined bridges and the default bridge
User-defined bridges provide better isolation and interoperability between containerized applications.
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. This allows containerized applications to communicate with each other easily, without accidentally opening access to the outside world.
User-defined bridges provide automatic DNS resolution between containers.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I need this feature too... awsvpc mode, IMO, isn't good enough because of the instances' ENI attachment limits.
If someone is interested, I have found a workaround.
Create a script setup-server.py:
```python
import os

import docker

# Talk to the local Docker daemon (requires the Docker socket to be
# reachable from inside the container).
docker_client = docker.from_env()
networks_client = docker_client.networks

# Create the bridge network named in CONTAINER_NETWORK if it does not
# exist yet, then attach this container (HOSTNAME is the container ID).
network_name = os.environ['CONTAINER_NETWORK']
network_list = networks_client.list(network_name)
if network_list:
    network = network_list[0]
else:
    network = networks_client.create(network_name, driver="bridge")

container_id = os.environ['HOSTNAME']
network.connect(container_id)
```
Create a script run-server.sh:
```bash
#!/bin/bash
python3 ./setup-server.py
# Add your usual entry point here
.....
```
In the Dockerfile, add the scripts and Python tooling:
```dockerfile
RUN apt-get update \
    && apt-get --yes install \
       python3-pip
RUN pip3 install docker
COPY setup-server.py setup-server.py
RUN chmod +x setup-server.py
COPY run-server.sh run-server.sh
RUN chmod +x run-server.sh
```
In ECS task:
It works nicely for me. But I hope the ECS team will make something to provide user-defined network functionality.
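One caveat with the workaround above, in case it is not obvious: setup-server.py talks to the Docker daemon from inside the container, so the host's Docker socket has to be reachable there (presumably what the elided "In ECS task:" step configures as a volume and mount point). A local equivalent for testing, with the image name as a placeholder:

```bash
# Mount the host's Docker socket so the docker SDK inside the container
# can create and connect networks; HOSTNAME defaults to the container ID,
# which is exactly what network.connect() needs.
docker run -d \
  -e CONTAINER_NETWORK=rp_net \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-workaround-image
```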
> Hi Everyone,
> Thank you for your feedback regarding using Docker's user-defined networks.
> You should consider using AWS Fargate
- In other words, we have to pay for this feature.
> We have launched a native experience for ECS service discovery for EC2 mode
- Will this feature allow communication between containers by container name (as is possible with user-defined networks at the Docker layer) when they are defined under the same task definition?
- Could you please provide a documentation link for this new service discovery feature? Thanks.
> The way to think about it is this. It's all about which APIs with which you want to interact.
- From this statement I get the impression that ECS and Kubernetes are basically the same kind of orchestrator and it's only a matter of preference, which is, IMHO, absolutely incorrect.
The only reason Amazon decided to provide K8s as a managed service is because they realized this is the horse that is going to win the race.
https://youtu.be/shV2sokuF5k
I thought @yunhee-l was talking about some new features regarding service discovery, but obviously I was wrong. I'm very familiar with the service discovery (and docs) they launched some time ago, but it forces us to use Route 53, which is why I don't want to use it.
This is where the answer to the question, regarding user-defined networks and why they're still "working" on the implementation, is hiding.
my two cents
the problem is a simple one (when using Docker on a local host): have two containers in the same task talk to each other
Whatever direction you go, you hit major problems with this simple task
the best option is to enable user-defined networks (as requested in this issue), but removing the "outside network communication" limitation for the EC2 launch type would also be a welcome addition
+1
+1. I would like to preach the evils of linking containers, and you can't link in fargate which means no sidecars in fargate.
> +1. I would like to preach the evils of linking containers, and you can't link in fargate which means no sidecars in fargate.
Can you explain more about the connection between linking and sidecars in Fargate? We definitely support sidecars in Fargate; if you need containers in the same Fargate task to communicate, they can do so via localhost.
I was enlightened this morning about this article:
https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-containers-to-aws-fargate/
localhost/127.0.0.1:<some_port_number> solves my concerns. Thanks for the quick followup.
Looks like the ENI constraints have finally been addressed.
Amazon ECS now supports increased elastic network interface (ENI) limits for tasks in awsvpc Networking Mode
Posted On: Jun 6, 2019
Amazon Elastic Container Service (ECS) now supports increased ECS task limits for select Amazon EC2 instances when using awsvpc task networking mode. When you use these instance types and opt in to the awsvpcTrunking account setting, additional Elastic Network Interfaces (ENIs) are available for tasks using awsvpc networking mode on newly launched container instances.
Previously, the number of tasks in awsvpc network mode that could be run on an instance was limited by the number of available Elastic Network Interfaces (ENIs) on the instance; those ENIs could be used by ECS tasks or by other processes outside of ECS. As a result, the number of tasks that could be placed on EC2 instances often was constrained despite there being ample vCPU and memory available for additional containers to utilize. Now, you have access to an increased number of ENIs for use exclusively by tasks in awsvpc networking mode for select instance types. The increase is anywhere from 3 to 8 times the previous limits, depending on the instance type.
The improved ENI limits are available in all regions where ECS is available. Please visit the AWS region table to see where Amazon ECS is available.
To learn more on how to opt in, see Account Settings. To get started with increased ENI limits, read our documentation.
https://aws.amazon.com/about-aws/whats-new/2019/06/Amazon-ECS-Improves-ENI-Density-Limits-for-awsvpc-Networking-Mode/
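The opt-in itself is a one-liner per account and region, run as the account root or another authorized principal:

```bash
# Opt in to the higher per-instance ENI limits for awsvpc tasks.
aws ecs put-account-setting-default --name awsvpcTrunking --value enabled

# Verify the effective setting.
aws ecs list-account-settings --effective-settings --name awsvpcTrunking
```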
Unfortunately, awsvpc (in ECS, not Fargate) has half-baked ENIs that don't allow public addresses.
This necessitates setting up single-point-of-failure/costly NATs just for the privilege of accessing the internet.
I ended up mixing mapped links and mapped ports to work around the lack of bidirectional linking imposed by the lack of user-defined networks.
Here's the part of the config that does it, in case it helps anyone else.
{
"name": "grafana",
"image": "docker.pkg.github.com/safecast/reporting2/grafana:latest",
"memoryReservation": 128,
"essential": true,
"portMappings": [
{
"hostPort": 3000,
"containerPort": 3000
}
],
"links": [
"renderer"
],
"mountPoints": [
{
"sourceVolume": "grafana",
"containerPath": "/etc/grafana",
"readOnly": true
}
]
},
{
"name": "renderer",
"image": "grafana/grafana-image-renderer:2.0.0",
"memoryReservation": 128,
"essential": true,
"portMappings": [
{
"hostPort": 8081,
"containerPort": 8081
}
],
"mountPoints": [],
"extraHosts": [
{
"hostname": "grafana",
"ipAddress": "172.17.0.1"
}
]
}
User defined networks would still be appreciated since it'd make this way simpler.
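In case it helps anyone reading the snippet above: the 172.17.0.1 in extraHosts is the default docker0 bridge gateway on the host, so combined with the hostPort mappings it lets the renderer reach grafana back through the host. You can confirm the gateway address on a container instance with:

```bash
# Print the gateway of Docker's default bridge network.
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
```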
Still a required feature of ECS. I do understand that we can use awsvpc as an alternative, but for my use case this is not an option, since I have around 3k of services that could scale to something around 10k replicas and I don't have an infinite network mask within this environment to allocate. It would be great to have an option like user defined networks.