Hi
I have three containers running in ECS, but the website only comes up after we run a "docker exec..." command. I can do this by logging into the server and running the command, but that shouldn't be necessary. So my question is: how can I run "docker exec..." without logging into the server?
A solution using the Amazon ECS console, ecs-cli, or anything else you know of would be fine.
With the ecs-cli command we can create a cluster, tasks, etc. from our local machine. So how can we run a docker exec command from the local machine into the containers?
+1
+1
can't you add your exec command to the dockerfile?
I'd like to run my rake db:migrate task and I'm not sure what the most elegant way to go about it is. It should run only the first time, when creating the cluster, to create the database and seed it with test data.
+1
+1
@VinceMD docker exec could be a valid use case when you want to update some code (git pull) in the container after the image is built. Writing the exec in the Dockerfile would require rebuilding the image, pushing it, and restarting the ECS task...
+1
+1
I would like this functionality. I'm researching various secrets-injection solutions where the container wouldn't have to be modified to include AWS tools.
I built a tool ecsctl to do this. However, you will need to customize docker daemon configuration on container instances to listen on a port.
@cxmcc what about security? How do you secure the TCP port of the Docker daemon?
@panga Currently with networking configuration:
Externally, only open this port to a trusted network (VPN/bastion, etc.).
Internally, run an iptables rule to drop traffic going to that port from containers:
iptables --insert INPUT 1 --in-interface docker+ --protocol tcp --destination-port MYDOCKERPORT --jump DROP
Alternatively, I believe using a TLS cert may be possible, but I have not tried it out.
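For the TLS route, a rough sketch of what that setup usually looks like (untested here; it assumes you have already generated ca.pem, server-cert.pem and server-key.pem per the Docker TLS docs, and the certificate paths and hostname are placeholders):
# on the container instance: only accept connections presenting a cert signed by your CA
dockerd \
  -H unix:///var/run/docker.sock \
  -H tcp://0.0.0.0:2376 \
  --tlsverify \
  --tlscacert=/etc/docker/certs/ca.pem \
  --tlscert=/etc/docker/certs/server-cert.pem \
  --tlskey=/etc/docker/certs/server-key.pem
# from your machine: present the matching client cert/key
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://MY-ECS-INSTANCE:2376 ps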
So, if I am not wrong, right now it is not possible to run commands against running tasks the "docker exec" way, is it?
@destebanm It's certainly possible to docker exec into a container that's running as part of an ECS task, but you currently need to identify the specific instance and container manually using ECS and Docker tooling and log in to the appropriate instance. This is clearly suboptimal, and we're tracking this as a feature request.
I'd be interested in hearing ideas for how this might work. What would be the ideal workflow around docker exec? Would people prefer that it be integrated into the web console, such that you can identify a task with the UI and get an interactive docker exec environment in the browser? Or would CLI integration be better?
On the ECS Agent, you could marshal the unix socket to a TCP endpoint. This endpoint would need to be authed with an IAM token so that the console connecting to the socket is authenticated for a short period of time.
The best way of getting a quick shell would be from within the ECS console. You could right-click on the task itself and open up a sh to the container. Otherwise you're using the CLI to list the containers, get statistics, yada yada. It just seems simpler to look at the metrics of a service, then go into a container that way. Docker Cloud did this integration a while back; it works great and it's a simple way to get into your container to do a quick ls or curl to a database.
But you could also do a CLI integration that would behave in much the same way, by connecting your docker CLI to that specific socket via an AWS API. I might be missing something, so please fill in the gaps if you have ideas!
I just want to say that I would love to be able to exec into a running ecs container from my Macbook terminal.
It would make debugging so much quicker and easier.
+1
I am working in a development ECS cluster with EC2 instances that another developer built using his own key pair. Therefore I can't ssh into the instance to run 'docker exec...'. It would be great if something was made available to do this.
@nmeyerhans Not sure if there's a better issue/repo to discuss being able to exec into an ECS task container but this seems to be the best I can find for now.
I was considering spending some time writing an ECS executor for Gitlab Runner that would allow people to run CI jobs as one off ECS tasks but the Gitlab Runner model for both Docker and Kubernetes is to run a container and then exec into it so it can receive the output easily.
I was thinking about seeing if I could hack something together where it overrides the command each time with the script lines concatenated together, and then tries to read the logs out of CloudWatch Logs, but it's horribly ugly and the delay on fetching the logs is probably going to be impractical, let alone not being able to support things like after_script (although that's less needed for my use cases right now).
If being able to exec into an ECS task container was possible then I think an ECS executor for Gitlab should be easy enough to write and would be a real benefit for my company. Coupled with Fargate that would be a really, really interesting way of running our CI workloads. That said, I'm also considering just waiting for EKS access and then moving to Kubernetes executors as that's the least work to get this off the Docker-Machine runners I'm using. I expect that will probably be the thing that moves me from ECS to Kubernetes for production services as well although I do prefer the relative simplicity of ECS to k8s.
@nmeyerhans when you say:
It's certainly possible to docker exec into a container that's running as part of an ECS task, but you currently need to identify the specific instance and container manually using ECS and Docker tooling and log in to the appropriate instance.
Can you explain how to do that? That would suffice for me as a workaround...
@harlantwood just ssh into your ECS instance and run docker exec..
ssh ec2-user@my-ecs-server
docker ps
docker exec -it 34cfe4c6b6d5 sh
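If you have more than one container instance, a rough sketch of finding which EC2 instance is hosting the task first (this assumes a configured AWS CLI; my-cluster and my-service are placeholders):
TASK_ARN=$(aws ecs list-tasks --cluster my-cluster --service-name my-service --query 'taskArns[0]' --output text)
CI_ARN=$(aws ecs describe-tasks --cluster my-cluster --tasks "$TASK_ARN" --query 'tasks[0].containerInstanceArn' --output text)
EC2_ID=$(aws ecs describe-container-instances --cluster my-cluster --container-instances "$CI_ARN" --query 'containerInstances[0].ec2InstanceId' --output text)
# the IP to ssh into
aws ec2 describe-instances --instance-ids "$EC2_ID" --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text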
That works perfectly when doing ECS/EC2
How about when doing ECS/Fargate? Is it possible?
With Fargate you don't have access to the host machine at all
+1
+1 for Fargate
+1 for Fargate
+1 for Fargate
+1 for Fargate
+1 for Fargate
+1
+1 for Fargate
+1 for Fargate
For Fargate, has anyone had luck opening ssh access to the container? Yes I do believe that would require an image with sshd2 and a known key (not ideal!), and opening port 22.
+1 for a Fargate solution. Can't open port 22 and allow ssh. (company policy)
For AWS ECS using an EC2 cluster, we can access the container by SSHing to the EC2 instance. But how can I access the container in Fargate mode?
+1 for Fargate
+1 for a Fargate ssh access!
+1 for Fargate
++Fargate
+1 on Fargate i can't believe this feature is missing from the get-go. 😐
For Fargate, has anyone had luck opening ssh access to the container? Yes I do believe that would require an image with sshd2 and a known key (not ideal!), and opening port 22.
@enthal I have been able to do this in Fargate. The process is the same as with opening any other TCP port (Dockerfile, container settings, and security group).
@JamesRyanATX great. How did you manage keys in practice? Making it possible is not the same as making it secure (without making it cumbersome). Did you do anything other than bake the private key into the docker image? Thanks! :)
@enthal you just want to SSH into the container, right? If so, then your Docker image only needs the public key. Your private key is used in the handshake as normal.
+10 for Fargate though I don't even use Fargate
It would be awesome if we could do this from the SDK:
const instances = await ecs.listContainerInstances({ cluster }).promise();
const arn = instances.data.containerInstanceArns[0];
const { stdout, stderr } = await ecs.exec(arn, '/bin/ps', ['aux']).promise();
Something like that...
I think there's a general need to be able to run a command against all the running tasks in a service. It would be ideal to extend the service configuration to support this. There are times when all I want is for the running service tasks to refresh a configuration, for example. The most efficient way to do this now, which in my opinion is not reliable and is totally overkill, is to update the service. I say not reliable because updating the service does NOT reliably replace all running tasks; I've consistently gotten flaky results with this, to the point where I don't even bother with it anymore. I will first kill the tasks by hand, then update the service. Yeah, that needs to be fixed too; being able to confirm how long a task has been running would also be ideal.
+1 for Fargate
Bumping up into this issue as well. +1 for a good solution
+1
+1
+1
+1
It would be quite useful to SSH into an AWS ECS Fargate container, as we need to run DB commands manually instead of adding the command to the Dockerfile.
The only reason we are using EC2 container instances is that we need to ssh into the container instance and run docker commands.
+1 for ssh in Fargate.
+1 for ssh into Fargate. As a company running Rails there is a need for running an interactive rails console.
+1 for ssh in Fargate.
This isn't a 'nice to have' this is a necessity
+1... What everyone else said. Specifically the use case for running DB commands
ok so AWS re:Invent came and went and i find it heartbreaking that Amazon isn't listening to us mere mortals.
Amazon is pushing everyone to put everything on AWS lambda where it's painfully obvious that it's not a replacement for anything Docker/Kubernetes-related...and then having ECS/Fargate not in parity with ECS/EC2 solutions when it comes to obvious features, like having more than 4 vCPUs per docker container and then this ability to SSH into the container with ECS/Fargate. hey, never mind that using Fargate is _FAR MORE EXPENSIVE_ than equivalent EC2-based hosts...but nobody wants to talk about pricing AMIRITE?
as to a solution, i need to be able to SSH into some kind of "virtual host/server" that's in an ECS cluster (the "Fargate host" for lack of a better term), where i can also apply the Security Group or "assign-to-a-subnet-of-my-choice" magic to protect it (think Elastic Load Balancer security), and where i can run any docker command...i can do a docker ps and it will list all the docker processes/containers running on the Fargate cluster. so Fargate is just "one big docker host/server" in this case, with some limited/sandboxed SSH access where you can only run docker commands.
The lack of this feature for Fargate seems to be blocking Fargate for EKS as well.
That is, there is a project called virtual-kubelet that works as an adapter between Kubernetes and Fargate. But the only cloud provider that supports kubectl exec (what's requested in this feature request) for serverless containers (like Fargate) is ACI/Azure as of today.
https://github.com/virtual-kubelet/virtual-kubelet/issues/106
An addition to the AWS ECS API that allowed us to start interactive sessions to containers running on either EC2-based or Fargate-based capacity would probably help us all.
I don't fully understand this thread. The whole point of ECS/FARGATE is immutable deploy with unmanaged infrastructure. Having SSH on Fargate would be the worst feature Amazon could build. We might as well just go back to a simple EC2 with bash scripts for that.
As for docker exec, you're not supposed to interact with your container. You can define the CMD/ENTRYPOINT of your container and it will be executed on start-up, but other than that Fargate containers should be fully closed.
That's an amazing feature that my company uses to explain to big enterprises how nobody can manipulate the software we deploy. Not even the owner of the software.
I personally agree that usually it should be disabled for production environments.
But I'd still love to see a kind of docker-exec via AWS ECS API for ease of debugging things running inside containers in pre-production environments.
I’ve changed my view after using fargate for the last few months. I would not want ssh access for security reasons. Code execution should be done using a separate task with different builds using different ci/cd deployments. You can tag your ecr repo accordingly.
the reason for ssh access on fargate is only to be able to "ssh into" (spawn a shell using docker exec) the container itself...so you can check something like whether your container can connect properly to elasticache by using nc.
sure, this can be done without docker exec by just building multiple images to "troubleshoot", but isn't that tedious?
lack of ssh for "security reasons"? you're implying ecs with ec2 is less secure? that's a load of baloney. you can secure your ssh access by limiting it via network ACLs, SGs, using a bastion...among other things. having ssh access and securing ssh access are two different things.
Are you implying that monitoring / error handling tools are useless because you can just ssh + nc? Of course you're not implying that, but the same way you think my argument is 'baloney' on the grounds of ACL and SG, I think your argument is 'baloney' because you can use CloudWatch.
I'm not arguing that you cannot build safe containers with ECS EC2, but I am arguing that safety is not a concern for me anymore. It's like a door and a wall: you can make sure your door is properly secured and be responsible for that. In fact, your door can be as secure as a wall. It doesn't mean a wall is safer, it just means that with a wall nobody needs to think about safety.
If your container cannot connect to elasticache, check cloudwatch.
@deleugpn i did not imply that monitoring tools are useless (and find that a bit of a stretch), and that is not the focus of the argument. i stated a use case (checking connectivity to elasticache) for ssh into a container, perhaps a bad one.
ECS is built on Docker technology, and Docker allows you to spawn an interactive shell in your container; ECS Fargate does not allow you to do that, and i question that lack of a feature. AWS can say it's for security reasons, but i highly doubt that.
I'm sorry, your doors/walls analogy doesn't resonate with me here; going by your analogy, a door and a wall are the same thing. "it just means with a wall nobody needs to think about safety." if somebody wants to get into a room and all you have is walls, you can bet that person will start testing how strong your wall is.
I'm using Fargate with a JVM running inside.
When I see an OutOfMemory error, I usually create a heap dump and analyze it with MAT.
Now, if I don't have ssh access to the Fargate instance, how can I extract the heap dump?
The only thing I see here is to attach a volume to the container and store the heap dump there.
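If a volume is mounted (say at /dumps) and the JVM is PID 1 in the container, the dump itself is a one-liner; this is just a sketch of that idea, with the bucket name as a placeholder and the AWS CLI assumed to be present in the image:
# write the heap dump to the mounted volume, then ship it somewhere you can reach
jmap -dump:live,format=b,file=/dumps/heap-$(date +%s).hprof 1
aws s3 cp /dumps/ s3://my-heapdump-bucket/ --recursive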
@dalegaspi I agree with the sentiment, but there is a good case to be made for security concerns in this functionality. It could have anything to do with how AWS is managing the networking of containers on the backend (you aren't getting dedis), or whether, if I get into one Fargate container, I could stumble on a way to pivot into someone else's. It's managed compute, so you don't get to do everything you want, and yes, that's usually for security reasons.
I just don't think that should be a showstopper on providing the functionality. I think there could be ways to ensure isolation on the backend and expose an API that directs your request to a specific container ID. Even something like a "managed agent" similar to SSM where I can pass a command and then query the output (no interactive shells though).
An issue I had just now that would have made this helpful: I added an additional logging handler to get my logs into ES, and I didn't notice that this somehow disabled the awslogs driver. The ES logs stopped shipping for a separate reason and I couldn't see the logs in CloudWatch either (I realized I had to manually add a StreamHandler after my other custom handler to get this to work as intended).
+1 to basically being able to run a "ps" inside the container to know my application was actually alive, and a "cat" around a few other places.
Running migrations and other "one-time commands" before production can easily be handled using CodeDeploy or CodeBuild; as a matter of fact, some of them can even be put in a Lambda function or in the CMD of the Dockerfile.
An interactive shell is another story. I think I will keep some small instances inside the VPC I'm interested in. In some projects I have tasks running on EC2 and I use those for interactive shells, but for the ones that are fully on Fargate, having a separate EC2 instance only for scripting seems a little cumbersome. Having used ECS for a few months now, it seems very strange not to have that option; something using the awscli would be amazing and much desired. Maybe adding a flag next to the public IP setting in the configuration to enable running exec commands.
+1 for Fargate. db migrate...
You can set up a Task Definition with the CMD as your db migrate command and then simply start the Task. It will start a container, run your command and shut down. No need for docker exec for that.
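If you'd rather not maintain a dedicated task definition, a sketch of the same idea with a one-off command override on the existing app task definition (cluster, task definition, container name, subnet and security group are placeholders):
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-app \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-12345],assignPublicIp=DISABLED}' \
  --overrides '{"containerOverrides":[{"name":"app","command":["bundle","exec","rake","db:migrate"]}]}'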
My devs currently SSH into infrastructure and do things in our test environments (technically session manager). I've already had a false start trying to force them off shell access. I want the scalability advantages fargate provides but for certain workloads the inability to log in and tweak things is a huge time sink.
Fargate is not lambda, Containers are long running, they are stateful during their lifetime (and thus get into weird states). Startup devs lean heavily on tools like Rails console. Shell access is less secure, but there are well established ways of working with it and making it more secure.
I'm pretty sure Amazon is working on this... You guys who spin missing features as by design always get disappointed in the end.
Instances are cattle, not sheep. The process your tasks and service are meant to run either run, or do not, at which point ECS simply replaces the container. If you need another process to run, you just make a new task definition with a different command or entry point and you’re done. If you’ve set up the cluster and services correctly, there’s no good reason to have to ssh in to the instance, let alone the container. Anything you’d need to do that for would have been addressed during development. That’s the philosophy, anyway, and I’d bet heavily against AWS doing anything to change this. If you really insist on ssh access, then use EC2 clusters instead of Fargate.
I get it. It's still a waste of time to maintain my own library of things to run on my instances when the rails console already provides my devs with a comprehensive library of things they want to run. In some cases this makes it not worth the extra security.
Most use cases can be fulfilled with one-off containers. Just set a task to start, run the desired command (migrate DBs, load data, ...) and exit.
The problematic use case is running one interactive container for interactive console commands (i.e. queries). On other platforms such as Heroku this is already solved by the SDK. Does anybody know how to start an interactive container from the aws cli?
@jlmadurga terrible practice, but you can ssh into the EC2 instance that's part of your ECS cluster and run your docker commands from there.
@FernandoMiguel I am planning to use Fargate, which is why I want to run an interactive container for some rare app shell tasks. With Fargate I don't manage the infrastructure.
I do not want to add ssh to my images.
Heroku has it.
https://devcenter.heroku.com/articles/one-off-dynos#connecting-to-a-production-dyno-via-ssh
Session manager kind of obviates a lot of the security concerns presented here. A session-manager-like docker exec experience would be a troubleshooting dream, plus it's auditable using cloudtrail, uses IAM, logs sessions to cloudwatch... (PSA - if you're still using SSH and not session manager for accessing EC2 instances, you're missing out on the best new feature from AWS in a long time!)
I don't fully understand this thread. The whole point of ECS/FARGATE is immutable deploy with unmanaged infrastructure.
Yes, we all understand that! I don't want to change anything and I want those instances immutable; the question is how I should run commands (e.g. artisan from Laravel)!?
We didn't need fargate/ec2/docker to make software deployments immutable. Deploying to places with such little access that they're untroubleshootable has been possible even with conventional vms and deployments. It was just never implemented that way because it doesn't make any sense at all.
For those with bastion hosts running ecs on container instances
https://gist.github.com/softprops/3711c9fe54da673b1ebb53610aab4171
@softprops consider eliminating ssh in favor of session manager (and the session manager aws cli plugin):
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html
Yep. That's next on my list. I just recently discovered that. Good stuff.
Session manager can't run a command at session connection. Is there a workaround for that? I have other use cases where I wanted to do this but couldn't.
e.g. with ssh this works:
ssh $HOST uptime
but in session manager it gives an error:
aws ssm start-session --target $HOST uptime
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: uptime
Getting kind of off topic, but I think you're looking for systems manager documents. You define documents that run plugins. These documents can be applied to a list of instances, and the output can be retrieved. For some simple examples that use the predefined AWS-RunShellScript document, see https://docs.aws.amazon.com/systems-manager/latest/userguide/walkthrough-cli.html
If only these could be run against ECS (especially Fargate) services 😉
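For EC2-backed clusters, at least, a rough sketch of what that looks like today with Run Command (the instance ID and container name are placeholders):
CMD_ID=$(aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters 'commands=["docker ps","docker exec my-container ps aux"]' \
  --query 'Command.CommandId' --output text)
# fetch the output once the command has finished
aws ssm get-command-invocation \
  --command-id "$CMD_ID" \
  --instance-id i-0123456789abcdef0 \
  --query 'StandardOutputContent' --output text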
+1
I've managed to successfully use SSM to get shell access to containers in our Fargate based ECS environment.
In our case, we just needed a way to run a Django Shell on a container which is running in production, so we created a specific service definition which starts a container that includes Django and the amazon-ssm-agent. We can then start a SSM Session via the AWS console to access that container and run commands.
We did the following to get this working:
amazon-ssm-agent -register -code "activation-code" -id "activation-id" -region "region"
amazon-ssm-agent
Where the activation code and ID come from the file. The SSM agent runs as the core process of the container, though technically it could run in the background if other processes need to be run.
It's a bit of a hack, but it works!
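For anyone trying to reproduce this: the activation code and ID come from an SSM hybrid activation. A sketch of creating one (the role name, instance name and region are placeholders, and an SSM service role is assumed to already exist):
aws ssm create-activation \
  --default-instance-name fargate-django-shell \
  --iam-role SSMServiceRole \
  --registration-limit 1 \
  --region us-west-2
# returns an ActivationId and ActivationCode to pass to amazon-ssm-agent -register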
I'm also interested in this feature in order to be able to create a JVM thread dump on a running Fargate task. There does not seem to be a way to do that right now.
Yes you can catch things in dev and you can build and push additional containers to run one-off commands. The fact remains that even with those capabilities debugging anything on Fargate takes me 20x as long as a standard container that I can ssh into and view things. Production is not dev, maybe some people are lucky enough to have an exact replica of production to develop on, but I doubt that is true for the majority of people. +1 fargate ssh or something similar.
I have watched this issue for Fargate.
It is very inconvenient to debug problems that occur during the development process on Fargate.
@alex-mcleod 's workaround may be nice, but I hope that individual users will not be forced to hack just to connect to the container.
I would also be interested in such a solution. Not that I would like to use it myself, but there are people who still require occasional shell debug access to prod. I know I know it's not cool, but that doesn't change the fact. Anyone came up with a solution that doesn't create a lot of stale resources or a way to clean them up nicely?
+1
+100
I've been sshing into Fargate containers. I have an entrypoint script that, if it is running in a non-prod environment, installs ssh and adds the key. Then allow port 22 on the container and you can ssh directly to root. I also have this set up inside a VPC. I only use it for troubleshooting in dev or staging if something unexpected happens, never on prod.
@jz-wilson Any chance you could share a snippet of your entrypoint script?
I copy my key into the image in my Dockerfile:
### Allow SSH Access for debugging ###
#? If ENV is qa, this ssh key will be used to gain access to the container
COPY .docker/debug_key.pub /root/.ssh/
This is what I use in the entrypoint:
if [[ ${ENV,,} != 'prod' ]]; then
echo "Enabling Debugging..."
apt-get update >/dev/null && \
apt-get install -y vim openssh-server >/dev/null && \
mkdir -p /var/run/sshd /root/.ssh && \
cat /root/.ssh/debug_key.pub >> /root/.ssh/authorized_keys && \
sed -i 's/prohibit-password/yes/' /etc/ssh/sshd_config && \
chown -R root:root /root/.ssh;chmod -R 700 /root/.ssh && \
echo "StrictHostKeyChecking=no" >> /etc/ssh/ssh_config
fi
Then for your ECS container you want to add the Environment Variable ENV. It should then allow you to ssh into the container as long as you have the port open.
For SSH access, maybe it's better to use AWS Systems Manager.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ec2-run-command.html
@dtelaroli I think that is for going into unmanaged ECS hosts. My example was for Fargate. I should have clarified that.
+1
+1
I've managed to successfully use SSM to get shell access to containers in our Fargate based ECS environment.
From what I gather (someone correct me if I'm wrong), when one tries to connect to a container using SSM Session Manager, it prompts to force you into the advanced tier pricing, which effectively charges you about $5 a month per running container for the ability to treat it as a "managed on-premise instance" and thus have SSM session manager be able to connect to it.
https://aws.amazon.com/systems-manager/pricing/#Session_Manager
Maybe it will be useful for someone.
I had a request from the dev team to make it possible to ssh into the containers running in a Fargate service (the containers didn't have sshd) so they could see the application logs and change some config files. I managed to find a temporary solution by setting up an additional container with sshd in the same Fargate service and mounting the volumes with the logs and config files from the main container. So we can ssh into the additional container, see the logs, and make changes to the main container.
@kutzhanov That sounds like the right solution here. Could you be kind enough to share your container definition for "main" container and sshd container? Thank you
@kutzhanov This is definitely a hacky but working solution. It looks like at this point, we're not going to be able to run docker exec -it <containerID> /bin/sh on fargate.
@nathanpeck this is something we had a quick chat about on Twitter. I understand that the entire premise of the Fargate service is to be completely hands-off. But there will always be non-ideal situations where looking AT the code and running one-off tasks becomes critical. If I need to run a new container with a command every time I want this taken care of, it becomes more than cumbersome.
For me, my current poison is Rails. Being able to get into a running container and run RAILS_ENV=production bundle exec rails c is a very, very important ask.
It wouldn't be necessary to have full session support, but if the AWS API supported single-command execution inside containers, it would really help with diagnostics/troubleshooting and would make it easy to schedule executions of maintenance commands. This is especially true for Fargate. The biggest roadblock to Fargate adoption is the inability to mount volumes over the various file sharing protocols, but this one is a distant second.
@kutzhanov That sounds like the right solution here. Could you be kind enough to share your container definition for "main" container and sshd container? Thank you
@deuscapturus https://gist.github.com/kutzhanov/1169c77ca112dbedee624bcde21ae6d0
This ought to be possible without installing anything extra in the container, like an SSH server or AWS SSM stuff.
kubectl exec does exactly this for Kubernetes, and it in no way allows access to the host or other containers.
I copy my key into the image in my Dockerfile: [...] It should then allow you to ssh into the container as long as you have the port open.
Thanks for sharing, @jz-wilson! I've used a similar approach but loading the public key from AWS Parameter Store instead of a local file. Sample code is available here just in case anyone wants to see how it works.
I've managed to successfully use SSM to get shell access to containers in our Fargate based ECS environment.
It's a bit of a hack, but it works!
OK, I managed to get my container to register to SSM, but it won't initiate a session. When I select the container and click "Start Session" it takes me to the Sessions panel, with nothing there, no history, nothing.
Logs indicate these entries:
2020-03-27 17:28:45 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
2020-03-27 17:28:15 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
2020-03-27 17:28:04 INFO [MessageGatewayService] [EngineProcessor] Initial processing
2020-03-27 17:28:04 INFO [MessageGatewayService] Starting receiving message from control channel
2020-03-27 17:28:04 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/mi-0f248b0367ce65dd3?role=subscribe&stream=input
2020-03-27 17:28:04 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/mi-0f248b0367ce65dd3?role=subscribe&stream=input
2020-03-27 17:28:04 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
2020-03-27 17:28:04 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
2020-03-27 17:28:04 INFO [LongRunningPluginsManager] starting long running plugin manager
2020-03-27 17:28:04 INFO [MessageGatewayService] listening reply.
2020-03-27 17:28:04 INFO [OfflineService] Starting send replies to MDS
2020-03-27 17:28:04 INFO [OfflineService] Starting message polling
2020-03-27 17:28:04 INFO [OfflineService] [EngineProcessor] Initial processing
2020-03-27 17:28:04 INFO [OfflineService] [EngineProcessor] Starting
2020-03-27 17:28:04 INFO [OfflineService] Starting document processing engine...
2020-03-27 17:28:04 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: mi-<id>, requestId: <request>
2020-03-27 17:28:04 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
2020-03-27 17:28:04 INFO [MessageGatewayService] [EngineProcessor] Starting
2020-03-27 17:28:04 INFO [MessageGatewayService] Starting session document processing engine...
What did I miss? At least I'm this far, and more importantly, the container is also running my application (something I struggled to get working using the "SSH" method).
Got it working (it was a port/permissions thing, it seems). On @alex-mcleod 's thoughts about the billing for managed instances, I put together this simple script to scan for MIs that are disconnected (I'm only using managed instances for containers, so if the container is disconnected, it has been removed by ECS). This way I can run it in a cron to keep from paying for managing "instances" that I can never access again.
https://github.com/TechnoRoss/fargate-ssm-remove/tree/master
I'm sure it could be more robust and elegant, but I try to keep my scripts dirt-simple. The worst thing that can happen is we remove a running container from the MI list, meaning we'll never be able to log in to that container again, and I can live with that, since replacing them is the whole point of simplicity in the container world.
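For anyone who doesn't want to click through, a sketch of the same idea using the SSM CLI directly (this is an illustration, not the linked script itself; note it would also deregister any genuinely offline on-prem managed instances, so only use it if containers are your only MIs):
# deregister managed instances that have lost their connection (i.e. containers ECS has replaced)
for MI in $(aws ssm describe-instance-information \
    --filters "Key=PingStatus,Values=ConnectionLost" \
    --query 'InstanceInformationList[].InstanceId' --output text); do
  echo "Deregistering $MI"
  aws ssm deregister-managed-instance --instance-id "$MI"
done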
+1 for fargate docker exec command to look inside the container
We're running Java on Fargate and would like the ability to create a heap dump in Production whenever necessary.
+1 for AWS API support for docker exec
+1 - We are investigating the use of Fargate for our production stack, and this would be be part of the deciding factor
+1
To build on @alex-mcleod 's solution a little bit for our own needs, I put together this script that is included in our standard Rails app server Docker image. The image itself is built with both aws-cli and amazon-ssm-agent installed. We then keep a persistent Fargate service running based on the built image that invokes this script on startup for interactive sessions (mainly for running rails console). Since we only ever have one interactive container running per environment, when the script is run, it deregisters any previous instances and deletes the previous activation corresponding to that environment (relying on the ENVIRONMENT envvar to be set identifying the environment) before creating a new activation and registering the instance to that activation. This avoids any necessary manual cleanup of managed instances.
Then we have this policy added to our ECS task role:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ssm:AddTagsToResource",
"ssm:CreateActivation",
"ssm:DescribeActivations",
"ssm:DescribeInstanceInformation",
"ssm:DeleteActivation",
"ssm:DeregisterManagedInstance"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::1234567890:role/SSMServiceRole",
"Effect": "Allow"
}
]
}
Finally, when a developer wants to connect to an instance to run an interactive command, we have another script that reads in the desired environment to connect to, uses the same pattern of aws ssm describe-activations to filter for the activation and aws ssm describe-instance-information to filter for the instance-id, and then connects to it with aws ssm start-session --target $INSTANCE_ID. You also need to make sure that you switch to the advanced-instances tier for Systems Manager to be able to connect to these Fargate managed instances.
This meets all of our needs until official support is added for some kind of interactive session with Fargate containers. Hope this is useful to someone else!
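For reference, a sketch of what such a connect script could look like; the ENVIRONMENT-to-activation naming convention here is an assumption based on the description above, not the author's actual script:
#!/bin/bash
set -euo pipefail
ENVIRONMENT="$1"
# find the activation registered for this environment
ACTIVATION_ID=$(aws ssm describe-activations \
  --filters "FilterKey=DefaultInstanceName,FilterValues=$ENVIRONMENT" \
  --query 'ActivationList[0].ActivationId' --output text)
# find the managed instance registered against that activation
INSTANCE_ID=$(aws ssm describe-instance-information \
  --filters "Key=ActivationIds,Values=$ACTIVATION_ID" \
  --query 'InstanceInformationList[0].InstanceId' --output text)
aws ssm start-session --target "$INSTANCE_ID"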
Based on the discussion here, we added sshd to our containers behind an SSH_ENABLED flag, so sshd is off by default.
We also added a dedicated one-off Fargate cluster/task, giving us on-demand SSH.
# Dockerfile
FROM alpine:latest
RUN apk update && apk add --no-cache openssh
COPY sshd_config /etc/ssh/sshd_config
RUN mkdir -p /root/.ssh/
# Copy the public keys in, then concatenate them into authorized_keys
COPY authorized-keys/ /root/.ssh/authorized-keys/
RUN cat /root/.ssh/authorized-keys/*.pub > /root/.ssh/authorized_keys
RUN chown -R root:root /root/.ssh && chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN ln -s /usr/local/bin/docker-entrypoint.sh /
# We have to set a password to be let in for root - MAKE THIS STRONG.
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
# docker-entrypoint.sh
#!/bin/sh
if [ "$SSH_ENABLED" = true ]; then
if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
# generate fresh rsa key
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
fi
if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
# generate fresh dsa key
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
fi
#prepare run dir
if [ ! -d "/var/run/sshd" ]; then
mkdir -p /var/run/sshd
fi
/usr/sbin/sshd
env | grep '_\|PATH' | awk '{print "export " $0}' >> /root/.profile
fi
exec "$@"
More details: https://github.com/jenfi-eng/sshd-docker
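Usage-wise, starting the on-demand task with SSH enabled looks roughly like this (cluster, task definition, container name and networking values are placeholders):
aws ecs run-task \
  --cluster one-off \
  --launch-type FARGATE \
  --task-definition sshd-one-off \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-allow-22],assignPublicIp=ENABLED}' \
  --overrides '{"containerOverrides":[{"name":"sshd","environment":[{"name":"SSH_ENABLED","value":"true"}]}]}'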
We are using something different atm: we run the real services on Fargate with docker, and then have a single micro EC2 instance running the same docker image but not serving any traffic. With this we can use Session Manager to connect to the EC2 instance and run docker exec from within it. This simplifies the process, as we don't have to create a separate docker image with sshd, or have sshd running within the container.
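In practice that flow is roughly the following (the instance ID and container ID are placeholders):
aws ssm start-session --target i-0123456789abcdef0
# then, inside the session on the EC2 instance:
docker ps
docker exec -it <container-id> sh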
I did something similar: I wrote a small script that lists available task definitions, lets the user select one, pulls that task definition down, and then uses ecs-cli local create to run up that container locally using docker-compose, which pulls the image referred to in the task definition from ECR. Relevant excerpt of that script below:
DC_YAML="docker-compose.ecs-local.yml"
DC_OV_YAML="docker-compose.ecs-local.override.yml"
# Create local compose file from live task definition
ecs-cli local create --task-def-remote $TASK_DEFINITION --output $DC_YAML --force
eval $(aws ecr get-login --no-include-email)
ecs-cli local up
The BEST BIT about this is it sets all the environment variables and injects secrets from Secrets Manager for you.
+1 for easy SSM Session Manager setup into ECS (without image creation, maybe with a side-car/multi-container approach ?)
https://www.docker.com/blog/from-docker-straight-to-aws/
Haven't tried it yet, maybe we will be able to exec with this new CLI.
Spent some time looking into this today, since as mentioned it's a feature commonly found as bundle exec (Rails context), docker exec, consul exec, kubectl exec, cf run-task (Cloud Foundry), triton-docker exec (Joyent Cloud).
ufo tool for Rails has provided this task-in-cluster-context functionality on AWS since at least 2017.
I collected some related attempts below. A common downside to many of the examples is that in the case of a failed task, manual intervention is required.
For now, I like the solution in run-fargate-task package for Pulumi. It uses the AWS SDK to wait for taskStopped event, and handles many error cases. It's probably the closest solution to ufo task and could be factored out for use with any platform.
Related:
If you were looking for any other prior art: convox provides one-off commands that let you launch a new container or exec into an existing one to run commands.
@sramabad1 @SaloniSonpal do you already have a timeline for this? We have a project that we'd like to migrate to ECS, however having the ability to execute one-off commands is something that the team is currently depending on. Thanks!