I am effectively following these instructions to launch an ECS container instance. I am creating an EC2 instance from the ECS-optimized AMI, with its config pointing at the ECS cluster I want to use and with the ECS IAM role attached:
resource "aws_ecs_cluster" "ingest" {
name = "ingest"
}
resource "aws_iam_role" "ecs_ingest" {
name = "ecs_ingest"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "ecs_ingest" {
name = "ecs_instance_role"
role = "${aws_iam_role.ecs_ingest.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecs:StartTask"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_instance_profile" "ingest" {
name = "ingest_profile"
roles = ["${aws_iam_role.ecs_ingest.name}"]
}
resource "aws_instance" "ingest" {
# ECS-optimized AMI for us-east-1
ami = "ami-33b48a59"
instance_type = "t2.micro"
user_data = <<EOF
#!/bin/bash
echo ECS_CLUSTER=${aws_ecs_cluster.ingest.name} >> /etc/ecs/ecs.config
EOF
iam_instance_profile = "${aws_iam_instance_profile.ingest.name}"
}
This creates the role, the cluster, and the EC2 instance, and it assigns the correct role to the EC2 instance as well, but when I view my clusters in the console there are no registered container instances. When I select the cluster and the "ECS Instances" tab, it tells me to add ECS instances using Auto Scaling or EC2, but the EC2 instance I created has not registered.
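A quick way to confirm this outside the console, assuming the AWS CLI is configured for the same account and region:

```bash
# Shows registeredContainerInstancesCount for the cluster (0 in this case)
aws ecs describe-clusters --clusters ingest

# Lists container instance ARNs registered to the cluster; empty output confirms the problem
aws ecs list-container-instances --cluster ingest
```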
Did you check that ECS_CLUSTER is correctly set on the instance? You can also try not specifying the cluster and see if it gets added to the default cluster.
@yissachar not sure exactly how to check that, but when I look at the instance and its "User Data" it is the same as what I put in above, which has `echo ECS_CLUSTER=ingest >> /etc/ecs/ecs.config`, so at least that is set properly. I have also tried not setting it at all, but the instance doesn't get registered to the default cluster either ... though the default cluster doesn't exist, so I'm not sure whether it's expected to be created automatically or not.
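One way to verify this directly, assuming you can SSH into the instance, is to check what the user-data script actually wrote and what the instance metadata returns:

```bash
# The user-data script appends to this file; it should contain ECS_CLUSTER=ingest
cat /etc/ecs/ecs.config

# The raw user data that was executed at boot, from the instance metadata service
curl -s http://169.254.169.254/latest/user-data
```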
@ajcrites Have you checked whether the ECS agent is running, via `docker ps` on the instance where it should be running? If it is not, have you checked `docker ps -a` to see whether there was an attempt to start the service and, if so, whether it failed and why?
Have you tried pulling down any logs via `docker logs <CONTAINER_ID>`?
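On an ECS-optimized AMI those checks look roughly like this (`<CONTAINER_ID>` is whatever `docker ps -a` reports for the agent container):

```bash
# Is the ECS agent container running?
docker ps

# Include stopped containers to see whether the agent started and then exited
docker ps -a

# Inspect the agent's logs for registration errors
docker logs <CONTAINER_ID>
```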
@ajcrites If I'm not mistaken, your `aws_iam_role.ecs_ingest` has the wrong service specified. You have it set as `ecs.amazonaws.com` when it should be `ec2.amazonaws.com`.
@yissachar seems like switching `ecs.amazonaws.com` to `ec2.amazonaws.com` was all I needed to do ... This wasn't a typo; I just wasn't aware of the right Service to use. Do you know how you would find that out?
It's documented here.
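For completeness, here is the role from the question with only the trust policy's service principal changed, which is the fix that worked above. The EC2 instance is what assumes this role, so the trust policy has to name the EC2 service:

```hcl
resource "aws_iam_role" "ecs_ingest" {
  name = "ecs_ingest"

  # The container instance (an EC2 instance) assumes this role, so the
  # principal must be the EC2 service, not ecs.amazonaws.com.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
```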
For me the fix was using the correct AMI for the region. Found the list here:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
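If you'd rather not hardcode the region-specific AMI ID, one sketch (assuming a Terraform AWS provider recent enough to have the `aws_ssm_parameter` data source) is to look it up from the public SSM parameter AWS publishes for the ECS-optimized AMI:

```hcl
# AWS publishes the current ECS-optimized Amazon Linux 2 AMI ID per region
data "aws_ssm_parameter" "ecs_ami" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

resource "aws_instance" "ingest" {
  ami           = data.aws_ssm_parameter.ecs_ami.value
  instance_type = "t2.micro"
  # ... rest of the configuration from the question ...
}
```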
I, on the other hand, forgot to add egress rules to the security group.
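A hypothetical sketch of the kind of rule that was missing (resource names made up): the agent needs outbound access to reach the ECS and ECR endpoints, so the instance's security group needs egress, shown here as an allow-all rule.

```hcl
resource "aws_security_group" "ingest" {
  name = "ingest"

  # Allow all outbound traffic so the ECS agent can reach the ECS/ECR APIs
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```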
Mine too, thanks for the update!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.