Hi
Sorry to repost my own issue, but the bot moved it so I can no longer close it. I really need your help. If I can't get a fix for this problem I will have to resort to CloudFormation, which I'd rather avoid if I can help it. I am under big time constraints.
Terraform v0.10.2
I am expecting the containers to register with the target group on a random port, but instead they are registering on random ports as well as on the container port. Those registering on the container port obviously fail. Can anyone see what I have done wrong in the configuration? When I create an ECS cluster manually in the console, the containers only register on random ports, as expected. Please can you help? Please find an image below as well as the Terraform files.
Image
https://www.dropbox.com/s/y5x0t9md28ra36z/alb.tiff?dl=0
Terraform files are at link below
https://www.dropbox.com/sh/c3597cczwcy9cwy/AAB2lC8h3J7w9fvzrr4jK2qCa?dl=0
Is the networking mode of the task definition "bridge" rather than "host"?
And is the port mapping of the container in the task definition (host:container) 0:<your expected container port>?
@ShinobiSlayer What does your task definition look like? Please always post relevant config snippets (with sensitive info removed, and if you are paranoid, change names too).
I have this in my task definition, and it is working as expected:
...
"portMappings": [
{
"hostPort": 0,
"containerPort": 80,
"protocol": "tcp"
}
],
...
PS: And please close https://github.com/terraform-providers/terraform-provider-aws/issues/1478 if you can.
Hi
My task definition is in the files at the link below:
https://www.dropbox.com/sh/c3597cczwcy9cwy/AAB2lC8h3J7w9fvzrr4jK2qCa?dl=0
resource "aws_ecs_task_definition" "ui-task-definition" {
  family                = "ui"
  container_definitions = <<DEFINITION
[
  {
    "name": "${var.name}-ui",
    "image": "httpd:2.2",
    "cpu": 10,
    "memory": 512,
    "links": [],
    "portMappings": [
      {
        "hostPort": 0,
        "containerPort": 80,
        "protocol": "tcp"
      }
    ],
    "essential": true,
    "entryPoint": [],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${var.name}-log-group",
        "awslogs-region": "${var.log_group_region}",
        "awslogs-stream-prefix": "ui"
      }
    },
    "command": [],
    "environment": [],
    "mountPoints": [],
    "volumesFrom": []
  }
]
DEFINITION
}
I think I encountered something like this, but ended up completely destroying the ALB and creating a new one.
Was this created with the random port from the start, or with a fixed port and then modified to use a random one?
What you could try is to taint and recreate the target group.
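For reference, tainting the target group would look something like the following. The resource address aws_alb_target_group.ui is a placeholder; substitute whatever your target group resource is actually named.

```shell
# Mark the target group resource for destruction and recreation on the next apply.
# "aws_alb_target_group.ui" is a hypothetical address -- use your own resource name.
terraform taint aws_alb_target_group.ui

# Review the plan (the target group should show as forced new), then apply.
terraform plan
terraform apply
```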
It was created with the random port, all in one deployment.
Thanks for your help ;0)
Hi
I am still getting the same problem. If you have any ideas they would be greatly appreciated.
Many thanks
David
Hi
I found the problem. It is this single line in the aws_autoscaling_group:
target_group_arns = ["${var.ui_target_group_arn}"]
If you remove it, the problem stops. I came to this conclusion by comparing an AWS-built cluster to the cluster my scripts were making. I noticed that the AWS cluster's auto scaling group had no reference to the target group. It seems that the aws_ecs_service is the thing that should be managing this, not the aws_autoscaling_group.
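To sketch the fix: the ASG should have no target_group_arns, and the ECS service should carry the load_balancer block that attaches tasks (on their dynamic host ports) to the target group. Resource and variable names below are hypothetical, adapted loosely from the thread; only the overall shape matters.

```hcl
# No target_group_arns here -- the ECS service handles target registration.
resource "aws_autoscaling_group" "ecs" {
  name                 = "${var.name}-asg"
  launch_configuration = "${aws_launch_configuration.ecs.name}"
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = ["${var.subnet_ids}"]
}

# The ECS service registers each task's random host port with the target group.
resource "aws_ecs_service" "ui" {
  name            = "${var.name}-ui"
  cluster         = "${var.cluster_id}"
  task_definition = "${aws_ecs_task_definition.ui-task-definition.arn}"
  desired_count   = 2
  iam_role        = "${var.ecs_service_role_arn}"

  load_balancer {
    target_group_arn = "${var.ui_target_group_arn}"
    container_name   = "${var.name}-ui"
    container_port   = 80
  }
}
```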
Thanks for everyone's help
Confirming this is indeed an issue (nothing to fix, but good to know for future internet searchers).
I ran into it when switching from a classic ELB to an ALB, thinking the ELB attachment on the autoscaling group would translate directly into a target group attachment.
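To spell out the trap for other searchers: the classic-ELB pattern attaches at the ASG level, but with an ALB target group plus ECS dynamic ports, an ASG-level attachment also registers the instances on the target group's default port, producing the failing extra targets seen above. A sketch with hypothetical resource names:

```hcl
# Classic ELB: ASG-level attachment is the normal pattern, since the ELB
# forwards to a fixed instance port.
resource "aws_autoscaling_group" "classic" {
  name                 = "classic-elb-asg"
  launch_configuration = "${aws_launch_configuration.ecs.name}"
  min_size             = 1
  max_size             = 3
  load_balancers       = ["${aws_elb.ui.name}"]
}

# ALB + ECS dynamic ports: leave target_group_arns off entirely and let the
# ECS service's load_balancer block do the registration.
resource "aws_autoscaling_group" "alb" {
  name                 = "alb-asg"
  launch_configuration = "${aws_launch_configuration.ecs.name}"
  min_size             = 1
  max_size             = 3
  # no target_group_arns here
}
```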
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!