I have the following service and task defined:
resource "aws_ecs_task_definition" "web" {
family = "web"
container_definitions = <<EOF
[{
"name": "web",
"image": "some/registry/web:1",
"memory": 800,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
]
}]
EOF
}
resource "aws_ecs_service" "web" {
name = "web"
task_definition = "${aws_ecs_task_definition.web.arn}"
desired_count = 1
}
I now want to update my service to serve a new version of my container, so I update the tag on my image to 2. I expect this to update the task definition and the service at the same time, but in reality a new revision of the task definition is created while the service is not updated to point at it. To get the service onto the new revision I have to actually change the resource name of the task definition each time I update it, so that it is treated as a delete and recreate. That creates a new revision, and the service then points at the new version, kicking off the ECS rolling deploy process.
So in the above example in order to deploy a new version I would do something like this:
resource "aws_ecs_task_definition" "web_2" {
family = "web"
container_definitions = <<EOF
[{
"name": "web",
"image": "some/registry/web:2",
"memory": 800,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
]
}]
EOF
}
resource "aws_ecs_service" "web" {
name = "web"
task_definition = "${aws_ecs_task_definition.web_2.arn}"
desired_count = 1
}
I'm not sure if this is a bug or intended behaviour, but either way it doesn't seem like a good way to handle new task deployments.
Hi,
what version of Terraform are you using? If it's older than the latest, have you tried upgrading to the latest (0.6.8 atm)?
There have been some bug fixes around this recently, e.g. https://github.com/hashicorp/terraform/pull/3924
Ah excellent, upgrading fixed this. Works great now! Sorry about that - should have made sure I was on latest first.
@radeksimko Hey, I am running into an issue while updating the ECS service using Terraform.
When I release a new version of the task through Terraform, the service is updated with the new task definition revision, but when I view the service it shows the currently running task as [INACTIVE]: the service points at the new task definition, but it does not roll the running tasks over to the new revision. I'm wondering why it behaves like this, because when we update a service through the UI it rolls the running tasks to the new revision one by one.
Any idea why it is behaving like this? Is the main issue the "kicking off the ECS rolling deploy process" point mentioned above?
I am currently seeing this issue on v0.9.11
When updating the task definition, a new revision is created and the service lists the tasks as [INACTIVE].
I need to run terraform again for the service to pick up the new task definition revision and update the service.
I think this is because the revision number is a computed value: when Terraform runs it does not know that the revision has changed until it has already applied it, by which point it's too late to update the service in the same run?
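One workaround that seems to avoid the extra run (a minimal sketch against the `web` resources from the original post; I haven't verified it on every version mentioned in this thread) is to reference the task definition as `family:revision` rather than by ARN, so the service's `task_definition` attribute is tied directly to the computed revision:
resource "aws_ecs_service" "web" {
  name            = "web"
  desired_count   = 1

  # revision is computed, so a new revision shows up as a change on the service in the same plan
  task_definition = "${aws_ecs_task_definition.web.family}:${aws_ecs_task_definition.web.revision}"
}
Because `revision` is only known after apply, the service's `task_definition` is marked as changing in the same plan that creates the new revision, so the service update should land in the same apply.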
@jonnyshaw89 I have the same problem. It makes a "rolling deployment" via the service almost impossible. A workaround is to include the version in the service name so it gets destroyed and recreated (which causes downtime), as sketched below.
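A rough sketch of that naming workaround (the `release` variable is just an illustrative name, not something from this thread):
variable "release" {}

resource "aws_ecs_service" "web" {
  # name forces a new resource, so bumping var.release destroys and recreates the service
  name            = "web-${var.release}"
  task_definition = "${aws_ecs_task_definition.web.arn}"
  desired_count   = 1
}
Since `name` forces replacement of the service, every release tears the old service down before the new one comes up, which is where the downtime comes from.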
I also have the same problem with v0.10.5. Will there be any improvement on this? I currently write the revision number into the task definition JSON file and update the number every time before deploying new code.
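A variable-driven variant of that idea, sketched against the example at the top of this issue (the `image_tag` variable is my own name for it), is to interpolate the tag into the container definitions so every new tag forces a new revision:
variable "image_tag" {}

resource "aws_ecs_task_definition" "web" {
  family                = "web"

  # changing var.image_tag changes the container definitions, which creates a new revision
  container_definitions = <<EOF
[{
  "name": "web",
  "image": "some/registry/web:${var.image_tag}",
  "memory": 800,
  "essential": true
}]
EOF
}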
I was able to solve the inactive task definition issue with the example in the ECS task definition data source. You set up the ECS service resource to use the max revision of either what your Terraform resource has created or what is currently active in AWS, which the data source retrieves.
The one downside to this is that if someone changes the task definition outside of Terraform, Terraform will not realign it to what's defined in code.
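For reference, the pattern I mean looks roughly like this (a sketch adapted from that data source example, reusing the resource names from this thread):
data "aws_ecs_task_definition" "web" {
  # looks up the latest active revision of the family in AWS
  task_definition = "${aws_ecs_task_definition.web.family}"
}

resource "aws_ecs_service" "web" {
  name            = "web"
  desired_count   = 1

  # use whichever revision is newer: the one Terraform just created or the one already active in AWS
  task_definition = "${aws_ecs_task_definition.web.family}:${max("${aws_ecs_task_definition.web.revision}", "${data.aws_ecs_task_definition.web.revision}")}"
}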
@dmikalova I might be doing something wrong, but it seems like the example you suggested only works with existing services/task definitions. You can't use that particular syntax with a new service, since no task definition exists yet.
Just throwing this out there since it was my issue... Make sure you don't have
lifecycle {
  ignore_changes = ["task_definition"]
}
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.