Terraform-provider-aws: Double apply required to pick up changes to aws_ecs_task_definition in aws_ecs_service

Created on 5 Dec 2017 · 5 Comments · Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @abferm as hashicorp/terraform#16818. It was migrated here as a result of the provider split. The original body of the issue is below._


I have several ECS services I'm trying to manage with Terraform, and I've come across an issue where I have to run terraform apply twice: once to update the task definition, and again to update the service so it picks up the new revision. This seems similar to hashicorp/terraform#11253, but not identical, since the service does update on the second run. A portion of my code is below, along with a hack of a fix. I've also tried using the aws_ecs_task_definition data source, but the problem there is that at plan time it reports the same revision that is already in the Terraform state; a sketch of that data source approach is included after the hackish fix below.

What I don't understand is why the null_resource's triggers pick up the change but aws_ecs_service does not.

Terraform Version

Terraform v0.11.0
+ provider.aws v1.5.0
+ provider.null v1.0.0

Terraform Configuration Files

Trouble Code

resource "aws_ecs_task_definition" "service" {
  family        = "${var.name}"
  task_role_arn = "${var.role_arn}"

  volume = "${var.volumes}"

  container_definitions = <<DEFINITION
[
  {
    "cpu": 128,
    "essential": true,
    "image": "${var.image}:${var.tag}",
    "environment": ${var.env_vars},
    "memory": 128,
    "memoryReservation": 64,
    "name": "${var.name}",
    "portMappings": [
      {
        "containerPort": ${var.container_port}
      }
    ],
    "mountPoints": ${jsonencode(var.mount_points)}
  }
]
DEFINITION
}

resource "aws_ecs_service" "service" {
  name    = "${var.name}"
  cluster = "${var.cluster_name}"

  # FIXME:
  # This currently requires a double apply ... this is because when the task
  # definition is updated, the old revision is latched when the service is
  # checked for updates.
  task_definition = "${aws_ecs_task_definition.service.family}:${aws_ecs_task_definition.service.revision}"

  desired_count = 3

  # Set minimum healthy percent to 50 to allow for rolling updates
  deployment_minimum_healthy_percent = 50

  load_balancer {
    target_group_arn = "${var.target_group_arn}"
    container_name   = "${var.name}"
    container_port   = "${var.container_port}"
  }

  lifecycle {
    ignore_changes = ["iam_role"]
  }
}

Hackish Fix

Ideally the above code would be enough, but the following is required to update the revision in the service.

resource "null_resource" "task_revision_update" {
  triggers {
    family   = "${aws_ecs_task_definition.service.family}"
    revision = "${aws_ecs_task_definition.service.revision}"
  }

  # if the family or revision are changed, force the service to pick up the new version
  provisioner "local-exec" {
    command = "aws ecs update-service --cluster ${var.cluster_name} --service ${var.name} --task-definition ${aws_ecs_task_definition.service.family}:${aws_ecs_task_definition.service.revision}"
  }

  depends_on = ["aws_ecs_service.service"]
}
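
Regarding the aws_ecs_task_definition data source mentioned above: a workaround that circulated in the community combines it with the max() interpolation function, so the service always references whichever revision is higher, the one Terraform manages in state or the one the data source reads from AWS at plan time. This is only a sketch against the configuration above, and it assumes at least one revision of the family is already registered, since the data source errors otherwise.

data "aws_ecs_task_definition" "service" {
  # Look up the latest registered revision of the same family at plan time.
  task_definition = "${aws_ecs_task_definition.service.family}"
}

resource "aws_ecs_service" "service" {
  # ...other arguments as in the trouble code above...

  # Workaround sketch: pick the higher of the revision Terraform manages in
  # state and the revision the data source read from AWS.
  task_definition = "${aws_ecs_task_definition.service.family}:${max(aws_ecs_task_definition.service.revision, data.aws_ecs_task_definition.service.revision)}"
}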
Labels: bug, service/ecs, stale

All 5 comments

Have you tried with

  task_definition = "${aws_ecs_task_definition.service.arn}"

Also, what shows up in the events tab for the service?
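
Concretely, that suggestion applied to the configuration from the report would look roughly like the sketch below; the arn attribute of the aws_ecs_task_definition resource resolves to the family:revision form, so no separate interpolation of family and revision is needed.

resource "aws_ecs_service" "service" {
  # ...other arguments unchanged from the trouble code above...

  # Reference the task definition by its full ARN, which carries the revision,
  # instead of interpolating family and revision separately.
  task_definition = "${aws_ecs_task_definition.service.arn}"
}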

Pretty sure that the above suggestion would fix this issue, so it's not really a bug.

Using the arn for the task_definition on an aws_ecs_service instead of the family/revision does not resolve this issue. I am currently experiencing this exact same thing on Terraform v0.11.7 with provider.aws v1.31.0. A double apply is required to get Terraform to update the task definition revision on aws_ecs_service.

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
