Terraform-provider-aws: Migrating to ordered_placement_strategy forces a new resource

Created on 16 May 2018 · 11 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

$ terraform -v
Terraform v0.11.7
+ provider.aws v1.18.0
+ provider.template v1.0.0

Affected Resource(s)

  • aws_ecs_service

Terraform Configuration Files

resource "aws_ecs_service" "service" {
  name = "${var.service_name}"

  cluster         = "${var.cluster_id}"
  task_definition = "${coalesce(var.task_def, var.service_name)}"
  desired_count   = "${var.service_desired_count}"

  iam_role = "${var.service_iam_role_name}"

  deployment_maximum_percent         = "${var.deployment_maximum_percent}"
  deployment_minimum_healthy_percent = "${var.deployment_minimum_healthy_percent}"

  # Once set up, the task definition is updated via CircleCI.
  lifecycle {
    ignore_changes = ["task_definition"]
  }

  depends_on = [
    "aws_alb_listener.service",
  ]

  ordered_placement_strategy {
    field = "attribute:ecs.availability-zone"
    type  = "spread"
  }

  ordered_placement_strategy {
    field = "instanceId"
    type  = "spread"
  }

  load_balancer {
    target_group_arn = "${aws_alb_target_group.service.id}"
    container_name   = "${var.service_name}"
    container_port   = "${var.service_port}"
  }
}

Debug Output

Panic Output

Expected Behavior

Changing placement_strategy to ordered_placement_strategy would cause no diff in plan.

Actual Behavior

Every service we have using this module needs to be recreated.

Steps to Reproduce

  1. Create a service with placement_strategy
  2. terraform apply
  3. Modify service by changing placement_strategy to ordered_placement_strategy
  4. terraform plan
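For reference, the change in step 3 is just a rename of the configuration block; the fields inside it stay the same. A minimal sketch (other arguments elided):

```hcl
# Before: deprecated block name.
resource "aws_ecs_service" "service" {
  # ... other arguments unchanged ...

  placement_strategy {
    field = "attribute:ecs.availability-zone"
    type  = "spread"
  }
}

# After: renamed block, identical contents.
resource "aws_ecs_service" "service" {
  # ... other arguments unchanged ...

  ordered_placement_strategy {
    field = "attribute:ecs.availability-zone"
    type  = "spread"
  }
}
```

Despite the configuration being semantically identical, the plan below shows the provider treating the rename as a destroy/create.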

Important Factoids

References

Labels: bug, service/ecs


All 11 comments

Hi @icco 👋 sorry you are running into trouble here, and thanks for reporting it. Can you share the terraform plan output? Is it showing something like the following?

ordered_placement_strategy.#: "0" => "1" (forces new resource)
placement_strategy.#:         "1" => "0" (forces new resource)

We might need to customize the difference when converting.

I'll get the output when I'm at work tomorrow, but yes, that's what's happening.

An example service that would be affected:

-/+ module.yahtzee.aws_ecs_service.service (new resource required)
      id:                                        "arn:aws:ecs:us-east-1:...:service/yahtzee" => <computed> (forces new resource)
      cluster:                                   "arn:aws:ecs:us-east-1:...:cluster/platform-unified-cluster" => "arn:aws:ecs:us-east-1:...:cluster/platform-unified-cluster"
      deployment_maximum_percent:                "200" => "200"
      deployment_minimum_healthy_percent:        "50" => "50"
      desired_count:                             "1" => "1"
      iam_role:                                  "ecs_service" => "ecs_service"
      launch_type:                               "EC2" => "EC2"
      load_balancer.#:                           "1" => "1"
      load_balancer.214228493.container_name:    "yahtzee" => "yahtzee"
      load_balancer.214228493.container_port:    "8080" => "8080"
      load_balancer.214228493.elb_name:          "" => ""
      load_balancer.214228493.target_group_arn:  "arn:aws:elasticloadbalancing:us-east-1:...:targetgroup/yahtzee/053b1f67a2e1b589" => "arn:aws:elasticloadbalancing:us-east-1:...:targetgroup/yahtzee/053b1f67a2e1b589"
      name:                                      "yahtzee" => "yahtzee"
      ordered_placement_strategy.#:              "" => "2" (forces new resource)
      ordered_placement_strategy.0.field:        "" => "attribute:ecs.availability-zone" (forces new resource)
      ordered_placement_strategy.0.type:         "" => "spread" (forces new resource)
      ordered_placement_strategy.1.field:        "" => "instanceId" (forces new resource)
      ordered_placement_strategy.1.type:         "" => "spread" (forces new resource)
      placement_strategy.#:                      "2" => "0" (forces new resource)
      placement_strategy.2750134989.field:       "instanceId" => "" (forces new resource)
      placement_strategy.2750134989.type:        "spread" => "" (forces new resource)
      placement_strategy.3619322362.field:       "attribute:ecs.availability-zone" => "" (forces new resource)
      placement_strategy.3619322362.type:        "spread" => "" (forces new resource)
      task_definition:                           "yahtzee:26" => "yahtzee"

@bflad what's the possibility of this getting fixed and/or the timeline for placement_strategy going away? I'm worried about us having to pin our version of aws in case it goes away without a fix being in place.

I am having this issue too after upgrading the AWS provider while using placement_strategy.
This causes downtime. Any update on when it's going to be fixed?

We also have this issue. Originally I thought it was because the placement order changed, but we kept the order the same and it still forced a new service resource.

Hi Folks 👋 Apologies that this was never addressed properly in the version 1.X releases after the change and before version 2.0.0. 😖 We are still working on proper documentation for the process of deprecating attributes, which looks like it will require some additional notes surrounding ForceNew attributes.

Luckily, when upgrading your environments to version 2.0.0 (releasing later this week) or later, the aws_ecs_service resource will always refresh ordered_placement_strategy in the Terraform state, so you should be able to just change placement_strategy to ordered_placement_strategy in your configurations without issue as part of the upgrade. If all else fails, terraform state rm and terraform import can be used to work around this problematic change as well.
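The state rm / import fallback mentioned above might look like the following. This is a hedged sketch: the resource address and cluster/service names are placeholders for your own, and you should check the aws_ecs_service documentation for the exact import ID format your provider version expects.

```shell
# Drop the stale entry from Terraform state. This does NOT touch the
# real ECS service in AWS; it only forgets it locally.
terraform state rm module.yahtzee.aws_ecs_service.service

# Re-import the existing service so it is recorded in state again,
# without destroying or recreating it. The ID format here is assumed
# to be cluster-name/service-name; verify against the provider docs.
terraform import module.yahtzee.aws_ecs_service.service \
  platform-unified-cluster/yahtzee

# Confirm the plan no longer wants to replace the service.
terraform plan
```

Because the re-imported state only contains ordered_placement_strategy (the deprecated placement_strategy never enters it), the forced replacement should disappear.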

Again, sorry for the hassle with this deprecation. We can certainly do better here and will continue to try to prevent these sorts of upgrade problems in the future, for the Terraform AWS Provider and other Terraform Providers as well.

@bflad I just tried this with 2.0.0 and it's still trying to recreate the service. This is a huge problem for us:

      ordered_placement_strategy.#:                           "2" => "2"
      ordered_placement_strategy.0.field:                     "memory" => "memory"
      ordered_placement_strategy.0.type:                      "binpack" => "binpack"
      ordered_placement_strategy.1.field:                     "attribute:ecs.availability-zone" => "attribute:ecs.availability-zone"
      ordered_placement_strategy.1.type:                      "spread" => "spread"
      placement_strategy.#:                                   "2" => "0" (forces new resource)
      placement_strategy.2224589570.field:                    "memory" => "" (forces new resource)
      placement_strategy.2224589570.type:                     "binpack" => "" (forces new resource)
      placement_strategy.3619322362.field:                    "attribute:ecs.availability-zone" => "" (forces new resource)
      placement_strategy.3619322362.type:                     "spread" => "" (forces new resource)

@bflad this is happening for us too in the latest version 2.0.0

@maxblaze @venkata6 I have created https://github.com/terraform-providers/terraform-provider-aws/issues/7787 to triage this before our next release.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
