Terraform-provider-aws: aws_appautoscaling_target update detaches any aws_appautoscaling_policy resources

Created on 13 Jun 2017  ·  16 Comments  ·  Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @charlesbjohnson as hashicorp/terraform#8484. It was migrated here as part of the provider split. The original body of the issue is below._


Hi there,

Updating an aws_appautoscaling_target (which always forces a new resource) does not trigger dependent aws_appautoscaling_policy resources to be recreated. As a result, updating min_capacity or max_capacity causes all dependent aws_appautoscaling_policy resources to be deleted/detached, requiring a subsequent terraform apply to reattach them.

Terraform Version

v0.7.0+

Affected Resource(s)

  • aws_appautoscaling_target
  • aws_appautoscaling_policy

Terraform Configuration Files

provider "aws" {
  region = "us-west-2"
}

resource "aws_ecs_cluster" "cluster" {
  name = "demo-85e6a168597c3fe593b335df4c11496afe5dea31"
}

resource "aws_ecs_service" "service" {
  name = "${aws_ecs_cluster.cluster.name}"
  cluster = "${aws_ecs_cluster.cluster.id}"
  task_definition = "${aws_ecs_task_definition.task_definition.arn}"
  desired_count = 1
}

resource "aws_ecs_task_definition" "task_definition" {
  family = "nginx"
  container_definitions = <<EOF
[
  {
    "name": "nginx",
    "image": "nginx:latest",
    "cpu": 10,
    "memory": 500,
    "essential": true
  }
]
EOF
}

resource "aws_appautoscaling_target" "target" {
  name = "${aws_ecs_cluster.cluster.name}"
  service_namespace = "ecs"
  resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  role_arn = "arn:aws:iam::428324370204:role/ecsAutoscaleRole"
  min_capacity = 1
  max_capacity = 2
}

resource "aws_appautoscaling_policy" "policy" {
  name = "${aws_appautoscaling_target.target.name}"
  resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  metric_aggregation_type = "Average"
  step_adjustment {
    metric_interval_lower_bound = 0
    scaling_adjustment = 1
  }
}

Expected Behavior

The aws_appautoscaling_target would be updated and the aws_appautoscaling_policy would still be attached.

Actual Behavior

The aws_appautoscaling_target was deleted and re-created, but the aws_appautoscaling_policy was lost in the process. A subsequent terraform apply will add it back, but it either should not have been removed or it should also have been recreated along with the aws_appautoscaling_target.

Steps to Reproduce


  1. terraform apply
  2. change aws_appautoscaling_target.target.max_capacity to 3
  3. terraform apply (causes the policy detachment)
  4. terraform apply (reattaches the policy)

Important Factoids

According to the AWS docs, a scalable target can be created as well as updated via the same RegisterScalableTarget call; it is effectively an upsert. Perhaps this could be used to update the aws_appautoscaling_target in place instead of recreating it.
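For illustration, the call below (a sketch with a placeholder resource id, not taken from this issue) registers a scalable target, and running it again with new capacity values updates the existing registration in place:

aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/my-cluster/my-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 1 \
  --max-capacity 3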

bug service/applicationautoscaling


All 16 comments

Still an issue with terraform 0.9.8.
A lot of people are having this issue.

I have the same problem. I originally posted my case in issue terraform#8099.

I'm posting it here again in the hope that it helps you reproduce the problem.

resource "aws_appautoscaling_target" "ecs_target" {
  max_capacity       = "${var.max_capacity}"
  min_capacity       = "${var.min_capacity}"
  role_arn           = "${var.global_vars["ecs_as_arn"]}"

  resource_id        = "service/${var.global_vars["ecs_cluster_name"]}/${var.ecs_service_name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "ecs_cpu_scale_in" {
  adjustment_type         = "${var.adjustment_type}"
  cooldown                = "${var.cooldown}"
  metric_aggregation_type = "${var.metric_aggregation_type}"

  name                    = "${var.global_vars["ecs_cluster_name"]}-${var.ecs_service_name}-cpu-scale-in"
  resource_id             = "service/${var.global_vars["ecs_cluster_name"]}/${var.ecs_service_name}"
  scalable_dimension      = "ecs:service:DesiredCount"
  service_namespace       = "ecs"

  step_adjustment {
    metric_interval_upper_bound = "${var.scale_in_cpu_upper_bound}"
    scaling_adjustment          = "${var.scale_in_adjustment}"
  }

  depends_on = ["aws_appautoscaling_target.ecs_target"]
}

The resource aws_appautoscaling_policy.ecs_cpu_scale_in (call it the autoscaling policy) depends on the resource aws_appautoscaling_target.ecs_target (call it the autoscaling target).

When I change the value of max_capacity and then run terraform plan, it shows the autoscaling target is forced new (it is going to be destroyed and re-added), but nothing will happen to the autoscaling policy, which is supposed to be destroyed and re-added as well.

Why is it supposed to? Because in practice, after terraform apply succeeds (destroying and re-adding the autoscaling target), the autoscaling policy is gone (log in to the AWS console and you can see it has disappeared), so I have to run terraform apply a second time, and only then is the autoscaling policy added back.

Glad I found this. Noticed it today when dropping the min_capacity on our target. Checked the console later and noticed the policies were gone. Ran another terraform apply and saw them re-created.

We are also observing the same problem.

A small workaround for this that appears to be working correctly for my situation is just leveraging the target-id in the name arg for the policy:

name = "${aws_appautoscaling_target.xxxxx.id}-scale-down"

This seems to force a destroy and rebuild.
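Applied to the reproduction config from the issue body, the workaround would look roughly like the following sketch (the -scale-down suffix is illustrative):

resource "aws_appautoscaling_policy" "policy" {
  # name forces a new resource, and referencing the target's id makes it
  # "computed" whenever the target is replaced, so the policy is destroyed
  # and rebuilt in the same apply instead of being silently detached.
  name = "${aws_appautoscaling_target.target.id}-scale-down"
  service_namespace = "ecs"
  resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  metric_aggregation_type = "Average"
  step_adjustment {
    metric_interval_lower_bound = 0
    scaling_adjustment = 1
  }
}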

@alexcallihan Awesome. Thank you very much. Can confirm that work-around works as well.

I believe #968 is a duplicate of this issue.

Verified that @alexcallihan's workaround makes apply work as expected when autoscaling target properties are adjusted.

We just merged a change to the aws_appautoscaling_target resource in master (releasing in v1.8.0 of the provider) that now supports updating the min/max attributes instead of recreating the target. This should help a lot with this situation!

This has been released in terraform-provider-aws version 1.8.0. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

Using v1.8.0 of the AWS provider plugin. We noticed that our ECS service would consistently end up with the AWS service-linked IAM role AWSServiceRoleForApplicationAutoScaling_ECSService regardless of the custom role we gave it in the aws_appautoscaling_target resource. As a result the configuration wasn't idempotent: Terraform would always try to change the IAM role on aws_appautoscaling_target.

I could have used a lifecycle block with ignore_changes so Terraform wouldn't manage the IAM role on the aws_appautoscaling_target (see the sketch after the config below). But rather than ignore it, this is what I came up with to fix the idempotency. Are there any gotchas anyone can think of?

data "aws_iam_role" "ecs_service_autoscaling" {
name = "AWSServiceRoleForApplicationAutoScaling_ECSService"
}

resource "aws_appautoscaling_target" "service" {
max_capacity = "${var.max_instances}"
min_capacity = "${var.min_instances}"
resource_id = "service/myecscluster/${aws_ecs_service.service.name}"
role_arn = "${data.aws_iam_role.ecs_service_autoscaling.arn}"
scalable_dimension = "ecs:service:DesiredCount"
service_namespace = "ecs"
}

I'm seeing the same thing; every apply tries to update the role_arn. Have you opened another issue, @christianclarke, or should this be re-opened, @bflad?

I can confirm your workaround solved it, but I'm not yet sure if it's side effect free 😄

@dendrochronology

I think this has already been raised. The role in question is a service-linked role which AWS insists your autoscaling service adopts:

https://github.com/terraform-providers/terraform-provider-aws/issues/921

Hi,

This is still not working with Terraform 0.12 and unfortunately the above workaround from @alexcallihan doesn't seem to work anymore :(

In a nutshell, I have an ECS service with an autoscaling target and a couple of policies. Let's say I make a change that forces the recreation of the service/target: the autoscaling policy is still in the state file, but it doesn't get recreated after the ECS service and target are recreated. When I look in AWS, the service is there but the autoscaling policies and the target are just gone.

When I run terraform a second time, they get recreated:

...
  # module.user-mgmt.aws_appautoscaling_policy.ecs_policy_down_cpu will be created
  + resource "aws_appautoscaling_policy" "ecs_policy_down_cpu" {
      + arn                = (known after apply)
      + id                 = (known after apply)
      + name               = "cpu-scale-down"
      + policy_type        = "StepScaling"
      + resource_id        = "service/Service_Finder_dptest/service-finder_dptest_user-mgmt"
      + scalable_dimension = "ecs:service:DesiredCount"
      + service_namespace  = "ecs"

      + step_scaling_policy_configuration {
          + adjustment_type         = "ChangeInCapacity"
          + cooldown                = 300
          + metric_aggregation_type = "Average"

          + step_adjustment {
              + metric_interval_upper_bound = "0"
              + scaling_adjustment          = -1
            }
        }
    }

  # module.user-mgmt.aws_appautoscaling_policy.ecs_policy_up_cpu will be created
  + resource "aws_appautoscaling_policy" "ecs_policy_up_cpu" {
      + arn                = (known after apply)
      + id                 = (known after apply)
      + name               = "cpu-scale-up"
      + policy_type        = "StepScaling"
      + resource_id        = "service/Service_Finder_dptest/service-finder_dptest_user-mgmt"
      + scalable_dimension = "ecs:service:DesiredCount"
      + service_namespace  = "ecs"

      + step_scaling_policy_configuration {
          + adjustment_type         = "ChangeInCapacity"
          + cooldown                = 60
          + metric_aggregation_type = "Average"

          + step_adjustment {
              + metric_interval_lower_bound = "0"
              + scaling_adjustment          = 1
            }
        }
    }

  # module.user-mgmt.aws_appautoscaling_target.ecs_target will be created
  + resource "aws_appautoscaling_target" "ecs_target" {
      + id                 = (known after apply)
      + max_capacity       = 20
      + min_capacity       = 2
      + resource_id        = "service/Service_Finder_dptest/service-finder_dptest_user-mgmt"
      + role_arn           = (known after apply)
      + scalable_dimension = "ecs:service:DesiredCount"
      + service_namespace  = "ecs"
    }
...

Any chance this could be looked into? Thanks!

My version:

$ terraform version
Terraform v0.12.4
+ provider.archive v1.2.2
+ provider.aws v2.19.0
+ provider.random v2.1.2
+ provider.template v2.1.2
+ provider.tls v2.0.1

Hi folks 👋 If you would like to report potentially lingering issues, please file a new bug report following the issue template.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
