Terraform-provider-aws: Updating ECS service capacity provider strategy replaces resource

Created on 18 Dec 2019 · 10 comments · Source: hashicorp/terraform-provider-aws

When attempting to update an aws_ecs_service's capacity_provider_strategy, the resource is replaced. It should be possible to update the capacity provider strategy without replacing the ECS service, using UpdateService: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateService.html

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

$ terraform -v
Terraform v0.12.18
+ provider.aws v2.42.0

Affected Resource(s)

  • aws_ecs_service

Expected Behavior

ECS service updates in place without replacing the resource

Actual Behavior

ECS service is recreated

Steps to Reproduce

  1. Create an aws_ecs_service resource with a capacity_provider_strategy (a minimal sketch follows this list)
  2. terraform apply
  3. Change capacity_provider_strategy
  4. terraform apply
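
A minimal configuration along these lines reproduces the plan output shown further down. This is a sketch only: the cluster and service names, task definition reference, subnet, and security group are placeholders rather than values from the original report, and the referenced task definition and VPC resources are assumed to already exist.

  resource "aws_ecs_cluster" "demo" {
    name               = "demo-cluster"
    capacity_providers = ["FARGATE", "FARGATE_SPOT"]
  }

  resource "aws_ecs_service" "demo" {
    name            = "demo-service"
    cluster         = aws_ecs_cluster.demo.id
    task_definition = "demo-service:1" # existing Fargate-compatible task definition (family:revision)
    desired_count   = 1

    network_configuration {
      subnets          = ["subnet-11111111"]     # placeholder
      security_groups  = ["sg-1111111111111111"] # placeholder
      assign_public_ip = false
    }

    # Step 3: changing base/weight here, or adding/removing a block,
    # makes the provider plan a destroy/create instead of an in-place update.
    capacity_provider_strategy {
      capacity_provider = "FARGATE"
      weight            = 0
      base              = 2
    }

    capacity_provider_strategy {
      capacity_provider = "FARGATE_SPOT"
      weight            = 100
    }
  }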

References

https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateService.html

bug service/ecs

Most helpful comment

I've managed to work around this using lifecycle:

  lifecycle {
    ignore_changes = [
      capacity_provider_strategy
    ]
  }
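
For context, the lifecycle block sits inside the service resource. With ignore_changes in place, Terraform stops managing the strategy entirely, so any change to it has to be pushed out of band (for example with the aws ecs update-service --force-new-deployment call shown in the comments below). A sketch with placeholder names:

  resource "aws_ecs_service" "service" {
    name    = "demo-service"
    cluster = "demo-cluster"
    # ... task definition, networking, etc. unchanged ...

    capacity_provider_strategy {
      capacity_provider = "FARGATE_SPOT"
      weight            = 100
    }

    # Drift in the strategy is ignored, so plans no longer force replacement,
    # but Terraform also won't apply strategy changes made here.
    lifecycle {
      ignore_changes = [
        capacity_provider_strategy
      ]
    }
  }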

All 10 comments

Yes, it should force a new deployment, but it shouldn't have to destroy/re-create the service:

aws ecs update-service --cluster democluster --service demoservice --capacity-provider-strategy capacityProvider=FARGATE,weight=0,base=2 capacityProvider=FARGATE_SPOT,weight=100 --force-new-deployment

@danieladams456 this works well enough if you're using FARGATE, but there doesn't seem to be a pre-defined name for the "default" EC2 provider, so you can't reset the service to regular EC2 mode without recreating it. This might be a limitation of AWS right now; I might be wrong here :)

@piotrb definitely works (just POC phase for now) but even in Fargate it destroys and recreates the service. That will cause downtime as opposed to just creating another deployment.

Terraform will perform the following actions:

  # aws_ecs_service.service must be replaced
-/+ resource "aws_ecs_service" "service" {
        cluster                            = "demo-service"
        deployment_maximum_percent         = 200
        deployment_minimum_healthy_percent = 100
        desired_count                      = 1
        enable_ecs_managed_tags            = true
        health_check_grace_period_seconds  = 60
      ~ iam_role                           = "aws-service-role" -> (known after apply)
      ~ id                                 = "arn:aws:ecs:us-east-1:XXXXXXXXXX:service/demo-service/demo-service" -> (known after apply)
      + launch_type                        = (known after apply)
        name                               = "demo-service"
        platform_version                   = "LATEST"
        propagate_tags                     = "SERVICE"
        scheduling_strategy                = "REPLICA"
      - tags                               = {} -> null
        task_definition                    = "arn:aws:ecs:us-east-1:XXXXXXXXXX:task-definition/demo-service:10"

      - capacity_provider_strategy { # forces replacement
          - base              = 0 -> null
          - capacity_provider = "FARGATE_SPOT" -> null
          - weight            = 100 -> null
        }
      - capacity_provider_strategy { # forces replacement
          - base              = 2 -> null
          - capacity_provider = "FARGATE" -> null
          - weight            = 0 -> null
        }
      + capacity_provider_strategy { # forces replacement
          + base              = 2
          + capacity_provider = "FARGATE"
          + weight            = 50
        }
      + capacity_provider_strategy { # forces replacement
          + capacity_provider = "FARGATE_SPOT"
          + weight            = 50
        }

      - deployment_controller {
          - type = "ECS" -> null
        }

        load_balancer {
            container_name   = "demo-service"
            container_port   = 8080
            target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXX:targetgroup/demo-service/c44b99f42280a4a7"
        }

        network_configuration {
            assign_public_ip = false
            security_groups  = [
                "sg-09d58bb97552d3d20",
            ]
            subnets          = [
                "subnet-52c7fd37",
                "subnet-bf891af7",
                "subnet-c7c61b9d",
            ]
        }

      + placement_strategy {
          + field = (known after apply)
          + type  = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Yep for sure, just saying it doesn't work in all cases, so the fix isn't simply to remove the ForceNew flag from the field; it has to be a bit more than that.

We also observe flip-flopping of capacity_provider_strategy on every deploy, even though capacity_provider_strategy itself never changes. I.e. with no changes to our template, every plan shows

      - capacity_provider_strategy { # forces replacement
          - base              = 0 -> null
          - capacity_provider = "FARGATE_SPOT" -> null
          - weight            = 1 -> null
        }

EDIT: More specifically, the flip-flopping happens when we set default_capacity_provider_strategy for aws_ecs_cluster, and then _don't_ set it for aws_ecs_service. The workaround is to remove default_capacity_provider_strategy from the cluster and add it to the service.
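
A sketch of that workaround, with placeholder names: drop default_capacity_provider_strategy from the cluster and declare the strategy explicitly on the service, which pins the values the provider was otherwise flip-flopping on.

  resource "aws_ecs_cluster" "this" {
    name               = "demo-cluster"
    capacity_providers = ["FARGATE", "FARGATE_SPOT"]
    # default_capacity_provider_strategy intentionally omitted
  }

  resource "aws_ecs_service" "this" {
    # ... other service arguments ...
    capacity_provider_strategy {
      capacity_provider = "FARGATE_SPOT"
      base              = 0
      weight            = 1
    }
  }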

Just dog-piling here: I can confirm I had the same issue. Using default_capacity_provider_strategy inside an aws_ecs_cluster block caused my service to be replaced on every run (with no changes). I had the same output:

      - capacity_provider_strategy { # forces replacement
          - base              = 0 -> null
          - capacity_provider = "my-awesome-cap-provider" -> null
          - weight            = 1 -> null
        }

I had no capacity provider set for the service and used the workaround mentioned by @dinvlad. Removing default_capacity_provider_strategy from aws_ecs_cluster and adding capacity_provider_strategy to my aws_ecs_service block made it work properly and stop trying to recreate the service every time.

I'm super new to terraform so it could be something about my configuration. I'll be happy to post more details if needed.

Terraform v0.12.24

  • provider.aws v2.68.0

We also observe flip-flopping of capacity_provider_strategy on every deploy, without any changes to capacity_provider_strategy. I.e. without any changes to our template:

This seems to happen for services created _after_ default_capacity_provider_strategy has been specified for the cluster. E.g. if you create the cluster _with no default_capacity_provider_strategy specified_, create the service, and only then go back and update the cluster to include default_capacity_provider_strategy, subsequent plans do not show a perpetual diff for the service. At least, this is how I worked around the issue.

Hope this helps.

I've managed to work around this using lifecycle:

  lifecycle {
    ignore_changes = [
      capacity_provider_strategy
    ]
  }

This is causing us some issues, given that we can't move from one capacity provider to another (as we recycle ASGs) without Terraform reporting that it's going to destroy the service. Replacement isn't required when using the API unless the service originally started life without a capacity provider.

To me, this seems like a fundamental breaking issue if one cannot recycle the underlying capacity provider without having some kind of service outage.

Looks like the resource needs a CustomizeDiff function added to handle the dynamic nature of whether a service can be updated in place or requires replacement, depending on whether the service already has a capacity provider or not.
