Terraform-provider-aws: aws_ecs_service InvalidParameterException: Creation of service was not idempotent

Created on 14 Nov 2017 · 27 comments · Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @simoneroselli as hashicorp/terraform#16635. It was migrated here as a result of the provider split. The original body of the issue is below._


Terraform Version

v0.10.8

Hi,

terraform is failing to modify the "placement strategy" of ECS service resources. Since that value can only be set at service creation time, the expected behaviour is "destroy 1, add 1", which is what terraform plan correctly reports. However, terraform apply fails.

Fail Output

```
Error: Error applying plan:

1 error(s) occurred:

* module..aws_ecs_service.: 1 error(s) occurred:

* aws_ecs_service.main: InvalidParameterException: Creation of service was not idempotent.
  status code: 400, request id: xxxxxxxxxxxxxxxx "..."
```

Expected Behavior

destroy service, add service.

Actual Behavior

Failure of terraform without modification.

Steps to Reproduce

  1. Define an ECS service with a placement strategy and apply
  2. Change the placement strategy values to something else
  3. terraform plan
    Plan: 1 to add, 0 to change, 1 to destroy.
  4. terraform apply
    InvalidParameterException: Creation of service was not idempotent
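A minimal configuration along the lines of the steps above might look like this; the resource and service names are placeholders, not taken from the original report:

```hcl
# Hypothetical reproduction config; names are placeholders.
resource "aws_ecs_service" "main" {
  name            = "example-service"
  cluster         = aws_ecs_cluster.example.id
  task_definition = aws_ecs_task_definition.example.arn
  desired_count   = 2

  # Changing this block forces a new service. If the old service is
  # still draining under the same name when the replacement is created
  # (e.g. with create_before_destroy), ECS rejects the CreateService
  # call with "Creation of service was not idempotent".
  placement_strategy {
    type  = "spread"
    field = "instanceId"
  }
}
```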
Labels: bug, service/ecs

Most helpful comment

Ran into the same issue; removing

```hcl
lifecycle {
  create_before_destroy = true
}
```

from the ECS service resource, as @oanasabau noted, worked for us too.

All 27 comments

Getting this on TF v0.9.11 without placement strategy configured.

I can confirm, I have the same issue. The workaround is to remove the service manually.

Same issue.

Same issue

Another workaround I found is to rename the service at the same time that the placement strategy is modified.

Same issue here

Same again

Same issue also, any conclusion?

Note that in the most recent provider versions, this has been changed to ordered_placement_strategy, it would be good to confirm if this bug still persists following that change.
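For anyone migrating, the change referred to here is a rename of the configuration block; the values shown are illustrative, not from this thread's configs:

```hcl
# Before (deprecated):
# placement_strategy {
#   type  = "spread"
#   field = "instanceId"
# }

# After:
ordered_placement_strategy {
  type  = "spread"
  field = "instanceId"
}
```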

Hello, just changed a service to use ordered_placement_strategy instead of placement strategy and the terraform apply fails.

```
Terraform will perform the following actions:

-/+ module.xxxx.aws_ecs_service.api-gateway (new resource required)
    id: "arn:aws:ecs:us-west-2:XXXX:service/api-gateway" => (forces new resource)
    cluster: "cluster" => "cluster"
    deployment_maximum_percent: "200" => "200"
    deployment_minimum_healthy_percent: "100" => "100"
    desired_count: "2" => "2"
    health_check_grace_period_seconds: "180" => "180"
    iam_role: "arn:aws:iam::xxxx:role/ecs_service_role" => "arn:aws:iam::xxxx:role/ecs_service_role"
    launch_type: "EC2" => "EC2"
    load_balancer.#: "1" => "1"
    load_balancer.3428707558.container_name: "api-gateway" => "api-gateway"
    load_balancer.3428707558.container_port: "8080" => "8080"
    load_balancer.3428707558.elb_name: "" => ""
    load_balancer.3428707558.target_group_arn: "arn:aws:elasticloadbalancing:us-west-2:xxxx:targetgroup/API/4440036037fbdee4" => "arn:aws:elasticloadbalancing:us-west-2:xxxx:targetgroup/API/4440036037fbdee4"
    name: "api-gateway" => "api-gateway"
    ordered_placement_strategy.#: "" => "1" (forces new resource)
    ordered_placement_strategy.0.field: "" => "instanceId" (forces new resource)
    ordered_placement_strategy.0.type: "" => "spread" (forces new resource)
    placement_strategy.#: "1" => "0" (forces new resource)
    placement_strategy.2750134989.field: "instanceId" => "" (forces new resource)
    placement_strategy.2750134989.type: "spread" => "" (forces new resource)
    task_definition: "arn:aws:ecs:us-west-2:xxxx:task-definition/api-gateway:58" => "${aws_ecs_task_definition.api-gateway_definition.arn}"
```

Result for terraform apply:

```
Error: Error applying plan:

1 error(s) occurred:

* module.xxxx.aws_ecs_service.api-gateway: 1 error(s) occurred:

* aws_ecs_service.api-gateway: InvalidParameterException: Creation of service was not idempotent.
  status code: 400, request id: 3524862f-6e38-11e8-87e1-b1ef6c6b4c93 "api-gateway"
```

Provider version: Downloading plugin for provider "aws" (1.22.0)

same.

Getting this on the docker image hashicorp/terraform:light as well
@mrf It started happening for me when I switched from placement_strategy to ordered_placement_strategy

```
module.website.aws_ecs_service.webserver: Creating...
  cluster:                                   "" => "arn:aws:ecs:eu-west-2::cluster/ecs-cluster-prod"
  deployment_maximum_percent:                "" => "200"
  deployment_minimum_healthy_percent:        "" => "34"
  desired_count:                             "" => "2"
  iam_role:                                  "" => "arn:aws:iam:::role/ecs_iam_role_prod"
  launch_type:                               "" => "EC2"
  load_balancer.#:                           "" => "1"
  load_balancer.4258226585.container_name:   "" => "app-server"
  load_balancer.4258226585.container_port:   "" => "80"
  load_balancer.4258226585.elb_name:         "" => ""
  load_balancer.4258226585.target_group_arn: "" => "arn:aws:elasticloadbalancing:eu-west-2::targetgroup/prod-ecs-cluster-prod-web/3cb792881eee3a61"
  name:                                      "" => "webserver"
  ordered_placement_strategy.#:              "" => "1"
  ordered_placement_strategy.0.field:        "" => "memory"
  ordered_placement_strategy.0.type:         "" => "binpack"
  task_definition:                           "" => "arn:aws:ecs:eu-west-2::task-definition/webserver:375"
```

From what I can see, Terraform decides that the existing service doesn't exist and tries to create it, which Amazon isn't allowing. It should be modifying the existing service instead.

As it turns out in our case the problem was caused by create_before_destroy flag from the resource's lifecycle policy. After removing that the terraform apply succeeded.

I think the same issue applies if a load_balancer (and likely a target_group_arn) is attached to the ECS service, as those settings can only be applied when creating the service.

In my use case (just the load_balancer block, no ordered_placement_strategy block), the service gets provisioned properly, but its state never gets recorded, not even partially. So subsequent Terraform runs report that a brand-new ECS service will be added, and then error out with the same "Creation of service was not idempotent" message.

same issue

Ran into the same issue; removing

```hcl
lifecycle {
  create_before_destroy = true
}
```

from the ECS service resource, as @oanasabau noted, worked for us too.

The best way I found is to change the name (+1 @davidminor) while keeping lifecycle
create_before_destroy = true

This way a new service is created without interrupting service, and the old one is deposed only once the new one is active.

I had the same issue, but it was solved by rerunning terraform apply on the same resource. I think Terraform cannot destroy and create the service at the same time, so it needs a two-step apply; alternatively, removing the lifecycle block also solves the issue.

@bkate I had the same issue: terraform deleted the service but then raised this error:
"InvalidParameterException: Creation of service was not idempotent."

I reran the apply and it worked fine.

I'm running into the same problem, but I'm not getting a description of the error like others are, even after enabling debug logging via export TF_LOG=DEBUG. Here is what my error looks like:

```
Error: Error applying plan:
1 error occurred:
        * aws_ecs_service.app: 1 error occurred:
        * aws_ecs_service.app: InvalidParameterException:  "testecsservice"
```

Edit: found the solution to my problem: the name in my task definition did not match the container_name value in the aws_ecs_service resource's load_balancer block.

I ran into this today and I'm wondering if it could be side-stepped with support for name_prefix. I want the new service to be created (and running okay) before removing the old one as it's still attached to the old load-balancer target group. If I allow Terraform to destroy the service resource before creating the new one, I'll have an outage. At least, that's my understanding of the situation.

```
        load_balancer {
            container_name   = "haproxy"
            container_port   = 80
            target_group_arn = "arn:aws:elasticloadbalancing:us-east-2:560758033722:targetgroup/pafnt20200108220931040600000001/87cb1ba8f5d4f6bc"
        }
      + load_balancer { # forces replacement
          + container_name   = "haproxy"
          + container_port   = 81
          + target_group_arn = "arn:aws:elasticloadbalancing:us-east-2:560758033722:targetgroup/stg-ui-api/ff397f52073bb72c"
        }
      + load_balancer { # forces replacement
          + container_name   = "haproxy"
          + container_port   = 82
          + target_group_arn = "arn:aws:elasticloadbalancing:us-east-2:560758033722:targetgroup/stg-ui-partner/f0ef791d2187d3a9"
        }
```

```
terraform -v
Terraform v0.12.20
+ provider.akamai v0.1.4
+ provider.aws v2.44.0
+ provider.azuread v0.6.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.template v2.1.2
+ provider.tls v2.1.1
```

Ran into it today. Indeed, if create_before_destroy is used, we need name_prefix instead of name.

Has anyone found a workaround of generating a new name? Tried to use random_id but not sure what to use for "keepers" section.
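One untested sketch for the keepers question: tie the random suffix to the attributes that force replacement, so a fresh name is generated exactly when the service must be recreated. All names and values here are placeholders:

```hcl
resource "random_id" "service_suffix" {
  byte_length = 4

  # keepers: a new ID (and therefore a new service name) is generated
  # only when one of these values changes, i.e. when the service would
  # be replaced anyway. Mirror whichever attributes force replacement
  # in your own configuration.
  keepers = {
    placement_type  = "spread"
    placement_field = "instanceId"
  }
}

resource "aws_ecs_service" "main" {
  name = "example-service-${random_id.service_suffix.hex}"
  # ... other arguments ...

  lifecycle {
    create_before_destroy = true
  }
}
```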

Due to this issue and the missing name_prefix, I tried:

name = "myservice_${replace(timestamp(), ":", "-")}"

together with create_before_destroy on the service. This partly helps, but it does not achieve a zero-downtime deployment: because timestamp() changes on every run, the service is now recreated on every terraform apply.

Having the same issue here. Any good solution from the crowd so far? I think support for zero-downtime deployment is clearly warranted.

I hit this issue when I renamed the directory containing a Terraform module for an already-deployed Fargate service and tried to redeploy with the new directory name. After renaming the directory back, destroying the service, renaming the directory to the new name again, and deploying, I no longer had the issue.

Adding to the chorus: with Terraform 0.13.4 and AWS provider 3.11.0, this issue persists when _replacing_ a service in place.

  • The first terraform apply runs to completion: the old resource is (eventually) deposed, but the creation fails:

```
module.foo.aws_ecs_service.service[0]: Destruction complete after 6m18s

Error: InvalidParameterException: Creation of service was not idempotent. "foo"
```

  • The second apply attempt then succeeds as usual.

As such, I believe there is some kind of race condition or internal state issue that prevents the subsequent creation of a service _after_ the API returns from the deletion. Is there a means of injecting a manual "sleep" here? Resolving this would let our CI/CD pipeline manage the transition rather than requiring "babysitting" through two deploys, with increased downtime.
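On the manual-sleep question, one untested idea is the hashicorp/time provider's time_sleep resource with destroy_duration, which pauses during destroy and could give ECS time to release the service name before re-creation. This is a sketch, not verified against this failure mode; the trigger values are placeholders:

```hcl
resource "time_sleep" "ecs_cooldown" {
  # Pause for two minutes while this resource is being destroyed.
  destroy_duration = "2m"

  # Changing a trigger replaces the sleep alongside the service, so
  # the pause runs between the old service's destroy and the new
  # service's create.
  triggers = {
    placement = "spread-instanceId"
  }
}

resource "aws_ecs_service" "service" {
  # ... service arguments ...
  depends_on = [time_sleep.ecs_cooldown]
}
```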
