Terraform: aws_ecs_service tries to force new resource

Created on 17 Oct 2016 · 9 comments · Source: hashicorp/terraform

Hi,

I encountered a problem when upgrading from Terraform 0.6.x to 0.7.x: aws_ecs_service can no longer be updated properly.

Terraform Version

0.6.12 to 0.7.x (checked with 0.7.0 and 0.7.6)

Affected Resource(s)

  • aws_ecs_service

Terraform Configuration Files

data "terraform_remote_state" "common" {
    # in 0.6.x, the following line was written here
    # lifecycle { create_before_destroy = true }
    backend = "s3"
    config {
        bucket = "${var.common_s3_remote_backend_bucket}"
        key = "${var.common_s3_remote_backend_key}"
        region = "${var.common_s3_remote_backend_region}"
    }
}

resource "aws_elb" "main" {
    lifecycle { create_before_destroy = true }

    internal = "${var.elb_internal}"

    security_groups = ["${data.terraform_remote_state.common.sg_elbs_id}"]
    subnets = [ "${split(",", data.terraform_remote_state.common.subnet_ids)}" ]

    listener {
        instance_port = 9999
        instance_protocol = "http"
        lb_port = 80
        lb_protocol = "http"
    }

    listener {
        instance_port = 9999
        instance_protocol = "http"
        lb_port = 443
        lb_protocol = "https"
        ssl_certificate_id = "${var.ssl_certificate_id}"
    }

    access_logs {
        bucket = "${var.elb_access_logs_bucket}"
        bucket_prefix = "${var.elb_access_logs_bucket_prefix}"
        interval = "${var.elb_access_logs_interval}"
    }

    cross_zone_load_balancing = true
    idle_timeout = 400
    connection_draining = true
    connection_draining_timeout = 400

    tags {
        Name = "${var.elb_name}"
    }
}

resource "aws_ecs_service" "app_service" {
    lifecycle { create_before_destroy = true }
    name = "${var.ecs_service_name}"
    cluster = "${aws_ecs_cluster.test_cluster.id}"
    task_definition = "${aws_ecs_task_definition.test.arn}"
    desired_count = "${var.count_ecs_service}"
    iam_role = "${var.ecs_service_role}"

    load_balancer {
        elb_name = "${aws_elb.main.id}"
        container_name = "nginx"
        container_port = 80
    }
}

Output

Expected Behavior

I expect no changes to aws_ecs_service.app_service.

Actual Behavior

An error is thrown, so I cannot upgrade my Terraform from 0.6.x to 0.7.x.

* aws_ecs_service.app_service: InvalidParameterException: Creation of service was not idempotent.
    status code: 400

Steps to Reproduce

I changed the Terraform configuration above from the terraform_remote_state resource to the terraform_remote_state data source. Then I ran terraform plan, which shows the following changes:

-/+ aws_ecs_service.app_service
    cluster:                                   "arn:aws:ecs:ap-northeast-1:yyy:cluster/test_cluster" => "arn:aws:ecs:ap-northeast-1:yyy:cluster/test_cluster"
    deployment_maximum_percent:                "200" => "200"
    deployment_minimum_healthy_percent:        "50" => "50"
    desired_count:                             "2" => "2"
    iam_role:                                  "test_ecs_service_role" => "test_ecs_service_role"
    load_balancer.#:                           "" => "1" (forces new resource)
    load_balancer.1220750775.container_name:   "" => "nginx" (forces new resource)
    load_balancer.1220750775.container_port:   "" => "80" (forces new resource)
    load_balancer.1220750775.elb_name:         "" => "tf-lb-4lbrev56fng4bdl7spjb3pegcm" (forces new resource)
    load_balancer.1220750775.target_group_arn: "" => "" (forces new resource)
    name:                                      "test_app_service" => "test_app_service"
    task_definition:                           "arn:aws:ecs:ap-northeast-1:yyy:task-definition/**:175" => "arn:aws:ecs:ap-northeast-1:yyy:task-definition/**:175"

Then I ran terraform apply, and the error above was thrown. I confirmed that there are no changes in my ECS cluster, as the ELB is still registered there.

(Screenshot, 17 Oct 2016: the ECS console, showing the ELB still registered to the service.)

Additional Information

0.6.12 (Terraform state file - ECS Service section):

                "aws_ecs_service.app_service": {
                    "type": "aws_ecs_service",
                    "depends_on": [
                        "aws_ecs_cluster.test_cluster",
                        "aws_ecs_task_definition.test_app",
                        "aws_elb.main"
                    ],
                    "primary": {
                        "id": "arn:aws:ecs:ap-northeast-1:yyy:service/test_app_service",
                        "attributes": {
                            "cluster": "arn:aws:ecs:ap-northeast-1:yyy:cluster/test_cluster",
                            "deployment_maximum_percent": "200",
                            "deployment_minimum_healthy_percent": "50",
                            "desired_count": "2",
                            "iam_role": "test_ecs_service_role",
                            "id": "arn:aws:ecs:ap-northeast-1:yyy:service/test_app_service",
                            "load_balancer.#": "1",
                            "load_balancer.3074216521.container_name": "nginx",
                            "load_balancer.3074216521.container_port": "80",
                            "load_balancer.3074216521.elb_name": "tf-lb-4lbrev56fng4bdl7spjb3pegcm",
                            "name": "test_app_service",
                            "task_definition": "arn:aws:ecs:ap-northeast-1:yyy:task-definition/test_app:**"
                        }
                    }
                }

0.7.6 (Terraform state file - ECS Service section):

                "aws_ecs_service.app_service": {
                    "type": "aws_ecs_service",
                    "depends_on": [
                        "aws_ecs_cluster.test_cluster",
                        "aws_ecs_task_definition.test_app",
                        "aws_elb.main"
                    ],
                    "primary": {
                        "id": "arn:aws:ecs:ap-northeast-1:yyy:service/test_app_service",
                        "attributes": {
                            "cluster": "arn:aws:ecs:ap-northeast-1:yyy:cluster/test_cluster",
                            "deployment_maximum_percent": "200",
                            "deployment_minimum_healthy_percent": "50",
                            "desired_count": "2",
                            "iam_role": "test_ecs_service_role",
                            "id": "arn:aws:ecs:ap-northeast-1:yyy:service/test_app_service",
                            "name": "test_app_service",
                            "task_definition": "arn:aws:ecs:ap-northeast-1:yyy:task-definition/test_app:**"
                        },
                        "meta": {},
                        "tainted": false
                    },
                    "deposed": [],
                    "provider": ""
                }

0.6.12 (Terraform state file - ELB section):

"primary": {
    "id": "tf-lb-4lbrev56fng4bdl7spjb3pegcm",
    "attributes": {
        ...
        "security_groups.#": "1",
        "security_groups.2164196586": "sg-349c5150",
        "source_security_group": "xxx-elbs",
        "source_security_group_id": "sg-349c5150",

0.7.6 (Terraform state file - ELB section):

"primary": {
    "id": "tf-lb-4lbrev56fng4bdl7spjb3pegcm",
    "attributes": {
        ...
        "security_groups.#": "1",
        "security_groups.2130770710": "sg-349c5150",
        "source_security_group": "yyy/xxx-elbs",
        "source_security_group_id": "sg-349c5150",

Am I doing something wrong here? Is there any workaround for this problem?

Thanks.

Labels: bug, provider/aws


All 9 comments

Same issue here, no workaround found yet other than destroying and recreating.

I've resolved this issue manually by adding the following lines back to aws_ecs_service in the state file (xxx is the hash taken from terraform plan against the newest state):

                            "load_balancer.#": "1",
                            "load_balancer.xxx.container_name": "nginx",
                            "load_balancer.xxx.container_port": "80",
                            "load_balancer.xxx.elb_name": "tf-lb-4lbrev56fng4bdl7spjb3pegcm",

Then I incremented the serial by 1 and pushed the state to my remote via terraform remote push, and it seems fine now. I think Terraform lost part of its state during the transition from 0.6.x to 0.7.x.
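The workflow described above can be sketched end to end (a sketch, assuming remote state is already configured with the 0.6/0.7-era terraform remote pull/push commands and the default local cache path):

    # fetch the latest remote state into the local cache
    terraform remote pull

    # edit .terraform/terraform.tfstate:
    #   - re-add the load_balancer.* attributes under aws_ecs_service.app_service
    #   - increment the top-level "serial" by 1

    # upload the edited state back to the remote
    terraform remote push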

I'm having a similar issue. For me, it seemed to happen only to services with a load balancer; tasks without one are fine.

  1. I have each ECS task definition/service in its own module and I renamed all of the modules at the same time
  2. After running terraform apply Terraform proceeded to delete and re-create all the services
  3. Two services that were attached to an ALB had to wait 5 minutes (the default) before a new service could be created, as a service that is still draining can't be recreated
  4. terraform apply timed out and returned an error (InvalidParameterException: Unable to Start a service that is still Draining.)
  5. I re-ran terraform apply, but it got a different error (InvalidParameterException: Creation of service was not idempotent.)

To fix it, I had to go into the AWS console, select the two services I wanted to delete, update the desired count to 0, delete the service, and re-run terraform apply.

As of writing this, I've also reduced deregistration_delay to 1 minute.
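For anyone tuning the same setting: deregistration_delay lives on the ALB target group, not on the ECS service. A minimal fragment (the resource name and VPC reference are placeholders):

    resource "aws_alb_target_group" "app" {
        name     = "app"
        port     = 80
        protocol = "HTTP"
        vpc_id   = "${var.vpc_id}"

        # seconds to wait before deregistering a draining target; default is 300 (5 minutes)
        deregistration_delay = 60
    }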

Suffering the same.

  • module.xxx-reporting.aws_ecs_service.main: 1 error(s) occurred:
  • aws_ecs_service.main: InvalidParameterException: Creation of service was not idempotent.
    status code: 400, request id: 178b349c-2674-11e7-8c3e-f9d93b1bc9c4 "xxxt-reporting"
    ...

and several more of that kind.

Terraform version 0.9.3.

The services are in fact created in the cluster and wired to the load balancer as well, but this error persists.

I think this has to do with the long provisioning time of the ALB; even after it's created, it stays in the initializing state for a relatively long time. Maybe that has an effect.

I also set deregistration_delay to 60, but it doesn't seem to help here.

I am using Terraform v0.9.2 and facing the same issue as you, @aholbreich.

Same issue. Manually setting the service count to 0 in the web console, then deleting and re-applying, unblocks it.
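The same unblocking steps can also be done from the CLI instead of the web console (a sketch; the cluster and service names are placeholders):

    # scale the stuck service down to zero tasks
    aws ecs update-service --cluster my-cluster --service my-service --desired-count 0

    # delete the now-empty service, then let Terraform recreate it
    aws ecs delete-service --cluster my-cluster --service my-service
    terraform apply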

This is still an issue in Terraform v0.9.5 on Terraform Enterprise.

This is still an issue in Terraform v0.12

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
