Terraform v0.11.5
Just create an ECS service without setting launch_type, which should default to EC2. The service is created, and a second plan should show no changes. Instead, the second plan tries to replace the resource because of a launch_type change:
launch_type: "" => "EC2" (forces new resource)
The acceptance test TestAccAWSEcsService_withLaunchTypeEC2AndNetworkConfiguration is failing with the same error. It seems AWS is not returning the default launch type.
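A minimal configuration along these lines reproduces it (the resource names here are placeholders, not from my actual setup):

resource "aws_ecs_service" "example" {
  name            = "example-service"
  cluster         = "${aws_ecs_cluster.example.id}"
  task_definition = "${aws_ecs_task_definition.example.arn}"
  desired_count   = 1

  # launch_type is intentionally omitted; it should default to "EC2"
}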
Same issue using TERRAFORM_VERSION=0.11.2 and PROVIDER_AWS_VERSION=1.7.0 in eu-west-1.
Got the same issue even when setting launch_type to EC2.
What's really strange is that it seems to work in some regions but not in others.
State is not updated in us-east-2, but it's updated fine in us-east-1 and us-west-2 (I looked directly at the state files to verify the values).
TF v0.10.8, and I tested many AWS provider versions, including the latest one; same issue.
Thanks!
Even if you set launch_type to EC2, it's not set in the Terraform state because AWS doesn't return the attribute. We're in eu-west-1 with the issue. It looks like AWS is rolling out a change, or perhaps just testing one.
@loivis I was just about to create this issue. This is so much crap. Seeing this in 1.12.0:
-/+ module.website.aws_ecs_service.service (new resource required)
id: "arn:aws:ecs:eu-west-1:0123456789:service/website" => <computed> (forces new resource)
cluster: "arn:aws:ecs:eu-west-1:0123456789:cluster/ct-backend-ecs-alpha" => "arn:aws:ecs:eu-west-1:0123456789:cluster/ct-backend-ecs-alpha"
deployment_maximum_percent: "200" => "200"
deployment_minimum_healthy_percent: "100" => "100"
desired_count: "1" => "1"
iam_role: "arn:aws:iam::0123456789:role/alpha/website-alb-role-alpha" => "arn:aws:iam::0123456789:role/alpha/website-alb-role-alpha"
launch_type: "" => "EC2" (forces new resource)
load_balancer.#: "1" => "1"
load_balancer.6411033.container_name: "website" => "website"
load_balancer.6411033.container_port: "80" => "80"
load_balancer.6411033.elb_name: "" => ""
load_balancer.6411033.target_group_arn: "arn:aws:elasticloadbalancing:eu-west-1:0123456789:targetgroup/website-alpha/f4d1ed5454453dec" => "arn:aws:elasticloadbalancing:eu-west-1:0123456789:targetgroup/website-alpha/f4d1ed5454453dec"
name: "website" => "website"
placement_constraints.#: "1" => "1"
placement_constraints.4150048827.expression: "attribute:stack == nodejs" => "attribute:stack == nodejs"
placement_constraints.4150048827.type: "memberOf" => "memberOf"
placement_strategy.#: "2" => "2"
placement_strategy.2750134989.field: "instanceId" => "instanceId"
placement_strategy.2750134989.type: "spread" => "spread"
placement_strategy.3619322362.field: "attribute:ecs.availability-zone" => "attribute:ecs.availability-zone"
placement_strategy.3619322362.type: "spread" => "spread"
task_definition: "arn:aws:ecs:eu-west-1:0123456789:task-definition/website-alpha:245" => "${aws_ecs_task_definition.service.arn}"
We have the same problem with Terraform 0.11.3 and AWS provider 1.13. It's introducing downtime in our deployments, as it keeps destroying and recreating the ECS service, which takes longer than just updating the task definition.
Hi,
Same here in eu-west-1, even with the AWS provider at 1.11.0 or 1.12.0.
In the meantime, you can set ignore_changes to launch_type. Just tested it, it works.
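Something along these lines (the resource name is just a placeholder, other arguments omitted):

resource "aws_ecs_service" "example" {
  # ... other arguments as before ...

  lifecycle {
    ignore_changes = ["launch_type"]
  }
}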
@sterfield whaaaaaat? I've tested that, it doesn't work for me :(
@joffreydupire getting the same issue with terraform 0.10.8 in eu-west-1
Hmm that's weird. I'm using 0.10.8 and the ignore_changes just fixed it. I can see in the state that the field is still empty, but TF is now happy.
🚫 🛑 🚫
EDIT: Can confirm! This fixed it for now! Will have a look at the code later to see what is happening.
...
launch_type = "EC2"
...

lifecycle {
  ignore_changes  = ["launch_type"]
  prevent_destroy = true
}
@Puneeth-n which TF version are you using?
@sterfield @Puneeth-n worked for us too, thanks.
@joffreydupire Terraform: 0.10.8 and terraform-provider-aws: 0.12.0
I'm wondering why this is happening today? Have there been any changes to the AWS API today?
@sterfield ignore_changes works with the latest Terraform and AWS provider.
Another workaround is setting launch_type = "" explicitly for the aws_ecs_service resource.
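That is, something like this (placeholder name, other arguments omitted):

resource "aws_ecs_service" "example" {
  # ... other arguments as before ...

  # Match the empty value the API currently returns so the plan shows no diff
  launch_type = ""
}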
Still getting launch_type: "" => "EC2" (forces new resource) even with launch_type = "" and ignore_changes on launch_type.
So sad
@egarbi doesn't work here
@egarbi my assumption is that the launch_type property doesn't exist for ECS services in these regions, so setting launch_type to anything other than "" doesn't work, since Terraform will detect changes. At least, setting it to "" helped in our case.
@katona got it, your workaround actually worked for me too!!
@joffreydupire don't use launch_type and ignore_changes at the same time, or you will invalidate the value you set.
@egarbi I've tried with launch_type = "" and it doesn't work, then tried with ignore_changes = ["launch_type"], still the same issue.
I confirm the same behavior with:
Edit: Please see the full thread; actually only launch_type set to "" seems to work.
However, as TF isn't persisting launch_type in the state file and we can't explicitly set it to EC2, does that mean we are at risk of having these services launched via Fargate?
Earlier, I checked the state files of another of our environments and they did have launch_type persisted, with the same TF version and AWS region.
@lenaing Fortunately not. According to the ecs_service AWS provider documentation:
launch_type - (Optional) The launch type on which to run your service. The valid values are EC2 and FARGATE. Defaults to EC2.
BTW, I am seeing these changes too in our loadbalancers without touching our code:
~ module.webclient.aws_alb.prod-extelb-webclient
enable_cross_zone_load_balancing: "" => "false"
Am I the only one?
(Sorry for the pseudo-offtopic)
@devvesa "Defaults to EC2"... at the moment, right ? :)
I understand that AWS added this to preserve compatibility but they might want to change this default value anytime so...
@lenaing No, it is defaulted in Terraform. See here
@loivis Looks like the aws_ecs_service is not updated when the new task definition is created.
The only solution that seems to work is to set launch_type = "" and have no ignore_changes.
resource "aws_ecs_task_definition" "service" {
count = "${var.enable}"
family = "${var.name}-${var.environment}"
container_definitions = "${var.container_definition}"
task_role_arn = "${var.task_role_arn}"
network_mode = "${var.network_mode}"
}
resource "aws_ecs_service" "service" {
count = "${var.enable * var.alb_listener_count > 0 ? 1 : 0}"
name = "${var.name}"
cluster = "${var.cluster_id}"
task_definition = "${aws_ecs_task_definition.service.arn}"
launch_type = "EC2"
desired_count = "${lookup(var.capacity, "desired", 1)}"
iam_role = "${aws_iam_role.ecs_lb_role.arn}"
deployment_maximum_percent = "${var.max_healthy_percent}"
deployment_minimum_healthy_percent = "${var.min_healthy_percent}"
placement_strategy = "${var.placement_strategy}"
placement_constraints = "${var.placement_constraints}"
load_balancer {
target_group_arn = "${aws_alb_target_group.service.arn}"
container_name = "${var.name}"
container_port = "${lookup(var.port_mappings[0], "containerPort")}"
}
depends_on = ["aws_iam_role.ecs_lb_role", "aws_ecs_task_definition.service"]
lifecycle {
ignore_changes = ["launch_type"]
prevent_destroy = true
}
}
@Puneeth-n it seems to be caused by:
lifecycle {
  ignore_changes = ["launch_type"]
}
but if you use launch_type = "" instead, it works fine.
EDIT: Actually, if you set launch_type to a non-empty value AND specify ignore_changes, it works as expected.
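In other words, roughly this combination (placeholder name, other arguments omitted):

resource "aws_ecs_service" "example" {
  # ... other arguments as before ...

  # Explicit launch type, plus ignoring the diff caused by the API not returning it
  launch_type = "EC2"

  lifecycle {
    ignore_changes = ["launch_type"]
  }
}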
@s-maj The solution that works for me is launch_type = "": with it, a new task definition is created and the ECS service is updated. So

lifecycle {
  ignore_changes = ["launch_type"]
}

is a bad idea.
@Puneeth-n I don't see why.
There's a change on launch_type, TF ignores it, and this leads to No changes.
If you have a problem where setting launch_type in ignore_changes doesn't update your service accordingly, that's a separate issue.
I don't see how ignoring this field, which is a default one both in AWS and TF, can create any problem.
I have still explicitly defined launch_type as EC2 because explicit > implicit, but that's just to be on the safe side.
@sterfield Me neither! Honestly ATM I don't have enough time to look into it. I just said what works for me.
The API definitely isn't returning launchType for my services. Could we default to EC2 for these cases?
Presumably the SDK is returning us a blank string here:
The open PR here will likely be merged and released to handle this scenario: #4066
I was doing some investigative work on the different regions and responses before merging, but it does not seem like the information returned from the ECS API moved somewhere else in the response.
Ha, I missed that - exactly what I was suggesting in my previous comment.
Hi All,
AWS ECS team member here. We identified a transient scenario where the launchType field was not being returned in the describe service calls. We have addressed it and confirmed that it is now appearing in all API responses.
Thanks,
Anirudh
Thanks @aaithal! I was just about to post that I could no longer reproduce this in eu-west-1 and us-east-2. It seems to be working as before.
@aaithal 🙏
Closing this issue out as the ECS API responses should now be working the same as before. Please write in if there is reason to reopen this issue.
This is back for me in 1.40, after upgrading from 1.37
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!