```
terraform --version
Terraform v0.11.11
+ provider.aws v1.52.0
+ provider.github v1.3.0
```
```hcl
resource "aws_codedeploy_app" "example" {
  compute_platform = "ECS"
  name             = "example-fargate"
}

resource "aws_codedeploy_deployment_group" "example" {
  app_name               = "${aws_codedeploy_app.example.name}"
  deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"
  deployment_group_name  = "example-fargate-deployment-group"
  service_role_arn       = "${data.terraform_remote_state.iam.code_deploy.example_iam_arn}"

  auto_rollback_configuration {
    enabled = true
    events  = ["DEPLOYMENT_FAILURE"]
  }

  blue_green_deployment_config {
    deployment_ready_option {
      action_on_timeout    = "STOP_DEPLOYMENT" # was "CONTINUE_DEPLOYMENT" before the change
      wait_time_in_minutes = 30                # was unset before the change
    }

    terminate_blue_instances_on_deployment_success {
      action                           = "TERMINATE"
      termination_wait_time_in_minutes = 30
    }
  }

  deployment_style {
    deployment_option = "WITH_TRAFFIC_CONTROL"
    deployment_type   = "BLUE_GREEN"
  }

  ecs_service {
    cluster_name = "${data.terraform_remote_state.ecs.example.cluster_name}"
    service_name = "${data.terraform_remote_state.ecs.example.service_name}"
  }

  load_balancer_info {
    target_group_pair_info {
      prod_traffic_route {
        listener_arns = ["${data.terraform_remote_state.alb.example-alb-listener-https}"]
      }

      target_group {
        name = "${data.terraform_remote_state.alb.example-tg-A.name}"
      }

      target_group {
        name = "${data.terraform_remote_state.alb.example-tg-B.name}"
      }

      test_traffic_route {
        listener_arns = ["${data.terraform_remote_state.alb.example-alb-listener-blue-green}"]
      }
    }
  }
}
```
My debug output contains a lot of information that I do not want to share here. If you need it, let me know and I can put together an example to post.
Terraform should apply this change without any error.

When I made the change with Terraform, it failed. However, when I changed the AWS CodeDeploy settings from the AWS console so that they matched my Terraform code and then ran `terraform apply`, it succeeded.
Steps to reproduce:

1. Run `terraform apply` against the AWS CodeDeploy resources (in my case the resources had already been created): `Apply complete! Resources: 0 added, 0 changed, 0 destroyed.`
2. Change the `deployment_ready_option` settings in the Terraform code.
3. Run `terraform apply` again (it will fail).
(The "Refreshing state..." lines are removed because they contain information I do not want to share here.)

```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_codedeploy_deployment_group.example
      blue_green_deployment_config.0.deployment_ready_option.0.action_on_timeout:    "CONTINUE_DEPLOYMENT" => "STOP_DEPLOYMENT"
      blue_green_deployment_config.0.deployment_ready_option.0.wait_time_in_minutes: "0" => "30"

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions in workspace "dev"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_codedeploy_deployment_group.example: Modifying... (ID: 51b781fe-88dd-458d-b626-fc83cb2cff3f)
  blue_green_deployment_config.0.deployment_ready_option.0.action_on_timeout:    "CONTINUE_DEPLOYMENT" => "STOP_DEPLOYMENT"
  blue_green_deployment_config.0.deployment_ready_option.0.wait_time_in_minutes: "0" => "30"

Error: Error applying plan:

1 error(s) occurred:

* aws_codedeploy_deployment_group.example: 1 error(s) occurred:

* aws_codedeploy_deployment_group.example: InvalidAutoScalingGroupException: For ECS deployment group, autoScalingGroups can not be specified
    status code: 400, request id: a3c22ed5-168e-11e9-860d-abf465a0dc05

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
Running `terraform apply` once more succeeds: `Apply complete! Resources: 0 added, 0 changed, 0 destroyed.`
I've observed that even trivial changes to `deployment_ready_option` result in this same error: `InvalidAutoScalingGroupException: For ECS deployment group, autoScalingGroups can not be specified`.

Destroying and recreating my entire environment does allow me to set `STOP_DEPLOYMENT`, etc.
Looking at the Terraform trace log, I was able to see that it is attempting and failing to call `UpdateDeploymentGroup`, but there is no request body shown for that request and I don't know how to make it appear in the log. I'm assuming it exists and I just can't see it.

I was able to edit these properties in the AWS console, so it doesn't appear to be an AWS limitation; the console successfully calls `UpdateDeploymentGroup`.
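On capturing that request body: Terraform's standard `TF_LOG` and `TF_LOG_PATH` environment variables control log verbosity and destination, so trace-level logging is the first thing to try (whether this particular provider version actually logs the `UpdateDeploymentGroup` body at trace level is not guaranteed):

```shell
# Capture maximum-verbosity Terraform and provider logs to a file,
# then search it for the CodeDeploy API call.
TF_LOG=TRACE TF_LOG_PATH=./terraform-trace.log terraform apply
grep -i "UpdateDeploymentGroup" ./terraform-trace.log
```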
We're facing the same issue and it's a real blocker. Is there any fix or workaround to this?
We also ran into this problem while testing this feature. We need an urgent hotfix for it.
I am experiencing the same issue. The error appears after the first deployment is created. Seems like it breaks after refreshing the state.
Line Involving DeploymentGroups and AutoScalingGroups
When I checked the code, this line seems to be the root cause of the issue. There should probably be an additional condition that leaves out `autoScalingGroups` for ECS Blue/Green deployments.

I want to contribute to the project and get this bug fixed. Can one of the maintainers point me in the right direction for contributing?
Thanks @neocorp! Looking at that line, I figured out some workaround:
```hcl
lifecycle {
  ignore_changes = ["blue_green_deployment_config"]
}
```
Although it would be better to have a proper fix, which implies an additional check that `autoscaling_groups` is not included for ECS deployment groups.
I am also having this issue.
same issue:(
Setting the lifecycle to ignore changes won't work in my case, since I actually want to be able to change the behavior of `blue_green_deployment_config` from time to time without destroying the whole thing.

My current workaround for this issue is to destroy only the deployment group resource and then recreate it with the new `action_on_timeout` value, e.g. `terraform plan -destroy -target=aws_codedeploy_deployment_group.example ...`

One caveat: the `CONTINUE_DEPLOYMENT` action will fail if the `wait_time_in_minutes` variable is set to a non-zero value. I just submitted a doc-update issue about that here: https://github.com/terraform-providers/terraform-provider-aws/issues/9653
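Spelled out, that recreate-only-the-deployment-group workaround looks roughly like this (resource address taken from the example config earlier in the thread; a sketch, not verified against every provider version):

```shell
# Destroy just the deployment group, leaving the rest of the stack intact,
# then recreate it with the new action_on_timeout value.
terraform destroy -target=aws_codedeploy_deployment_group.example
terraform apply
```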
same issue here :/
Having the same issue.
Same issue here as well.
Same issue as well
Having this issue too
+1
Same issue after updating the `termination_wait_time_in_minutes` value from 5 to 3.
Just encountered this issue as well after updating the following:

```hcl
# Original
blue_green_deployment_config {
  deployment_ready_option {
    action_on_timeout    = "STOP_DEPLOYMENT"
    wait_time_in_minutes = 5
  }
}

# Update
blue_green_deployment_config {
  deployment_ready_option {
    action_on_timeout = "CONTINUE_DEPLOYMENT"
  }
}
```
Edit: it might be worth noting that I was, however, able to make the desired changes in the AWS console.
One workaround that works for me is to delete all deployment groups by hand before rolling out a change in config.
But I have many of them, one for each stage in each project, so that's not great.
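For anyone scripting that manual cleanup, the AWS CLI equivalent would be roughly the following (the application and group names here match the example config earlier in this thread, so substitute your own):

```shell
# Delete the deployment group by hand before re-applying the Terraform change.
aws deploy delete-deployment-group \
  --application-name example-fargate \
  --deployment-group-name example-fargate-deployment-group
```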
For those still waiting for a fix you can add a +1 on the PR #11885 in order to get it merged 😉
The fix for this has been merged and will release with version 2.56.0 of the Terraform AWS Provider, later this week. Thanks to @ImFlog for the implementation. 👍
This has been released in version 2.56.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!