Terraform v0.10.8
resource "aws_codedeploy_deployment_group" "CodeDeploy" {
app_name = "${aws_codedeploy_app.app.name}"
deployment_group_name = "${var.group_name}"
service_role_arn = "${aws_iam_role.CodeDeploy.arn}"
deployment_config_name = "CodeDeployDefault.AllAtOnce"
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
load_balancer_info {
target_group_info {
name = "${var.target_group_name}"
}
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "CONTINUE_DEPLOYMENT"
}
green_fleet_provisioning_option {
action = "COPY_AUTO_SCALING_GROUP"
}
terminate_blue_instances_on_deployment_success {
action = "TERMINATE"
}
}
autoscaling_groups = ["${var.autoscaling_groups}"]
}
When creating or updating a CodeDeploy deployment group with the "Copy Auto Scaling Group" provisioning option, I get this error:
InvalidBlueGreenDeploymentConfigurationException: Exactly one AutoScaling group must be specifed when selecting the COPY_AUTO_SCALING_GROUP green fleet provisioning option.
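For anyone reproducing this: CodeDeploy accepts exactly one Auto Scaling group when COPY_AUTO_SCALING_GROUP is selected, so autoscaling_groups has to resolve to a single-element list. A minimal sketch of that shape, using a hypothetical aws_autoscaling_group.app resource instead of the variable above:

# Hypothetical ASG resource; exactly one group name must end up in the list.
autoscaling_groups = ["${aws_autoscaling_group.app.name}"]

This only illustrates the constraint quoted in the error message; the rest of this thread is about the provider still hitting this error even when a single group is configured.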
Hi!
Any updates regarding this topic?
+1
+1
I too just ran into this problem :/ I'm on Terraform v0.11.2 and the AWS provider is v1.5.0.
+1
Terraform v0.11.2
provider.aws v1.7.1
+1
Terraform v0.11.3
+ provider.aws v1.8.0
For example, running terraform apply:
aws_codedeploy_deployment_group.this: Modifying... (ID: 0e8c1269-b37d-418e-9ac1-fcf312fXXXXX)
blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes: "10" => "5"
Error: Error applying plan:
1 error(s) occurred:
* aws_codedeploy_deployment_group.this: 1 error(s) occurred:
* aws_codedeploy_deployment_group.this: InvalidBlueGreenDeploymentConfigurationException: Exactly one AutoScaling group must be specifed when selecting the COPY_AUTO_SCALING_GROUP green fleet provisioning option.
status code: 400, request id: 4fb082b1-0ce2-11e8-87bb-fba4b39XXXXX
Oh... _no_.
+1
Terraform v0.11.3
provider.aws 1.9
Anyone have any workarounds?
There are all kinds of problems with using Terraform for CodeDeploy blue/green deployments, and I think a major one is what's happening here: when you do a deployment, CodeDeploy creates a new ASG and deletes the one that Terraform knows about.
I have tried doing a terraform refresh followed by importing the newly created ASG into the state (roughly the commands sketched below), but I still get the error referenced above.
You have to import the ASG in any case to destroy the stack.
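In case the exact commands help anyone, this is roughly what I mean; the resource address aws_autoscaling_group.app and the group name are placeholders for whatever CodeDeploy actually created in your account:

terraform refresh
terraform import aws_autoscaling_group.app name-of-the-asg-codedeploy-created

terraform import for aws_autoscaling_group takes the group's name as the import ID.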
Just ran into this as well
Terraform v0.11.3
provider.aws 1.9
@JimtotheB
Just added a PR you might be interested in for tracking ASG resources created by the CodeDeploy copy operation:
https://github.com/terraform-providers/terraform-provider-aws/pull/3591
I found that if I remove the ec2_tag_filter argument from my config, the error goes away.
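For context, an ec2_tag_filter block in this resource looks something like the following; the key and value here are placeholders rather than anything from my real config:

ec2_tag_filter {
  key   = "Environment"
  type  = "KEY_AND_VALUE"
  value = "production"
}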
+1
I had this issue, and a crude workaround was to manually set the "Environment Configuration" to "Automatically copy Auto Scaling group" through the AWS console and then run a "terraform refresh".
Please review this pull request soon.
+1, hoping to keep deployment process automated
I'm guessing that this issue may be solved if #4678 is solved.
Hi all. This has been an issue since last year, any news on it?
I am working on a fix for the _original_ issue reported here. Should have it pushed up in the next day or so. I apologize for any inconvenience. The issue reported in #4678 is a different matter altogether. I will address that separately.
@niclic How's that push coming along? I'm also experiencing this problem.
@macnibblet I submitted the fix last week. It is #5827.
I can't say when it will be reviewed & merged (these things can sometimes take a frustratingly long time) but up-votes do help. Thank you.
This seemed to work OK for me without the fix: I referenced the Auto Scaling group's ID and put a depends_on on the Auto Scaling group.
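Something along these lines, where aws_autoscaling_group.app stands in for whatever your real ASG resource is called:

resource "aws_codedeploy_deployment_group" "CodeDeploy" {
  # ... other arguments as in the config at the top of this issue ...

  # The ASG's id attribute is its name; referencing it (plus an explicit
  # depends_on) ensures the group exists before the deployment group.
  autoscaling_groups = ["${aws_autoscaling_group.app.id}"]

  depends_on = ["aws_autoscaling_group.app"]
}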
The fix for the original issue has been merged and will release with version 1.39.0 of the AWS provider, likely middle of next week.
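For anyone waiting on that release, one way to pick it up once it ships is to pin the provider version in your config; the constraint below assumes the 1.39.0 number from the comment above:

provider "aws" {
  # region, credentials, etc. configured elsewhere (e.g. environment variables)
  version = ">= 1.39.0"
}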
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!