Terraform v0.12.24
+ provider.archive v1.3.0
+ provider.aws v2.54.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.template v2.1.2
locals {
  hyphenized_name        = replace(var.cluster_name, "/\\s+/", "-")
  capacity_provider_name = "${local.hyphenized_name}-${random_id.capacity_provider.b64_url}"
}

resource "aws_ecs_cluster" "cluster" {
  name               = local.hyphenized_name
  capacity_providers = [local.capacity_provider_name]

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

data "aws_ami" "ecs-optimized-ami" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-*-amazon-ecs-optimized"]
  }
}

resource "aws_launch_configuration" "ecs" {
  image_id             = data.aws_ami.ecs-optimized-ami.image_id
  instance_type        = "t3a.medium"
  security_groups      = var.security_group_ids_for_ec2_instances
  iam_instance_profile = aws_iam_instance_profile.ecs.name

  user_data_base64 = base64encode(templatefile("${path.module}/ecs-user-data.sh", {
    cluster_name = aws_ecs_cluster.cluster.name
  }))

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "ecs" {
  name                      = "${local.hyphenized_name}-scaling-group"
  launch_configuration      = aws_launch_configuration.ecs.name
  min_size                  = var.minimum_instances
  max_size                  = var.maximum_instances
  desired_capacity          = var.desired_instance_count
  vpc_zone_identifier       = var.subnet_ids
  health_check_type         = "ELB"
  health_check_grace_period = 30
}
/*
  aws_ecs_capacity_provider cannot be deleted or updated, so we generate a
  unique ID tied to the auto scaling group ARN; whenever the ASG is replaced,
  the ID changes and a new capacity provider can be created.
  https://github.com/aws/containers-roadmap/issues/632
*/
resource "random_id" "capacity_provider" {
  byte_length = 16

  keepers = {
    auto_scaling_group_arn = aws_autoscaling_group.ecs.arn
  }
}
resource "aws_ecs_capacity_provider" "capacity_provider" {
name = local.capacity_provider_name
auto_scaling_group_provider {
auto_scaling_group_arn = random_id.capacity_provider.keepers.auto_scaling_group_arn
managed_scaling {
status = "ENABLED"
minimum_scaling_step_size = 1
maximum_scaling_step_size = 1
target_capacity = 75
}
}
}
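The ecs-user-data.sh template is not shown in the issue. For context, a typical user-data script for the ECS-optimized AMI just tells the ECS agent which cluster to join; a minimal sketch, assuming that is all it does:

#!/bin/bash
# ${cluster_name} is rendered by templatefile(); the ECS agent reads
# /etc/ecs/ecs.config at boot to decide which cluster to register with.
echo "ECS_CLUSTER=${cluster_name}" >> /etc/ecs/ecs.config

This is why the launch configuration ends up depending on the cluster name in the first place.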
Expected Behavior

Apply works.

Actual Behavior

Error: Cycle: module.engine.module.main_ecs_cluster.data.template_file.ecs_instance_policy, module.engine.module.main_ecs_cluster.aws_iam_policy.ecs-instance-policy, module.engine.module.main_ecs_cluster.aws_iam_role_policy_attachment.ecs_role, module.engine.module.main_ecs_cluster.aws_iam_instance_profile.ecs, module.engine.module.main_ecs_cluster.aws_launch_configuration.ecs, module.engine.module.main_ecs_cluster.aws_autoscaling_group.ecs, module.engine.module.main_ecs_cluster.random_id.capacity_provider, module.engine.module.main_ecs_cluster.local.capacity_provider_name, module.engine.module.main_ecs_cluster.aws_ecs_cluster.cluster
Steps to Reproduce

terraform apply

I think the capacity_providers argument should not be in the aws_ecs_cluster resource. Since it uses the PutClusterCapacityProviders API method, using a new resource would allow referencing the capacity provider without creating a cycle.

It's easy to create a cycle when using capacity providers, since aws_ecs_capacity_provider references the aws_autoscaling_group ARN, which references an aws_launch_configuration, which references the aws_ecs_cluster because we need to add the cluster name in the user_data. Therefore, we cannot add the capacity provider when creating the ECS cluster.
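For illustration, a standalone association resource along the lines proposed above could look something like this. This is a hedged sketch: the resource name and arguments here are assumptions, not something provider v2.54.0 offered.

resource "aws_ecs_cluster_capacity_providers" "association" {
  # Hypothetical resource: attaches capacity providers after the cluster
  # exists, so aws_ecs_cluster no longer has to know the provider name up front.
  cluster_name       = aws_ecs_cluster.cluster.name
  capacity_providers = [aws_ecs_capacity_provider.capacity_provider.name]
}

With capacity_providers dropped from aws_ecs_cluster, the chain cluster -> launch configuration -> auto scaling group -> capacity provider -> association runs in one direction only, so there is no cycle.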
@meriouma I've had the same issue. My workaround was to modify my launch config block as follows:

resource "aws_launch_configuration" "ecs" {
  image_id             = data.aws_ami.ecs-optimized-ami.image_id
  instance_type        = "t3a.medium"
  security_groups      = var.security_group_ids_for_ec2_instances
  iam_instance_profile = aws_iam_instance_profile.ecs.name
  user_data_base64 = base64encode(templatefile("${path.module}/ecs-user-data.sh", {
-   cluster_name = aws_ecs_cluster.cluster.name
+   cluster_name = local.hyphenized_name
  }))
  lifecycle {
    create_before_destroy = true
  }
}
This works since the ECS cluster _name_ does not contain a random UID when it is created, the way an ASG does. If you recreate an ASG you will get a new ARN, but for an ECS cluster the name/ARN is predictable and stays the same.
Hope this can help until the provider is fixed.
This workaround works - thanks for that - although it does violate one principle of good IaC: it creates a pair of resources that depend on one another without that dependency being represented in the code or understood by Terraform. That means Terraform may be more prone to producing badly ordered plans.

For that reason the workaround is acceptable temporarily, but I think the issue ultimately needs to be fixed rather than just worked around.
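As an aside, cycle errors like the one above are easier to untangle visually. Terraform can render the dependency graph with the cycle edges highlighted; a minimal invocation, assuming Graphviz's dot is installed:

# Render the dependency graph, highlighting cycle edges
terraform graph -draw-cycles | dot -Tsvg > graph.svg

Opening graph.svg shows which reference closes the loop (here, the user_data reference back to aws_ecs_cluster).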