terraform-provider-aws: aws_ecs_capacity_provider_attachment

Created on 9 Jan 2020 · 9 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Add the ability to attach an existing aws_ecs_capacity_provider to an existing aws_ecs_cluster resource. Right now this can only be done inside the aws_ecs_cluster resource definition. Creating and deleting this attachment resource would behave like the put-cluster-capacity-providers CLI operation: https://docs.aws.amazon.com/cli/latest/reference/ecs/put-cluster-capacity-providers.html

New or Affected Resource(s)

  • aws_ecs_capacity_provider
  • aws_ecs_cluster
  • aws_ecs_capacity_provider_attachment

Potential Terraform Configuration

resource "aws_ecs_capacity_provider_attachment" "attach" {
  ecs_cluster_arn = "arn:xxxxxx"
  capacity_providers = []
  default_capacity_provider_strategy = {
    ...
  }
}
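
For illustration, a filled-in version of this hypothetical resource might look like the sketch below. The strategy fields are assumed to mirror the default_capacity_provider_strategy block of aws_ecs_cluster, and all names are assumptions, since the resource does not exist:

resource "aws_ecs_capacity_provider_attachment" "attach" {
  # Hypothetical arguments; aws_ecs_cluster.example and
  # aws_ecs_capacity_provider.example are assumed to exist elsewhere.
  ecs_cluster_arn    = aws_ecs_cluster.example.arn
  capacity_providers = [aws_ecs_capacity_provider.example.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.example.name
    weight            = 1
    base              = 0
  }
}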

References

new-resource, service/ecs


All 9 comments

Hi @carlosrodf 👋 Thank you for submitting this.

Can you please elaborate on the use case that requires this second configuration method? In general, supporting two separate ways of configuring the same infrastructure can be confusing for operators, and we need to make concessions with expected Terraform functionality to allow it (e.g. disabling drift detection in the ECS Cluster resource in this case). Another downside is that, unlike a few of our other "attachment" resources, only one of these could be configured per ECS Cluster, which is no different from how it works today.
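
To illustrate the drift-detection concession, a rough sketch of what operators might need on the cluster side so the two methods do not fight over the same field (an assumption about the eventual design, not implemented behavior):

resource "aws_ecs_cluster" "example" {
  name = "example"

  lifecycle {
    # Let the hypothetical attachment resource own the capacity provider
    # association instead of the cluster managing it directly.
    ignore_changes = [capacity_providers]
  }
}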

@bflad what I noticed is that once it is created with the methods currently available in Terraform, there is no way to modify or even delete the resource because of the name restriction. When I modify any of the attributes in the aws_ecs_capacity_provider resource I get the following error:

Error: error creating capacity provider: ClientException: The specified capacity provider already exists. To change the configuration of an existing capacity provider, update the capacity provider.
        status code: 400, request id: 0ff57813-91b3-476e-aef4-4f4bb7727c6b

Even if I remove all of the capacity provider configuration from my Terraform code, I get the same error when it attempts to delete the resource. Only after I deactivated the resource in the AWS console was I able to delete the capacity provider from Terraform.

This would solve the issue I reported in https://github.com/terraform-providers/terraform-provider-aws/issues/11409; indeed, it matches what I suggested there as a possible solution.
The existing configuration method is flawed in that it forces the aws_ecs_cluster to have an indirect dependency on the aws_autoscaling_group, when in fact the dependency should be the reverse for destruction to work.
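
A minimal sketch of the dependency direction under the current design, with assumed names:

resource "aws_ecs_capacity_provider" "cp" {
  name = "example"

  auto_scaling_group_provider {
    # The capacity provider depends on the ASG...
    auto_scaling_group_arn = aws_autoscaling_group.asg.arn
  }
}

resource "aws_ecs_cluster" "cluster" {
  name = "example"

  # ...and the cluster depends on the capacity provider, giving the cluster
  # an indirect dependency on the ASG. On destroy, Terraform therefore
  # removes the cluster before the ASG, while the ASG's instances are still
  # registered to it. For destruction to work, the dependency should run the
  # other way, which an attachment resource would allow.
  capacity_providers = [aws_ecs_capacity_provider.cp.name]
}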

I'm facing the same issues right now. I am trying to integrate a capacity provider into my code but cannot get it to work reliably.

As @lukedd mentioned, the ECS cluster has an indirect dependency on the Auto Scaling group, and updates become impossible once the capacity provider is created because of the name restriction mentioned by @carlosrodf.

I think an attachment resource would make this a lot more flexible; right now the only workaround I found is using a random_pet resource together with the capacity provider, as shown below.

resource "random_pet" "capacity_provider" {}

resource "aws_ecs_cluster" "cluster" {
  provider = aws.customer

  name               = local.ecs_cluster_name
  capacity_providers = ["default-provider-${random_pet.capacity_provider.id}"]
}

resource "aws_ecs_capacity_provider" "cluster_capacity_provider" {
  provider = aws.customer

  name = "default-provider-${random_pet.capacity_provider.id}"

  auto_scaling_group_provider {
    auto_scaling_group_arn         = module.ecs_asg.this_autoscaling_group_arn
    managed_termination_protection = "DISABLED"
  }
}

I spoke too soon; using a random_pet will not work.

Any update on this feature? I know the AWS API is a constraint here because it doesn't allow updating or deleting the capacity provider, but an attachment resource would make this more flexible.

Another use case: I cannot find a way to associate a Capacity Provider with the ECS cluster created by aws_batch_compute_environment.

Our use case is to first create an AWS Batch compute environment of the UNMANAGED type, which automatically creates the ECS cluster. Then we create the Launch Template and Auto Scaling Group. After that, we go to the ECS cluster created by Batch and create the Capacity Provider.
However, I cannot find a way to associate the ECS cluster created by aws_batch_compute_environment with the aws_ecs_capacity_provider resource.
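
A minimal sketch of the pieces involved, with assumed names and an assumed pre-existing IAM role; there is currently no resource to connect the last two:

resource "aws_batch_compute_environment" "batch" {
  compute_environment_name = "example-batch"
  type                     = "UNMANAGED"
  service_role             = aws_iam_role.batch_service.arn # assumed role
}

# AWS Batch creates an ECS cluster behind this environment; its ARN is
# exposed as aws_batch_compute_environment.batch.ecs_cluster_arn.

resource "aws_ecs_capacity_provider" "cp" {
  name = "example-cp"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.example.arn # assumed ASG
  }
}

# Missing piece: nothing associates aws_ecs_capacity_provider.cp with the
# Batch-created cluster, which is exactly the gap an attachment resource
# would fill.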

I tried using the aws_ecs_cluster resource with the ECS cluster name created by aws_batch_compute_environment and the name of the aws_ecs_capacity_provider resource, but got this error:

Error: InvalidParameterException: The specified capacity provider strategy cannot contain a capacity provider that is not associated with the cluster. Associate the capacity provider with the cluster or specify a valid capacity provider and try again.

More info in #24615.

This is the showstopper to using capacity providers in my setups right now, since the ASG and the ECS cluster are created in separate modules (for multiple EC2 deployment groups within one ECS cluster).

This would eliminate a few workarounds I've had to implement, especially in use cases where we're working with legacy clusters that weren't created with Terraform.
