AWS Batch now supports the use of EC2 Launch Templates for managed compute environments. I'd like to be able to use Terraform to manage this.

Proposed `aws_batch_compute_environment` configuration:
```hcl
resource "aws_batch_compute_environment" "sample" {
  compute_environment_name = "sample"

  compute_resources {
    instance_role = "${aws_iam_instance_profile.ecs_instance_role.arn}"

    launch_template {
      launch_template_id = "${aws_launch_template.sample.id}"
    }

    max_vcpus = 16
    min_vcpus = 0

    security_group_ids = [
      "${aws_security_group.sample.id}",
    ]

    subnets = [
      "${aws_subnet.sample.id}",
    ]

    type = "EC2"
  }

  service_role = "${aws_iam_role.aws_batch_service_role.arn}"
  type         = "MANAGED"
  depends_on   = ["aws_iam_role_policy_attachment.aws_batch_service_role"]
}
```
https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html
I would definitely like to see support for this in Terraform. That said, my company's developed a work-around (based partially on comments on https://github.com/terraform-providers/terraform-provider-aws/issues/3207 ) that might be useful for some people.
Basically, instead of defining an `aws_batch_compute_environment` resource, we use a `random_id` resource whose keepers are the details of our compute environment configuration. We also define a `local-exec` provisioner for that `random_id` resource, which runs `aws batch create-compute-environment` with the arguments needed to create a compute environment with a launch template.
We define a launch template with our custom settings:

```hcl
resource "aws_launch_template" "batch" { ... }
```
And a `template_file` data source that generates the command-line configuration string to pass to `aws batch create-compute-environment`, including the ID of the launch template and whatever other settings we care about that would previously have lived on the `aws_batch_compute_environment` resource:
```hcl
data "template_file" "batch_compute_environment_resources_string" {
  template = "type=$${type},minvCpus=$${minvCpus},maxvCpus=$${maxvCpus},subnets=$${subnets},launchTemplate=$${launchTemplate},imageId=$${imageId}"

  vars = {
    type           = "ec2"
    minvCpus       = 0
    maxvCpus       = <our max vcpu value>
    subnets        = <our subnets>
    launchTemplate = "{launchTemplateId=${aws_launch_template.batch.id},version=${aws_launch_template.batch.latest_version}}"
    imageId        = <some-ami-id>
  }
}
```
Finally, we create the `random_id` with its `local-exec` provisioners:
```hcl
resource "random_id" "batch" {
  byte_length = 8

  keepers = {
    batch_compute_resources = "${data.template_file.batch_compute_environment_resources_string.rendered}"
  }

  provisioner "local-exec" {
    command = "aws batch create-compute-environment --compute-environment-name compute-environment-${self.hex} --type MANAGED --state ENABLED --compute-resources '${data.template_file.batch_compute_environment_resources_string.rendered}'; sleep 5"
  }

  provisioner "local-exec" {
    when = "destroy"

    command = <<EOF
echo "\033[31;1m-------------------------LOOK AT THIS--------------------------\033[0m"
echo "Disabling compute environment: \033[31;1mcompute-environment-${self.hex}\033[0m";
aws batch update-compute-environment --compute-environment compute-environment-${self.hex} --state DISABLED
echo "Please delete this compute environment manually"
echo "\033[31;1m---------------------------------------------------------------\033[0m"
EOF
  }
}
```
`random_id` creates a new random 8-byte hex string every time its keepers change, and since the keeper is the rendered `template_file`, Terraform will re-run the AWS CLI command in the `local-exec` provisioner every time one of those values changes. The command creates the Batch compute environment with the name `compute-environment-${self.hex}`, and other resources that require that compute environment as an input can reference that same name via the hex output of the `random_id`.
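For example, a job queue defined in the same configuration could reconstruct the compute environment's ARN from that name. This is a sketch of how such a reference might look; the `aws_batch_job_queue` resource and the data sources here are assumptions about the surrounding configuration, not part of the workaround above:

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

# Reference the manually created compute environment by the name derived
# from random_id.batch.hex. Job queues take compute environment ARNs, so
# we assemble the ARN from the current account ID and region.
resource "aws_batch_job_queue" "sample" {
  name     = "sample-queue"
  state    = "ENABLED"
  priority = 1

  compute_environments = [
    "arn:aws:batch:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:compute-environment/compute-environment-${random_id.batch.hex}",
  ]
}
```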
Since we provision the compute environment manually, we also need to destroy it manually. However, we don't want Terraform to destroy it automatically, because when we run `terraform apply` there may still be Batch jobs using that compute environment, and we don't want to remove it until they are finished. Instead, the destroy provisioner uses `aws batch update-compute-environment` to disable the compute environment (which doesn't affect currently running Batch jobs) and prints a message to the user running `terraform apply` that a manual step with a specific ID is necessary. We use ANSI color codes (the `\033[` sequences) to print this message in bright red, making it very visible among the rest of the Terraform output.
Again, this is a hack with a manual step, and it would be preferable if Terraform supported launch templates natively so we didn't have to resort to it, but it solved our problem.
What is the status of this please?
Support for configuring launch template information in the `aws_batch_compute_environment` resource has been merged and will release with version 2.4.0 of the Terraform AWS Provider later today.
This has been released in version 2.4.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
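With native support, the workaround above can be replaced by declaring the launch template directly inside `compute_resources`. This is a sketch of how such a configuration might look on provider 2.4.0 or later; the resource names and values are illustrative, not a definitive example from the provider documentation:

```hcl
resource "aws_batch_compute_environment" "sample" {
  compute_environment_name = "sample"
  type                     = "MANAGED"
  service_role             = "${aws_iam_role.aws_batch_service_role.arn}"

  compute_resources {
    type          = "EC2"
    instance_role = "${aws_iam_instance_profile.ecs_instance_role.arn}"
    instance_type = ["optimal"]
    min_vcpus     = 0
    max_vcpus     = 16

    # launch_template accepts launch_template_id or launch_template_name,
    # plus an optional version; pinning latest_version means changes to
    # the template roll through to the compute environment.
    launch_template {
      launch_template_id = "${aws_launch_template.sample.id}"
      version            = "${aws_launch_template.sample.latest_version}"
    }

    security_group_ids = ["${aws_security_group.sample.id}"]
    subnets            = ["${aws_subnet.sample.id}"]
  }

  depends_on = ["aws_iam_role_policy_attachment.aws_batch_service_role"]
}
```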
cc @brainstorm
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!