Can you please provide more information about what you are looking to accomplish? The provider already includes resources to manage ECS services (and task definitions themselves), which perform the equivalent of the CLI aws ecs run-task:
I'm looking at http://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html
aws_ecs_service requires specifying a desired count of Docker containers to run, which doesn't work with a strategy of spreading 1 container per ECS host.
A use case I have is that I have a one-off task that I want to run and then terminate. I do not want it to be a service. For instance, a database migrator.
@karnauskas aws_ecs_service should already support placement_constraints and placement_strategy to implement something like 1 container per ECS instance:
resource "aws_ecs_service" "example" {
# You can set this to the max size of an autoscaling group to ensure every instance gets the task, for example
desired_count = "${aws_autoscaling_group.example.max_size}"
# other configuration
# only 1 task per ECS instance
placement_constraints {
type = "distinctInstance"
}
}
@iwarshak in Terraform's current model of operation, your use case sounds more like a task (pardon the pun) for a provisioner than for a provider resource. Resources generally manage the full create/update/delete lifecycle of a piece of infrastructure, so about an hour after your task finishes, the ECS API stops reporting the "stopped" task status; Terraform would then notice the task no longer exists and want to recreate it (restart the task, in this case). Provisioners can be used for run-once behavior.
Check out the existing local-exec provisioner, which might allow you to accomplish what you're trying to do. Maybe something like the following with the AWS CLI (very untested):
resource "aws_ecs_task_definition" "example" {
# your task configuration here
}
resource "null_resource" "ecs-run-task" {
provisioner "local-exec" {
# add other args as necessary: https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html
command = "ecs run task --task-definition ${aws_ecs_task_definition.example.arn}"
interpreter = ["aws"]
}
}
Although I could see the point for something like an aws_ecs_task provisioner that makes it much cleaner to implement in that specific case.
Hi all,
in my opinion ECS RunTask should not be a resource, because it doesn't run daemon-like the way an ecs_service does. If I want to run something daemon-like, an ecs_service is the right choice.
But I also want to be able to run an ECS task with Terraform. In my case I want to start a task after creating or updating an ecs_task_definition, and before updating the ecs_service with the updated task definition.
For that run I want to override the command inside the container definition. For example, the Kong API gateway needs to run "kong migrations up" to execute database migrations before the new version of a service starts.
In this case I also need to be able to wait for the task to finish and to check that its exit code is 0. Another requirement is to ensure that no duplicate migration task is running.
Currently I am doing this with a shell script and the AWS CLI (check if it is already running, run it, wait for it to finish, check the exit code). This forces me to install the AWS CLI and some additional dependencies in our Terraform Docker image, which blows up its size.
So my proposal is to create a provisioner for this.
resource "aws_ecs_task_definition" "some_cool_service_task" {
# some task attributes here
provisioner "aws-runtask" {
task_defintion = "${aws_ecs_task_definition.some_cool_service_task}"
cluster = "${aws_ecs_cluster.cool_cluster.id}"
launch_type = "FARGATE"
overrides = "${file("runtask_overwrite.json")}"
network_configuration {
security_groups = ["${aws_subnet.cluster.subnet.*.id}"]
subnets = ["${aws_security_group.cluster.id}"]
assign_pulic_ip = false
}
prevent_multiple = true
started_by = "cool_service_migration"
wait_for = true
}
}
resource "aws_ecs_service" "some_cool_service" {
depends_on = ["aws_ecs_task_definition.some_cool_service_task"]
task_definition = "${aws_ecs_task_definition.some_cool_service_task.arn}"
# some service attributes here
}
I think most parts of the example above are clear. Here are some explanations for the new ones (see the sketch after this list for rough CLI equivalents):
prevent_multiple (Boolean, Default: false)
If true, the provisioner should check whether a task with the same started_by value is already running. If yes, skip provisioning.
started_by (Optional, String, MaxLength 36, required if prevent_multiple = true)
An identifier for the running task inside the cluster (see the RunTask docs). The value can be used to filter the list of running tasks for prevent_multiple (see the ListTasks docs).
wait_for (Boolean, Default: false)
If true, the provisioner waits until the task has finished running.
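For intuition, here is a rough, untested sketch of what those flags would do under the hood, expressed with today's null_resource and the AWS CLI (the resource name and the reuse of the cluster/task names from the proposal above are assumptions, not part of the proposal itself):

```hcl
resource "null_resource" "runtask_once" {
  triggers = {
    # re-run only when the task definition changes
    task_definition = "${aws_ecs_task_definition.some_cool_service_task.arn}"
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOF
set -e
# prevent_multiple: skip if a task with the same started-by value is already running
RUNNING=$(aws ecs list-tasks --cluster ${aws_ecs_cluster.cool_cluster.name} \
  --started-by cool_service_migration --desired-status RUNNING \
  --query 'length(taskArns)' --output text)
if [ "$RUNNING" = "0" ]; then
  # started_by: tag the task so the filter above can find it
  TASK_ARN=$(aws ecs run-task --cluster ${aws_ecs_cluster.cool_cluster.name} \
    --task-definition ${aws_ecs_task_definition.some_cool_service_task.arn} \
    --started-by cool_service_migration \
    --query 'tasks[0].taskArn' --output text)
  # wait_for: block until the task stops
  aws ecs wait tasks-stopped --cluster ${aws_ecs_cluster.cool_cluster.name} --tasks "$TASK_ARN"
fi
EOF
  }
}
```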
So, what do you think? If you like the concept and want to have the provisioner inside this plugin, I will write an implementation for it and create a pull request.
Have a nice weekend, jochen
Another possibility is to integrate it as a data source. It could be done like aws_lambda_invocation.
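Purely for illustration, such a data source might look like the following; the aws_ecs_run_task name and its arguments are invented here by analogy with aws_lambda_invocation and do not exist in the provider:

```hcl
# Hypothetical data source; nothing like this exists in the provider yet.
data "aws_ecs_run_task" "migration" {
  cluster         = "${aws_ecs_cluster.cool_cluster.id}"
  task_definition = "${aws_ecs_task_definition.some_cool_service_task.arn}"
  launch_type     = "FARGATE"
  overrides       = "${file("runtask_overwrite.json")}"
}
```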
I think this "feature" request was fulfilled by https://www.terraform.io/docs/providers/aws/r/ecs_service.html#scheduling_strategy DAEMON. If I remember correctly, documentation at that time was suggesting to run task via command line on each of the ec2 host(s). Feel free to close issue.
Hi @karnauskas, @bflad, thanks for your attention to this issue.
I think you didn't fully get the use case I described.
The DAEMON scheduling_strategy is for running a task exactly once on each EC2 instance of an ECS cluster and keeping it running. The use case there could be listening to Docker events (e.g. registrator), roughly like the sketch below.
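A minimal sketch of that daemon case, assuming a registrator task definition and cluster name that are not part of this thread:

```hcl
# One copy of the task on every container instance, kept running by ECS;
# desired_count is not set with this strategy.
resource "aws_ecs_service" "registrator" {
  name                = "registrator"
  cluster             = "${aws_ecs_cluster.cool_cluster.id}"
  task_definition     = "${aws_ecs_task_definition.registrator.arn}"
  scheduling_strategy = "DAEMON"
}
```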
But the use case for the proposed provisioner or data source solution is to run a task exactly once (after updating a task definition), have it stop when it is done, and only then update the ecs_service with the updated task definition.
A service application typically has a database with a schema. If I update a task definition (mostly the Docker image version), the schema potentially needs a database schema migration that is executed exactly once and must not run multiple times in parallel.
With the current Terraform and AWS provider I have two possibilities to do this: a provisioner or a data source.
From my perspective the provisioner would be the better solution because it is only executed when the task_definition is updated. The data source would be evaluated on every terraform apply (and plan?), which causes unneeded executions (the aws_lambda_invocation data source has the same shortcoming).
I am not sure whether a Terraform plugin can provide resources and provisioners together. If a resource plugin cannot contain a provisioner, a new project is needed for this.
For the data source variant I already did a successful proof-of-concept implementation. But I don't know whether you want to have this resource inside this project. If you want it, I will clean it up (bring it to production level), write some tests and documentation, and create a PR. So please let me know if you think this makes sense for the project.
If I have more time to deep-dive into provisioner plugin development I will also try that (because I think it is the better solution and fits the Terraform concepts better).
Thanks and have a great day!
Is there any solution for this yet? I want to achieve the same thing @iwarshak described, but no luck so far.
@Dvelezs94 The local-exec provisioner works pretty well. I've just implemented a call to my certbot using it. Obviously an actual provisioner would be nicer and have no external dependencies. But for now, this works fine and runs _only_ when the hostname in my locals changes:
resource "null_resource" "call-certbot" {
triggers = {
hostname = local.hostname
}
provisioner "local-exec" {
command = <<EOC
aws ecs run-task --cluster ${var.ecs_cluster.name} --task-definition certbot --started-by "Terraform" \
--overrides '${jsonencode(local.certbot.overrides)}' --network-configuration '${jsonencode(local.certbot.network_config)}'
EOC
}
}
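The local.certbot value isn't shown above; a rough sketch of what it might contain, with the container name, command, and network values all assumed for illustration:

```hcl
locals {
  certbot = {
    # JSON shape for run-task --overrides
    overrides = {
      containerOverrides = [
        {
          name    = "certbot"
          command = ["certonly", "-d", local.hostname]
        }
      ]
    }
    # JSON shape for run-task --network-configuration
    network_config = {
      awsvpcConfiguration = {
        subnets        = var.subnet_ids
        securityGroups = [aws_security_group.certbot.id]
        assignPublicIp = "ENABLED"
      }
    }
  }
}
```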
We also created a similar solution for prepping the database with a migration script:
data "template_file" "prep_db" {
count = var.prep_db ? 1 : 0
template = file("${path.module}/templates/prep_db.sh")
聽
vars = {
cluster_name = module.app_cluster.cluster_name
task_definition = aws_ecs_task_definition.app_db_init_container_def.arn
region = var.region
profile = var.aws_profile
}
}
聽
resource "local_file" "prep_db_script" {
count = var.prep_db ? 1 : 0
content = data.template_file.prep_db[0].rendered
filename = "/tmp/prep_db_${module.app-rds.this_db_instance_id}.sh"
}
聽
resource "null_resource" "prep_db_migrator" {
count = var.prep_db ? 1 : 0
聽
# Changes to the rendered text will cause re-provisioning.
triggers = {
rendered_text = data.template_file.prep_db[0].rendered
}
聽
provisioner "local-exec" {
command = "/bin/bash ${local_file.prep_db_script.filename}"
}
}
There is an aws_ecs_task_definition resource created earlier, and the actual running of the task is handled by the local-exec provisioner, which calls ECS and waits for completion. Here is the actual script:
```bash
#!/bin/bash
set -xe

TASK_ARN=`aws ecs run-task --cluster ${cluster_name} --task-definition ${task_definition} --profile ${profile} --region ${region} | jq -r '.tasks[].taskArn'`
echo "Watching task: $${TASK_ARN}"
aws ecs wait tasks-running --cluster ${cluster_name} --tasks "$${TASK_ARN}" --region ${region} --profile ${profile}
aws ecs wait tasks-stopped --cluster ${cluster_name} --tasks "$${TASK_ARN}" --region ${region} --profile ${profile}
```
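One caveat: tasks-stopped only waits for the task to stop; it doesn't tell you whether the container succeeded. An untested addition, using the same template variables as the script above, that would fail the apply on a non-zero exit code:

```bash
# Untested sketch: read the first container's exit code and fail if non-zero.
EXIT_CODE=`aws ecs describe-tasks --cluster ${cluster_name} --tasks "$${TASK_ARN}" --region ${region} --profile ${profile} | jq -r '.tasks[0].containers[0].exitCode'`
test "$${EXIT_CODE}" = "0"
```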
It would be helpful to have a run-task data source or resource that frees us from requiring the AWS CLI and jq to be installed locally.
Any updates on this? I'm running my containers with Fargate and we are going to move to Terraform soon. In my case I also need to perform a one-time migration, ideally executed via a run-task data source or resource.
Hey Guys, any update here would be helpful.
Any update here would be appreciated :)
I had some issues making the local-exec provisioner run with a different role. After some hours and a lot of trial and error, I managed to get to this full example:
```hcl
locals {
  some_service_migration = {
    overrides = {
      containerOverrides = [
        {
          name = var.service_name
          command = [
            "python",
            "manage.py",
            "migrate",
            "--noinput",
          ]
        }
      ]
    }

    network_config = {
      awsvpcConfiguration = {
        assignPublicIp = "DISABLED"
        subnets        = aws_ecs_service.some_service.network_configuration[0].subnets
        securityGroups = aws_ecs_service.some_service.network_configuration[0].security_groups
      }
    }
  }
}
```
resource "null_resource" "call-migrate" {
triggers = {
task_definition = aws_ecs_service.some_service.task_definition
}
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOF
set -e
CREDENTIALS=(`aws sts assume-role \
--role-arn ${var.aws_role} \
--role-session-name "migration-cli" \
--query "[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]" \
--output text`)
unset AWS_PROFILE
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID="$${CREDENTIALS[0]}"
export AWS_SECRET_ACCESS_KEY="$${CREDENTIALS[1]}"
export AWS_SESSION_TOKEN="$${CREDENTIALS[2]}"
aws ecs run-task \
--cluster ${aws_ecs_cluster.cluster.name} \
--task-definition ${aws_ecs_task_definition.some_service.family}:${aws_ecs_task_definition.some_service.revision} \
--launch-type FARGATE \
--started-by "Terraform" \
--overrides '${jsonencode(local.some_service_migration.overrides)}' \
--network-configuration '${jsonencode(local.some_service_migration.network_config)}'
EOF
}
}
I hope it helps someone until we have a Terraform data source like aws_ecs_run_task.
Bump, this is needed