I have an aws_spot_instance_request like this:
resource "aws_spot_instance_request" "seleniumgrid" {
ami = "${var.amiPuppet}"
key_name = "${var.key}"
instance_type = "c3.4xlarge"
subnet_id = "${var.subnet}"
vpc_security_group_ids = [ "${var.securityGroup}" ]
user_data = "${template_file.userdata.rendered}"
wait_for_fulfillment = true
spot_price = "${var.price}"
availability_zone = "${var.zone}"
instance_initiated_shutdown_behavior = "terminate"
root_block_device {
volume_size = 100
volume_type = "gp2"
}
tags {
Name = "${var.name}.${var.domain}"
Provider = "Terraform"
CA_TEAM = "${var.team}"
CA_ROLE = "${var.role}"
CA_SERVICE = "${var.service}"
}
}
The tags are being applied only to the spot request itself, not to the underlying instance. Is this expected behavior? How can I change it?
Thanks!
I just ran into this issue as well.
Definitely an issue, also in 0.6.8. We need a way to apply those tags to the new instance, or an aws_instance_tag resource that allows arbitrary tags to be set on EC2 instances.
Agreed!
Present in 0.6.8 as well
There is no AWS API that will do this via the Spot Instance request itself. We would need to trap the InstanceId (which we do) and then call out to set the tags on that instance. I don't see a straightforward way to do this, but it shouldn't be too hard. I don't know that I'll get to it anytime soon, however; maybe someone in the community will pick it up.
This is the expected behavior as per AWS documentation, unfortunately.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#concepts-spot-instances-request-tags
I did find a solution. I created an IAM role with a policy that grants the ec2:CreateTags permission, a trust policy that lets EC2 assume the role, and an instance profile so the role can be assigned to the instances.
Using cloud-init, I have a script that looks up the instance's metadata and, using the result, tags the instance based on arguments passed to the script.
There are a few moving parts, unfortunately. It helps if you're already familiar with cloud-init; a rough Terraform sketch of the IAM pieces follows.
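For reference, here is a minimal sketch of what that IAM setup might look like, assuming current aws provider syntax; the self_tagger names are illustrative, not from the comment above.

resource "aws_iam_role" "self_tagger" {
  name = "self-tagger"

  # Trust policy: allow EC2 instances to assume this role.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "create_tags" {
  name = "create-tags"
  role = "${aws_iam_role.self_tagger.id}"

  # Grant only the ec2:CreateTags action described above.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:CreateTags",
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_instance_profile" "self_tagger" {
  name = "self-tagger"
  role = "${aws_iam_role.self_tagger.name}"
}

The spot request then sets iam_instance_profile = "${aws_iam_instance_profile.self_tagger.name}", which is what lets the cloud-init script call aws ec2 create-tags against its own instance ID.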
Another option: use autoscaling groups. Launch configurations themselves don't support tags, but tags defined on the aws_autoscaling_group with propagate_at_launch = true will be passed on to the resulting spot instances; see the sketch below.
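A minimal sketch of that approach, assuming a launch configuration that requests spot capacity via spot_price (resource names are illustrative):

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "${var.amiPuppet}"
  instance_type = "c3.4xlarge"

  # Requesting spot capacity through the launch configuration.
  spot_price = "${var.price}"
}

resource "aws_autoscaling_group" "web" {
  launch_configuration = "${aws_launch_configuration.web.name}"
  min_size             = 1
  max_size             = 1
  vpc_zone_identifier  = ["${var.subnet}"]

  # propagate_at_launch copies the tag onto every instance the group starts.
  tag {
    key                 = "Name"
    value               = "${var.name}.${var.domain}"
    propagate_at_launch = true
  }
}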
This issue seems to have been open for a year now. Any luck with a fix?
Until there is explicit support, we are getting by with the following user-data snippet to clone the spot request's tags to the instance.
It's somewhat simplistic and lacks error handling, but it does the job for us. It requires that the AWS CLI and curl are available, and that the instance has the proper IAM permissions, of course.
#!/bin/bash
REGION=us-east-1

# Ask the instance metadata service for our own instance ID.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Look up the spot request that produced this instance, if any.
SPOT_REQ_ID=$(aws --region "$REGION" ec2 describe-instances --instance-ids "$INSTANCE_ID" --query 'Reservations[0].Instances[0].SpotInstanceRequestId' --output text)

# On-demand instances report "None" here; only spot instances carry a request ID.
if [ "$SPOT_REQ_ID" != "None" ]; then
  # Copy the tags from the spot request onto the instance itself.
  TAGS=$(aws --region "$REGION" ec2 describe-spot-instance-requests --spot-instance-request-ids "$SPOT_REQ_ID" --query 'SpotInstanceRequests[0].Tags')
  aws --region "$REGION" ec2 create-tags --resources "$INSTANCE_ID" --tags "$TAGS"
fi
Issue still seems to be present in 0.9.4 and is causing problems; I will attempt the workaround solutions posted here.
Workarounds are good but not always applicable.
Please consider adding explicit support.
Still a problem in 0.11.3. No errors, but tags just seem to be disregarded.
@DanielMarquard did you try the workaround above: https://github.com/hashicorp/terraform/issues/3263#issuecomment-284387578
@drorata I haven't, but it seems like it would work. I'll have Terraform execute that command in the CI script. Thanks.
Still an issue on 0.11.7, but the workaround posted above works well.
Here's my workaround. I use environment variables, but your use case might not require them.
resource "aws_spot_instance_request" "instance" {
wait_for_fulfillment = true
provisioner "local-exec" {
command = "aws ec2 create-tags --resources ${aws_spot_instance_request.instance.spot_instance_id} --tags Key=Name,Value=ec2-resource-name"
environment {
AWS_ACCESS_KEY_ID = "${var.aws_access_key}"
AWS_SECRET_ACCESS_KEY = "${var.aws_secret_key}"
AWS_DEFAULT_REGION = "${var.region}"
}
}
}
Hi, I adapted the above workaround for count >= 1.
Region is important if instances aren't in your default region.
provisioner "local-exec" {
command = "aws ec2 create-tags --resources ${self.spot_instance_id} --tags Key=Name,Value=ec2-name-${count.index} --region ${var.region}"
If you also want tags applied to your volumes:
provisioner "local-exec" {
command = "${join("", formatlist("aws ec2 create-tags --resources ${self.spot_instance_id} --tags Key=\"%s\",Value=\"%s\" --region=${var.region}; ", keys(self.tags), values(self.tags)))}"
environment {
AWS_ACCESS_KEY_ID = "${var.aws_access_key}"
AWS_SECRET_ACCESS_KEY = "${var.aws_secret_key}"
AWS_DEFAULT_REGION = "${var.region}"
}
}
provisioner "local-exec" {
command = "for eachVolume in `aws ec2 describe-volumes --region ${var.region} --filters Name=attachment.instance-id,Values=${self.spot_instance_id} | jq -r .Volumes[].VolumeId`; do ${join("", formatlist("aws ec2 create-tags --resources $eachVolume --tags Key=\"%s\",Value=\"%s\" --region=${var.region}; ", keys(self.tags), values(self.tags)))} done;"
environment {
AWS_ACCESS_KEY_ID = "${var.aws_access_key}"
AWS_SECRET_ACCESS_KEY = "${var.aws_secret_key}"
AWS_DEFAULT_REGION = "${var.region}"
}
}
Note: the above uses jq.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.