Terraform v0.11.8
+ provider.aws v1.36.0
Here's a sanitized example from our actual HCL:
data "aws_ami" "ubuntu_xenial" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_launch_template" "my_launch_template" {
name = "name"
description = "desc"
disable_api_termination = false
ebs_optimized = true
iam_instance_profile {
arn = "${data.aws_iam_instance_profile.some_external_profile.arn}"
}
image_id = "${data.aws_ami.ubuntu_xenial.id}"
instance_initiated_shutdown_behavior = "terminate"
instance_type = "${var.instance_type}"
key_name = "${var.key_pair}"
monitoring {
enabled = true
}
vpc_security_group_ids = ["${var.some_security_group}"]
tag_specifications {
resource_type = "instance",
tags {
Name = "${var.name}"
Creator = "Terraform"
Autoscale = 1
}
}
# Must be pre-encoded in base64 in this case
user_data = "${base64encode(file("../resources/my-cloud-init"))}"
instance_market_options {
market_type = "spot"
# Other options in this block, except for max_price, apparently cause the template to be rejected by ASGs
# See https://github.com/terraform-providers/terraform-provider-aws/issues/5455
}
}
I don't have any sanitized debug output; if absolutely necessary, I can try recreating this issue in a fully contrived manner.
When outputting what changed, the latest_version should be incremented:

  ~ module.launch_templates.aws_launch_template.my_launch_template
      latest_version: "3" => "4"
      user_data:      ...

However, the output instead shows the latest_version resetting to 0:

  ~ module.launch_templates.aws_launch_template.my_launch_template
      latest_version: "3" => "0"
      user_data:      ...
Change user_data or image_id; terraform plan and terraform apply then show the latest_version resetting to 0 instead of incrementing. Our launch templates are in a module; perhaps this is a factor?
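A fully contrived reproduction could be as small as a single template whose user_data changes between runs; a minimal sketch (the resource name, instance type, and user_data string below are illustrative placeholders, not taken from our real config):

resource "aws_launch_template" "repro" {
  name          = "latest-version-repro"
  image_id      = "${data.aws_ami.ubuntu_xenial.id}"
  instance_type = "t2.micro"

  # Edit this string and re-plan: the diff shows
  # latest_version: "N" => "0" rather than "N" => "N+1".
  user_data = "${base64encode("echo hello")}"
}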
We see the same behavior when changes other than user_data happen (e.g. if a new Ubuntu AMI is published).
None that I'm aware of.
@ryanschneider I can reproduce the issue and am working on a PR to fix it. I'd just like to confirm: after the apply, is the latest_version actually incremented rather than set back to 0?
Correct, the latest_version is incremented correctly; it's just a cosmetic issue in the diff display, which caused us to balk before running apply.
@ryanschneider @radeksimko I'm not really sure how to fix this issue. Would you have any pointers?
I tried looking for other examples and I can see the same thing occurring for aws_mq_configuration:
$ terraform apply
aws_mq_configuration.test: Refreshing state... (ID: c-f678b983-d325-4b46-9171-8b0826cfb26a)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_mq_configuration.test
      description:     "Example Configuration" => "Example Configuration 2"
      latest_revision: "2" => "0"

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_mq_configuration.test: Modifying... (ID: c-f678b983-d325-4b46-9171-8b0826cfb26a)
  description:     "Example Configuration" => "Example Configuration 2"
  latest_revision: "2" => "<computed>"
aws_mq_configuration.test: Modifications complete after 1s (ID: c-f678b983-d325-4b46-9171-8b0826cfb26a)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Outputs:

mq_version = 3
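For reference, the configuration behind that run would look roughly like this (a sketch; the name, engine version, and XML body are placeholders, and only the description edit matters for triggering the diff):

resource "aws_mq_configuration" "test" {
  name           = "example"
  engine_type    = "ActiveMQ"
  engine_version = "5.15.0"

  # Changing only this attribute reproduces the bogus
  # latest_revision: "2" => "0" diff shown above.
  description = "Example Configuration 2"

  data = <<DATA
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<broker xmlns="http://activemq.apache.org/schema/core">
</broker>
DATA
}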
This issue is still showing up for me on Terraform 0.11.13 with provider 2.3.0
This bug is affecting my ability to do blue/green ASG deployments with launch templates.
Autoscaling groups don't automatically cycle out instances when the launch configuration is updated. A popular Terraform strategy is to make the name of the ASG based on the name of the launch config, so that when a new launch config is created, the ASG name also changes, triggering a recreation of the ASG. It looks something like:
resource "aws_autoscaling_group" "worker" {
name = "${aws_launch_configuration.worker.name}-asg"
lifecycle {
create_before_destroy = true
}
}
With launch templates, however, changes to the template only create a new version; the template ID and name are unchanged. In order to adopt the above strategy for replacing ASGs using launch templates, I've tied the name of the ASG to the launch template name + version, like:
resource "aws_autoscaling_group" "worker" {
name = "${aws_launch_template.worker.name}-${aws_launch_template.worker.latest_version}"
}
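For completeness, a fuller version of that resource would also reference the template itself, roughly like this (a sketch; the size arguments and lifecycle block are assumptions, not from the original snippet):

resource "aws_autoscaling_group" "worker" {
  name     = "${aws_launch_template.worker.name}-${aws_launch_template.worker.latest_version}"
  min_size = 1
  max_size = 3

  # availability_zones or vpc_zone_identifier is also required

  launch_template {
    id      = "${aws_launch_template.worker.id}"
    version = "${aws_launch_template.worker.latest_version}"
  }

  lifecycle {
    create_before_destroy = true
  }
}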
In theory this approach should work well, but due to the version bump bug the TF plan does not think it needs to recreate the ASG. After a TF apply, the launch template version increments, and a second TF plan indeed picks up that the ASG needs to be recreated.