Terraform v0.11.7
+ provider.aws v1.14.1
+ provider.dnsimple v0.1.0
+ provider.template v1.0.0
Terraform configuration (relevant parts only):
data "template_cloudinit_config" "userdata_config" {
gzip = true
base64_encode = true
# Regular userdata script
part {
content_type = "text/x-shellscript"
content = "${data.template_file.setup_script.rendered}"
}
# cloud-boothook
part {
content_type = "text/cloud-boothook"
content = "${data.template_file.cloud_boothook.rendered}"
}
}
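(The two template_file data sources referenced above aren't shown here; a minimal sketch of what they might look like — the file paths under templates/ are assumptions, not from the original config:)

# Hypothetical definitions of the referenced templates; the file paths
# are assumed for illustration only.
data "template_file" "setup_script" {
  template = "${file("${path.module}/templates/setup.sh")}"
}

data "template_file" "cloud_boothook" {
  template = "${file("${path.module}/templates/boothook.sh")}"
}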
resource "aws_launch_configuration" "ecs" {
name_prefix = "${title(var.environment)} ECS-"
image_id = "${var.image_id}"
instance_type = "${var.instance_type}"
security_groups = ["${aws_security_group.ecs.id}"]
iam_instance_profile = "${aws_iam_instance_profile.ecs.name}"
key_name = "${var.environment}"
associate_public_ip_address = true
user_data = "${data.template_cloudinit_config.userdata_config.rendered}"
}
(need to figure out if this contains sensitive info)
The contents of the user_data scripts haven't changed, so Terraform should not plan to do anything.
Terraform wants to change the user_data field for the AWS launch configuration, forcing a new resource. In the terraform plan output, I'd see something like:
user_data: "fedbca..." => "abcdef..." (forces new resource)
Actually, right now it's wanting to create a new launch configuration, even though it already exists (rather than destroying and recreating the existing one, which is what it normally tries to do).
Even if I apply the changes, the next time I run plan, Terraform still thinks it needs to change the user_data field, and the old and new hashes are the same as before.
I had no issue running terraform plan on earlier versions of the AWS provider (v1.6), but this started happening when I upgraded to v1.14.1.
UPDATE: I went back and downloaded a bunch of versions of the AWS provider, and this issue did not exist in v1.10, v1.11, or v1.14, so it's due to something that changed in the latest version of the provider.
Not sure if it's related to https://github.com/terraform-providers/terraform-provider-aws/issues/4056 because I'm not actually getting any errors, it's just doing the wrong thing.
> Actually, right now it's wanting to create a new launch configuration, even though it already exists (rather than destroying and recreating the existing one, which is what it normally tries to do).
This seems to happen if someone runs terraform apply with whatever Terraform thinks it needs to do, but then does not commit the tfstate changes. For example, if my coworker ran terraform apply but forgot to commit the tfstate changes, the Terraform state gets out of sync even if nothing else changed, and then Terraform thinks it needs to make a new resource. I guess that's because the tfstate records a name for the launch configuration that includes a timestamp suffix, but what actually exists is a totally different name with a different timestamp. So it's not directly related to this issue, but this issue makes it more problematic, because it's easy to apply, think nothing changed, and then not commit the tfstate changes.
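(Tangentially, the forgotten-tfstate-commit problem goes away if state lives in a remote backend rather than in the repo. A minimal sketch — the bucket and table names here are made up:)

# With a remote backend, state is stored in S3 and locked via DynamoDB,
# so there is no local tfstate file to forget to commit.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state" # made-up bucket name
    key            = "ecs/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # made-up lock table name
  }
}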
After narrowing it down and figuring out that the bug was introduced in v1.14.1, my suspicion is that it has to do with this PR, which is the only one in the changelog that relates to launch configurations: https://github.com/terraform-providers/terraform-provider-aws/pull/2800
My guess is that the hash function is somehow wrong (maybe hashing the wrong thing, or hashing before gzip/base64 encoding?), so it constantly thinks something changed when in fact nothing did.
We're also seeing the same issue on two different codebases, both using v1.14.1.
Is your user data Base64/gzip encoded? We currently have a separate attribute for handling this on the aws_instance resource, and maybe we need to do the same with aws_launch_configuration:
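(A minimal sketch of that attribute on aws_instance; the AMI ID and instance type are placeholders:)

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"              # placeholder instance type

  # The cloudinit config above sets base64_encode = true, so the payload is
  # already base64-encoded and goes in user_data_base64 instead of user_data.
  user_data_base64 = "${data.template_cloudinit_config.userdata_config.rendered}"
}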
Support for the aws_launch_configuration resource's user_data_base64 attribute has been merged in via #4257 and will release with v1.16.0 of the AWS provider, likely mid next week. If using that attribute when necessary doesn't solve this issue, we can reopen it.
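(Once that ships, the launch configuration above could presumably switch to the new attribute; a sketch, assuming the same cloudinit config:)

resource "aws_launch_configuration" "ecs" {
  name_prefix                 = "${title(var.environment)} ECS-"
  image_id                    = "${var.image_id}"
  instance_type               = "${var.instance_type}"
  security_groups             = ["${aws_security_group.ecs.id}"]
  iam_instance_profile        = "${aws_iam_instance_profile.ecs.name}"
  key_name                    = "${var.environment}"
  associate_public_ip_address = true

  # The cloudinit output is already base64-encoded (base64_encode = true),
  # so it goes in user_data_base64 rather than user_data.
  user_data_base64 = "${data.template_cloudinit_config.userdata_config.rendered}"
}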
I'm using both base64 and gzip (as noted in the config file that I pasted), so would that fix work for me? I guess I'll see when v1.16.0 is released?
So we would need to use a different attribute if the user data is base64-encoded? This does not seem like a good fix: our user data is generated by cloudinit, and whether it's base64/gzipped or not is specified there, so we would have to manually switch attributes whenever we change the cloudinit resource. It would be better to fix the regression.
As reported in the PR, the issue is actually fixed there without needing the new attribute. So I'm happy the issue was fixed, but the extra attribute just seems unnecessary.
Hi all, sorry about writing to a closed issue, but I can see this happening and this thread is my closest find. I basically get what @ibrahima described, with the slight difference that the user data is to be changed to "", which is the actual state of the property:
user_data: "fedbca..." => "" (forces new resource)
The strange thing is that I don't even have the instance in an autoscaling group, i.e. there is no launch configuration involved.
We use provider.aws v1.9.0, Terraform v0.11.2, and remote state in S3 with locking handled by DynamoDB. I can see correct changes to the state file after some taint/untaint commands I tried while resolving our problem, so the remote backend is configured OK. We verified that the user data is empty by:

- logging into the instance and seeing no user data with this command:

  curl http://169.254.169.254/latest/user-data

- fetching the user data via a Python/boto3 call and confirming that it is empty.
Any help is greatly appreciated!
@marobabic the issue was fixed in provider.aws version 1.16, so you'll need to upgrade to see if it fixes your issue.
Thanks for the answer @ibrahima, the upgrade helped
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!