Terraform v0.11.7
provider.aws: version = "~> 1.33"
Affected resources: aws_autoscaling_group, aws_launch_template
resource "aws_launch_template" "lt_node" {
name_prefix = "${format("lt-%s-node-", var.env)}"
image_id = "${data.aws_ami.amznami.id}"
instance_type = "${var.instanceType}"
key_name = "${format("kp-%s-%s", var.env, var.servicesKeypair)}"
vpc_security_group_ids = ["${data.aws_security_group.node_sg.id}", "${data.aws_security_group.tools_sg.id}"]
iam_instance_profile = ["${data.aws_iam_instance_profile.node_instance_profile.name}"]
user_data = "${base64encode(data.template_file.user_data.rendered)}"
block_device_mappings {
ebs {
delete_on_termination = false
}
}
network_interfaces {
associate_public_ip_address = false
}
credit_specification {
cpu_credits = "${var.cpuCreditsMode}"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "asg_node" {
name = "${replace(aws_launch_template.lt_node.name,"lt-","asg-")}"
vpc_zone_identifier = ["${data.aws_subnet_ids.tools_private_subnet_ids.ids}"]
load_balancers = ["${aws_elb.elb_node.id}"]
health_check_grace_period = 180
health_check_type = "ELB"
force_delete = false
termination_policies = ["OldestInstance"]
min_size = "${var.nodeMinHostsCount}"
max_size = "${var.nodeMaxHostsCount}"
min_elb_capacity = "${var.nodeMinHostsCount}"
desired_capacity = "${var.nodeMinHostsCount}"
launch_template = {
id = "${aws_launch_template.lt_node.id}"
version = "${aws_launch_template.lt_node.latest_version}"
}
lifecycle {
create_before_destroy = true
}
tag {
key = "Name"
value = "${format("ec2-%s-node", var.env)}"
propagate_at_launch = true
}
tag {
key = "Environment"
value = "${var.env}"
propagate_at_launch = true
}
tag {
key = "Owner"
value = "${var.commonOwnerTag}"
propagate_at_launch = true
}
depends_on = ["aws_launch_template.lt_node"]
}
Error: Error running plan: 1 error(s) occurred:
module.node.aws_autoscaling_group.asg_node: 1 error(s) occurred:
module.node.aws_autoscaling_group.asg_node: Resource 'aws_launch_template.lt_node' not found for variable 'aws_launch_template.lt_node.name'
Expected Behavior: Plan/Apply should update the ASG with the new configuration.
Actual Behavior: Plan/Apply fails to calculate the launch_configuration -> launch_template change.
Steps to Reproduce: terraform plan OR terraform apply

It seems like Terraform fails to recognize that there should be a change from LC (launch configuration) to LT (launch template) and throws a non-descriptive error.
I have tried running this in an empty environment as well as an existing environment; the error is the same. The only difference between runs is that the error mentions lt_node.name, lt_node.latest_version, or lt_node.id.
This may be related to #4570.
Can replicate this on Terraform 0.10.7 and provider.aws: version 1.26 -> 1.36.
I have found that this is improper error handling in the aws_launch_template resource: the resource expects iam_instance_profile to be an object.
@Yashiroo I was able to resolve the issue by changing iam_instance_profile to an object:
resource "aws_launch_template" "lt_node" {
name_prefix = "${format("lt-%s-node-", var.env)}"
image_id = "${data.aws_ami.amznami.id}"
instance_type = "${var.instanceType}"
key_name = "${format("kp-%s-%s", var.env, var.servicesKeypair)}"
vpc_security_group_ids = ["${data.aws_security_group.node_sg.id}", "${data.aws_security_group.tools_sg.id}"]
user_data = "${base64encode(data.template_file.user_data.rendered)}"
iam_instance_profile {
id = "${data.aws_iam_instance_profile.node_instance_profile.name}"
}
block_device_mappings {
ebs {
delete_on_termination = false
}
}
network_interfaces {
associate_public_ip_address = false
}
credit_specification {
cpu_credits = "${var.cpuCreditsMode}"
}
lifecycle {
create_before_destroy = true
}
}
Hi folks! 👋 Sorry for the unexpected behavior here. I believe there are two things happening here that make the problem more complicated than it should be.
TL;DR: @afdezl is correct; in the aws_launch_template resource, the iam_instance_profile argument expects a configuration block (under the hood, a single-element list containing a map). This error is sometimes masked by unexpected behavior in Terraform core.
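For reference, a minimal sketch of the block form the provider expects (the data source reference is taken from the reporter's configuration; all other arguments are elided):

resource "aws_launch_template" "lt_node" {
  # ...other arguments as in the original configuration...

  # iam_instance_profile is a configuration block with a name (or arn) attribute,
  # not a string or a list.
  iam_instance_profile {
    name = "${data.aws_iam_instance_profile.node_instance_profile.name}"
  }
}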
On Terraform 0.11.8, it seems to output a configuration validation error correctly along with the Resource X not found error, but only when both the aws_launch_template resource and its reference are being created at the same time. Given this example configuration:
# Invalid configuration - do not use this in a real environment
terraform {
  required_version = "0.11.8"
}

provider "aws" {
  version = "1.36.0"
}

resource "null_resource" "test" {}

resource "aws_launch_template" "test" {
  iam_instance_profile = ["${null_resource.test.id}"]
}

output "test" {
  value = "${aws_launch_template.test.id}"
}
On first terraform apply:
2 error(s) occurred:
* output.test: Resource 'aws_launch_template.test' not found for variable 'aws_launch_template.test.id'
* aws_launch_template.test: iam_instance_profile.0: expected object, got string
On the second terraform apply, it only reports the unhelpful error (from the debug logs we can see that the validation error should have been reported):
2018/09/19 11:05:55 [ERROR] root: eval: *terraform.EvalValidateResource, err: Warnings: []. Errors: [iam_instance_profile.0: expected object, got string]
2018/09/19 11:05:55 [ERROR] root: eval: *terraform.EvalSequence, err: Warnings: []. Errors: [iam_instance_profile.0: expected object, got string]
2018/09/19 11:05:55 [TRACE] [walkPlan] Exiting eval tree: aws_launch_template.test
2018/09/19 11:05:55 [DEBUG] plugin: waiting for all plugin processes to complete...
Error: Error running plan: 1 error(s) occurred:
* output.test: Resource 'aws_launch_template.test' not found for variable 'aws_launch_template.test.id'
Fixing that will likely require some changes upstream in Terraform core, rather than anything to do with the AWS provider specifically. It seems similar to these earlier reported issues: when a resource fails validation, Terraform prefers to return the invalid resource reference message instead of the resource validation error:
I would suggest commenting on and upvoting those for the latest updates on that side of this. 👍
I'm going to close this issue out, as we have determined it's a configuration error and we have a few upstream tracking issues relating to ensuring the validation errors are actually output along with the Resource X not found errors.
Thanks @afdezl, it worked for me in this form:
iam_instance_profile {
  name = "${data.aws_iam_instance_profile.node_instance_profile.name}"
}
As noted above, the configuration block syntax is the correct syntax for this. 👍
I was also able to confirm that Terraform 0.12 will now properly report errors when the configuration is not using the correct syntax:
# Invalid configuration to show configuration syntax error in Terraform 0.12
terraform {
  required_providers {
    aws  = "2.15.0"
    null = "2.1.0"
  }

  required_version = "0.12.2"
}

provider "aws" {
  region = "us-east-2"
}

resource "null_resource" "test" {}

resource "aws_launch_template" "test" {
  iam_instance_profile = [null_resource.test.id]
}

output "test" {
  value = aws_launch_template.test.id
}
$ terraform apply
Error: Unsupported argument
on main.tf line 16, in resource "aws_launch_template" "test":
16: iam_instance_profile = [null_resource.test.id]
An argument named "iam_instance_profile" is not expected here. Did you mean to
define a block of type "iam_instance_profile"?
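For completeness, a hedged sketch of what the valid block syntax would look like for that demonstration config in Terraform 0.12 (the null_resource id is still only a placeholder value, not a real instance profile name):

resource "aws_launch_template" "test" {
  # Correct 0.12 syntax: iam_instance_profile as a block, not an argument
  iam_instance_profile {
    name = null_resource.test.id # placeholder value for demonstration only
  }
}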
Locking this issue as it's resolved. 😄