> terraform --version
Terraform v0.6.16-dev (813f2ca7085ce3dfce4e8c30dde53c2316cf4429)
Terraform template
resource "aws_security_group" "security_group" {
"description" = "Not the Description in AWS"
...
ingress {
"from_port" = 22
"to_port" = 22
"protocol" = "tcp"
"cidr_blocks" = ["0.0.0.0/0"]
}
}
Terraform state
{
  "version": 1,
  "serial": 1,
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {
        "aws_security_group.security_group": {
          "type": "aws_security_group",
          "primary": {
            "id": "sg-11111111"
          }
        }
      }
    }
  ]
}
Running
> terraform plan
-/+ aws_security_group.security_group
description: "Description in AWS" => "Not the Description in AWS" (forces new resource)
...
Plan: 1 to add, 0 to change, 1 to destroy.
This is correct, but I want to ignore changes to the description. So I added a lifecycle ignore_changes block, making the Terraform template:
resource "aws_security_group" "security_group" {
"description" = "Not the Description in AWS"
...
lifecycle {
"ignore_changes" = ["description"]
}
ingress {
"from_port" = 22
"to_port" = 22
"protocol" = "tcp"
"cidr_blocks" = ["0.0.0.0/0"]
}
}
Now running
> terraform plan
~ aws_security_group.security_group
    ingress.1111111111.self: "false" => "0"
Plan: 1 to add, 0 to change, 1 to destroy.
This is incorrect, and I have no idea what an apply will do. Will it change the resource in place, or remove and re-add it?
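One way to pin down exactly what an apply would do is to save the plan and then inspect or apply that exact file. This is only a sketch (the file name sg.tfplan is arbitrary), but plan -out, show, and apply with a saved plan file should all be available on this version:

> terraform plan -out=sg.tfplan
> terraform show sg.tfplan
> terraform apply sg.tfplan

Applying a saved plan executes only the actions recorded in it, so whatever terraform show prints for that file is what apply will actually perform.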
Having this issue on v0.6.16 as well.
I'm seeing a similar issue with v0.6.16, using lifecycle.ignore_changes with the aws_instance resource.
With aws_instance, using lifecycle.ignore_changes on the instance resource by itself seems to work fine. For example, if you place key_name in the ignore_changes list and then change the key name, this behaves properly:
resource "aws_instance" "test_instance" {
key_name = "changed-key"
...
lifecycle {
ignore_changes = ["key_name"]
}
}
terraform plan reports no changes.
However, as soon as there is a dependent resource, terraform plan seems to go awry. For example, if you created the initial infrastructure using:
resource "aws_instance" "test_instance" {
key_name = "key"
...
lifecycle {
ignore_changes = ["key_name"]
}
}
resource "aws_route53_record" "test_dns_record" {
name = "test-instance"
records = ["${aws_instance.test_instance.0.private_ip}"]
...
}
If you then change the key_name and run terraform plan, the result does not actually show the instance add/destroy in the plan, but it does show the DNS record being changed, and it seems to count the instance add/destroy in the final totals:
> terraform plan
...
~ aws_route53_record.test_dns_record
    records.#: "" => "<computed>"
Plan: 1 to add, 1 to change, 1 to destroy.
Despite this, running terraform apply in this case does seem to behave properly:
> terraform apply
...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Though even after applying, running terraform plan continues to report the incorrect plan, with the DNS record change and the wrong counts.
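One diagnostic step that might help rule out a stale state file (a sketch only; it is untested whether this clears the spurious diff in this case) is to re-sync state from the real infrastructure before planning again:

> terraform refresh
> terraform plan

terraform refresh reads the actual resources and updates the local state to match them, so any diff that remains afterwards comes from the configuration comparison itself rather than from stale state.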
I see the above issue with ignoring the description on AWS security groups as well.
I can confirm that I observe the exact same issue described by @mikeocool.
I'm also having this problem with the aws_instance resource in v0.6.16.
I am no longer seeing this issue, at least in the case I described above with regard to aws_instance, using Terraform 0.8.2.
I have added the following to the original security group, as otherwise it will always prompt to remove the rules:

lifecycle {
  ignore_changes = ["ingress"]
}
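Applied to the security group from the original report, the combined resource would look roughly like the sketch below. Merging the two ignored attributes is purely illustrative; whether ignoring "ingress" actually suppresses the rule diff on this version is exactly what is in question in this issue.

resource "aws_security_group" "security_group" {
  "description" = "Not the Description in AWS"
  ...

  lifecycle {
    # Illustrative merge: "description" from the original report plus the
    # "ingress" entry this comment adds.
    "ignore_changes" = ["description", "ingress"]
  }

  ingress {
    "from_port"   = 22
    "to_port"     = 22
    "protocol"    = "tcp"
    "cidr_blocks" = ["0.0.0.0/0"]
  }
}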
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.