Having upgraded from Terraform 0.6.8 to 0.6.14, terraform plan now reports:
~ aws_spot_instance_request.import_worker.0
source_dest_check: "false" => "0"
wait_for_fulfillment: "true" => "1"
The relevant lines from the resource are:
source_dest_check = false
wait_for_fulfillment = true
These lines have not changed since the last apply.
If we run terraform apply, it reports an error:
* aws_spot_instance_request.import_worker.3: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.
Please include the following information in your report:
Terraform Version: 0.6.14
Resource ID: aws_spot_instance_request.import_worker.3
Mismatch reason: diff: Destroy; old: false, new: true
Diff One (usually from plan): *terraform.InstanceDiff{Attributes:map[string]*terraform.ResourceAttrDiff{"source_dest_check":*terraform.ResourceAttrDiff{Old:"false", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "wait_for_fulfillment":*terraform.ResourceAttrDiff{Old:"true", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}}, Destroy:false, DestroyTainted:false}
Diff Two (usually from apply): *terraform.InstanceDiff{Attributes:map[string]*terraform.ResourceAttrDiff{"instance_type":*terraform.ResourceAttrDiff{Old:"m3.xlarge", New:"m3.xlarge", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "instance_state":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "source_dest_check":*terraform.ResourceAttrDiff{Old:"false", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "subnet_id":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "public_dns":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "spot_type":*terraform.ResourceAttrDiff{Old:"persistent", New:"persistent", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "placement_group":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "spot_bid_status":*terraform.ResourceAttrDiff{Old:"fulfilled", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "spot_price":*terraform.ResourceAttrDiff{Old:"0.05", New:"0.05", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "private_ip":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "key_name":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"**redacted**", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "root_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, 
NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "ephemeral_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "user_data":*terraform.ResourceAttrDiff{Old:"ad9ff2c8d7884be83b229e9b4ca134130ed9e927", New:"ad9ff2c8d7884be83b229e9b4ca134130ed9e927", NewComputed:false, NewRemoved:false, NewExtra:"**redacted**", RequiresNew:false, Type:0x0}, "spot_request_state":*terraform.ResourceAttrDiff{Old:"active", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "availability_zone":*terraform.ResourceAttrDiff{Old:"us-west-1c", New:"us-west-1c", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "private_dns":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "ebs_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "public_ip":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "tenancy":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "tags.#":*terraform.ResourceAttrDiff{Old:"1", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "vpc_security_group_ids.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "security_groups.#":*terraform.ResourceAttrDiff{Old:"1", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "spot_instance_id":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"", 
NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "wait_for_fulfillment":*terraform.ResourceAttrDiff{Old:"true", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "tags.Name":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"**redacted**", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "security_groups.**redacted**":*terraform.ResourceAttrDiff{Old:"**redacted**", New:"**redacted**", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}}, Destroy:true, DestroyTainted:false}
found similar issue with disable_api_termination:
~ module.data_nodes.postgresql.aws_instance.postgresql
disable_api_termination: "true" => "1"
One of the reasons it happens is mapstructure library. It converts true/false to 1/0: https://github.com/mitchellh/mapstructure/blob/d2dd0262208475919e1a362f675cfc0e7c10e905/mapstructure.go#L61
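For illustration, here is a minimal Go sketch of that conversion. weakBoolToString is a hypothetical stand-in for the behavior on the linked mapstructure line, not the library's actual API: when a bool is decoded weakly into a string field, it becomes "1"/"0" rather than "true"/"false", so it no longer matches the "false" Terraform stored in state.

```go
package main

import "fmt"

// weakBoolToString (hypothetical helper) mimics mapstructure's
// weakly-typed decode of a bool into a string target: it produces
// "1"/"0", not "true"/"false".
func weakBoolToString(b bool) string {
	if b {
		return "1"
	}
	return "0"
}

func main() {
	// Terraform stored the config value as the string "false"...
	stateValue := "false"
	// ...but re-decoding the bool yields "0", so the strings differ
	// and Terraform sees a spurious diff: "false" => "0".
	decodedValue := weakBoolToString(false)
	fmt.Printf("%q => %q (differ: %v)\n", stateValue, decodedValue, stateValue != decodedValue)
}
```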
This is making Terraform really hard to use, and it's made a lot worse by the way dependencies work in Terraform.
Given:
resource "aws_instance" "worker" {
count = "${var.worker_count}"
source_dest_check = false
...
}
resource "aws_route53_record" "worker" {
count = "${var.worker_count}"
name = "${format("host-%02d", count.index + 1)}.mydomain.com."
records = ["${element(aws_instance.worker.*.private_ip, count.index)}"]
...
}
Because of how dependencies work in Terraform, using element() here makes every instance of aws_instance.worker a dependency of every aws_route53_record.worker instance, even though only aws_instance.worker[i] is a real dependency. Combined with the integer casting bug, we're unable to use aws_route53_record at all: targeting any instance of it with -target also tries to rebuild every aws_instance.worker instance.
To demonstrate further:
$ terraform plan -target=aws_route53_record.worker[3]
Refreshing Terraform state prior to plan...
...
~ aws_instance.worker.0
source_dest_check: "false" => "0"
~ aws_instance.worker.1
source_dest_check: "false" => "0"
~ aws_instance.worker.2
source_dest_check: "false" => "0"
+ aws_route53_record.worker.3
fqdn: "" => "<computed>"
name: "" => "host-04.mydomain.com."
records.#: "" => "<computed>"
ttl: "" => "60"
type: "" => "A"
zone_id: "" => "*"
Plan: 4 to add, 0 to change, 3 to destroy.
This one is affecting us pretty heavily. We want to be able to ignore certain changes on ec2 instances that would otherwise cause a new resource and have the flexibility to manually taint them at our leisure at a later time.
Hey folks, apologies for all the trouble here! This was a tricky one to run down, since I didn't originally realize that ignore_changes is key to reproduction here. It looks like a boolean casting bug, when really it is an ignore_changes bug.
In order to reproduce, we need a config that has ignore_changes applied to another attribute that forces replacement, like this:
resource "aws_instance" "worker" {
instance_type = "t2.micro"
ami = "ami-534d5d32"
source_dest_check = false
lifecycle {
ignore_changes = ["instance_type"]
}
}
After applying that, if we change the instance_type to e.g. t2.nano, we'll get confusing-looking plans like the ones in the reports above:
~ aws_instance.worker
source_dest_check: "false" => "0"
And errors in applies:
* aws_instance.worker: diffs didn't match during apply
...
Resource ID: aws_instance.worker
Mismatch reason: diff: Destroy; old: false, new: true
I'm digging in to a fix for this now!
Everybody who reported issues: let me know if this sounds like it could be true for your error case. If you feel like you have a repro that does not involve the above scenario - see if you can get a consistent set of steps for it and let me know!
@phinze This is exactly my case as reported in #6047, except I was changing AMI instead of instance type.
@phinze Yeah, I'm using ignore_changes in #6152 as well.
Same for me, using ignore_changes with user_data
Yep, we're using ignore_changes also.
I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.