Using terraform 0.6.5
I am unable to get terraform to apply into a consistent state - that is, an apply followed by a plan always wants to blow away all my instance resources and start over, every time, despite the fact that terraform apply says everything was successful. Obviously this renders terraform an unusable tool for this use case.
It seems the offending part relates to each instance having an EBS block volume, which forces a new resource even though the pre and post values are the same. Note the "forces new resource" sections below, from a plan run immediately after a successful apply:
-/+ aws_instance.foo_2
...
associate_public_ip_address: "false" => "0"
...
ebs_block_device.#: "2" => "1"
ebs_block_device.158414664.delete_on_termination: "1" => "0"
ebs_block_device.158414664.device_name: "/dev/xvda" => ""
ebs_block_device.3905984573.delete_on_termination: "true" => "1" (forces new resource)
ebs_block_device.3905984573.device_name: "/dev/xvdb" => "/dev/xvdb" (forces new resource)
ebs_block_device.3905984573.encrypted: "false" => "<computed>"
ebs_block_device.3905984573.iops: "30" => "<computed>"
ebs_block_device.3905984573.snapshot_id: "" => "<computed>"
ebs_block_device.3905984573.volume_size: "10" => "10" (forces new resource)
ebs_block_device.3905984573.volume_type: "gp2" => "gp2" (forces new resource)
ephemeral_block_device.#: "0" => "<computed>"
...
As you can see from the above, it's pretty screwed, and terraform is now unusable.
This is my code for an instance (obviously with a fair number of variables in there):
resource "aws_instance" "foo_2" {
ami = "${var.foo.ami_image_2}"
availability_zone = "${var.foo.availability_zone_2}"
instance_type = "${var.foo.instance_type_2}"
key_name = "${var.foo.key_name_2}"
vpc_security_group_ids = ["${aws_security_group.base.id}",
"${aws_security_group.foo.id}"]
subnet_id = "${aws_subnet.foo_az2.id}"
associate_public_ip_address = false
source_dest_check = true
monitoring = true
iam_instance_profile = "${aws_iam_instance_profile.foo.name}"
count = "${var.foo.large_cluster_size}"
user_data = "${template_file.userdata_foo_2.rendered}"
ebs_block_device {
device_name = "/dev/xvdb"
volume_type = "gp2"
volume_size = "${var.foo.disk_size}"
delete_on_termination = "${var.foo.disk_termination}"
}
}
A couple of other things I've noticed:
You see the EBS block device ID above, "ebs_block_device.3905984573"? The ID is the same for all instance resources in the output, which strikes me as a bit odd.
After applying all my instances with terraform, I commented out all the ebs_block_device settings in all the instance definitions and then tried a plan. Now terraform plan says there are no changes.
It looks like EBS block devices are severely broken in terraform 0.6.5.
It seems very similar to https://github.com/hashicorp/terraform/issues/1260. We are also building our AMIs with packer.
Hey @gtmtech – are there any external volume attachments?
Hi - in fact, following the advice in #1260, I managed to fix it.
The output of packer was a two-volume AMI, and it seems terraform can't handle that when another volume is added (this was with terraform 0.6.6).
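For anyone following along: as I understand it, the advice in #1260 boils down to telling terraform to ignore the ebs_block_device drift it detects after creation. Roughly like this, applied to the foo_2 resource above (a sketch, not the exact change):
resource "aws_instance" "foo_2" {
  ami           = "${var.foo.ami_image_2}"
  instance_type = "${var.foo.instance_type_2}"
  # ... all the other arguments from the resource above stay the same ...

  # Ignore post-creation diffs on ebs_block_device so the AMI's baked-in
  # second volume no longer forces a new resource on every plan.
  lifecycle {
    ignore_changes = ["ebs_block_device"]
  }
}
The downside is that terraform will then also ignore deliberate edits to the ebs_block_device settings once the instance exists.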
This also happens if you create an AMI with an io1 EBS volume on /dev/sdb and want terraform to override the ebs_block_device to a gp2 volume_type. The resulting instance is created properly, but the state is not idempotent, so each apply tries to relaunch the instance.
This is still happening with 0.6.8:
ebs_block_device.#: "1" => "1"
ebs_block_device.1247340589.delete_on_termination: "1" => "0"
ebs_block_device.1247340589.device_name: "/dev/sdb" => ""
ebs_block_device.2576023345.delete_on_termination: "" => "1" (forces new resource)
ebs_block_device.2576023345.device_name: "" => "/dev/sdb" (forces new resource)
ebs_block_device.2576023345.encrypted: "" => "0" (forces new resource)
ebs_block_device.2576023345.iops: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.volume_size: "" => "80" (forces new resource)
ebs_block_device.2576023345.volume_type: "" => "gp2" (forces new resource)
The ebs_block_device config in question is:
ebs_block_device {
  device_name           = "/dev/sdb"
  volume_type           = "gp2"
  delete_on_termination = 1
  encrypted             = false
  volume_size           = 80
}
while the AMI being launched has an io1-type block device on /dev/sdb.
Running terraform 0.6.13 and still hitting this issue.
This happens for only some AMIs.
I am trying to use terraform in production and failing to do so because of this issue. I understand this was fixed before in #1260 - can someone help me fix it?
@phinze
Using terraform version: v0.6.14
resource "aws_instance" "test" {
count="${var.test_count}"
ami = "${var.test_ami}"
instance_type = "${var.test_instance_type}"
subnet_id = "${var.test_subnet_id}"
tenancy = "dedicated"
disable_api_termination= false
ebs_block_device {
device_name = "/dev/sda1"
delete_on_termination = true
volume_type = "gp2"
}
}
Terraform output every time I run terraform plan:
ebs_block_device.2613854568.delete_on_termination: "" => "1" (forces new resource)
Plan: 1 to add, 0 to change, 1 to destroy.
I get the same error with the ebs_block_device config below as well, which is based on the suggested fix in #1260:
ebs_block_device {
  device_name           = "/dev/sda1"
  delete_on_termination = true
  volume_type           = "gp2"
  volume_size           = 100
}
Any help will be appreciated.
Thanks
This is still a problem in 0.7.3
If you put the snapshot ID from the AMI on the ebs_block_device for the volume that has the same device name as the one in the AMI, then it doesn't think it needs to change.
It's as though terraform interprets an unspecified snapshot_id as meaning the block device has no snapshot at all, even though there is one in the AMI.
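In other words, something along these lines keeps the plan clean (a sketch only; the snapshot ID, device name and size are placeholders, not values from this issue):
ebs_block_device {
  device_name = "/dev/sdb"
  # Placeholder: copy in whatever snapshot ID the AMI actually registered for this device
  snapshot_id = "snap-0123456789abcdef0"
  volume_type = "gp2"
  volume_size = 80
}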
@mitchellh, sorry to mention you specifically on this bug but it's been sitting here for many versions (originally in 0.6.5 and still a bug in 0.8.5, just tried it).
I was wondering: are there plans to make it so that values not explicitly set on ebs_block_device (for example snapshot_id, which comes from the AMI) don't cause the aws_instance to be destroyed and recreated? If I have block device mappings specified in my packer build and those non-root volumes end up being part of the AMI, then to keep the instance from being destroyed every time I either have to put the snapshot_id in the ebs_block_device (which makes data.aws_ami kind of useless, since I can't easily get the snapshot_id for a particular block device mapping from that data source).
Or I have to use the workaround mentioned in #1260, which basically means the ebs_block_device settings only take effect when the resource is created:
lifecycle {
  ignore_changes = ["ebs_block_device"]
}
That workaround is certainly better than nothing, but I'd rather have things like changes to disk types or volume sizes actually trigger the resource to be destroyed/created. I'm just not seeing a great solution here.
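To make that tradeoff concrete, here is roughly where the workaround sits in a resource (a sketch with placeholder names and sizes): once ignore_changes covers ebs_block_device, later edits to volume_type or volume_size here will no longer show up in a plan either.
resource "aws_instance" "app" {
  ami           = "${data.aws_ami.app.id}"
  instance_type = "t2.medium"

  ebs_block_device {
    device_name = "/dev/xvdb"
    volume_type = "gp2"
    volume_size = 100  # changing this later is also ignored once the lifecycle block is in place
  }

  # Suppresses the spurious "forces new resource" diff on AMI-baked volumes,
  # at the cost of also ignoring intentional ebs_block_device changes.
  lifecycle {
    ignore_changes = ["ebs_block_device"]
  }
}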
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.