_This issue was originally opened by @notuscloud as hashicorp/terraform#15659. It was migrated here as a result of the provider split. The original body of the issue is below._
Terraform version: v0.9.11
# EC2 instance
module "elk" {
  source                      = "../commons/aws/ec2/instance"
  ami                         = "${var.aws_default_ami}"
  type                        = "${var.elk_flavor}"
  key_name                    = "${var.aws_default_keyname}"
  subnet_id                   = "${data.aws_subnet.logs.id}"
  associate_public_ip_address = true
  availability_zone           = "${var.availability_zone}"
  vpc_security_group_ids      = ["${data.aws_security_group.logs_main_sg.id}"]
  docker_disk_size            = 16

  tags {
    Name  = "elk"
    env   = "${var.env}"
    stack = "${var.stack}"
  }
}

# EBS Volume attachment
resource "aws_volume_attachment" "elk_ebs" {
  device_name  = "/dev/sdc"
  volume_id    = "${data.aws_ebs_volume.elk_data.id}"
  instance_id  = "${module.elk.id}"
  force_detach = true
}
-/+ aws_volume_attachment.elk_ebs
device_name: "/dev/sdc" => "/dev/sdc"
force_detach: "true" => "true"
instance_id: "i-someid" => "${module.elk.id}" (forces new resource)
skip_destroy: "" => "<computed>"
volume_id: "vol-someid" => "vol-someid"
-/+ module.elk.aws_instance.instance
ami: "ami-fa2df395" => "ami-fa2df395"
associate_public_ip_address: "true" => "true"
availability_zone: "eu-central-1a" => "eu-central-1a"
ebs_block_device.#: "2" => "1"
ebs_block_device.2554893574.delete_on_termination: "false" => "false"
ebs_block_device.2554893574.device_name: "/dev/sdc" => "" (forces new resource)
ebs_block_device.2576023345.delete_on_termination: "true" => "true"
ebs_block_device.2576023345.device_name: "/dev/sdb" => "/dev/sdb"
ebs_block_device.2576023345.encrypted: "false" => "<computed>"
ebs_block_device.2576023345.iops: "0" => "<computed>"
ebs_block_device.2576023345.snapshot_id: "" => "<computed>"
ebs_block_device.2576023345.volume_size: "16" => "16"
ebs_block_device.2576023345.volume_type:
The aws_volume_attachment should not be forced into a new resource, so the drive letter should not change for the instance. As nothing changes for the attachment or the aws_instance, nothing should happen.

Instead, the aws_volume_attachment is destroyed and forced into a new resource. Because the aws_volume_attachment is forced anew, it provokes a device_name change on the aws_instance, and that device_name change forces a new resource for the attached aws_instance. To summarize: I make no changes, yet both the aws_volume_attachment and the aws_instance are deleted and then re-created.

To reproduce, run terraform apply, then run terraform plan to see what is going to happen (or terraform apply again).
Thanks for your help,
Thomas
Terraform version v0.9.11
Found the same behaviour. Example code and output below.
resource "aws_instance" "mongo" {
depends_on = [ "aws_iam_instance_profile.ec2_default" ]
ami = "${var.ami}"
instance_type = "${var.instance_type}"
associate_public_ip_address = "${var.associate_public_ip_address}"
iam_instance_profile = "${aws_iam_instance_profile.ec2_default.name}"
# TODO: need to spread this across AZs OR use ASG to do it for us
subnet_id = "${var.subnets[0]}"
vpc_security_group_ids = [
"${aws_security_group.allow_ssh_ip.id}",
"${aws_security_group.mongo.id}",
]
key_name = "${var.ssh_key}"
# root device size 20GB
root_block_device {
volume_type = "standard"
volume_size = 20
}
# MongoDB storage disk:
# /var/lib/mongodb
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html
ebs_block_device {
device_name = "/dev/xvdh"
volume_size = "${var.additional_disk_size}"
volume_type = "gp2"
delete_on_termination = true
# volume_type = "io1"
# iops = 3000
}
user_data = "${data.template_file.init.rendered}"
tags {
Name = "${var.name}-01 ${var.environment} ${var.account}"
environment = "${var.environment}"
Number = "0"
}
}
resource "aws_volume_attachment" "ebs_backup_attachment" {
depends_on = ["aws_instance.mongo"]
device_name = "/dev/xvdb"
volume_id = "${aws_ebs_volume.backup_disk01.id}"
instance_id = "${aws_instance.mongo.id}"
}
Terraform plan output:
-/+ module.mongodb_feed_shaper.aws_instance.mongo
ami: "ami-402f1a33" => "ami-402f1a33"
associate_public_ip_address: "true" => "true"
availability_zone: "eu-west-1a" => "<computed>"
disable_api_termination: "true" => "false"
ebs_block_device.#: "2" => "1"
ebs_block_device.3846643179.delete_on_termination: "true" => "true"
ebs_block_device.3846643179.device_name: "/dev/xvdh" => "/dev/xvdh"
ebs_block_device.3846643179.encrypted: "false" => "<computed>"
ebs_block_device.3846643179.iops: "240" => "<computed>"
ebs_block_device.3846643179.snapshot_id: "" => "<computed>"
ebs_block_device.3846643179.volume_size: "80" => "80"
ebs_block_device.3846643179.volume_type: "gp2" => "gp2"
ebs_block_device.3905984573.delete_on_termination: "false" => "false"
ebs_block_device.3905984573.device_name: "/dev/xvdb" => "" (forces new resource)
ephemeral_block_device.#: "0" => "<computed>"
iam_instance_profile: "mongodb-prod-cobra-ec2-profile" => "mongodb-prod-cobra-ec2-profile"
instance_state: "running" => "<computed>"
instance_type: "m4.large" => "m4.large"
ipv6_address_count: "0" => "<computed>"
ipv6_addresses.#: "0" => "<computed>"
key_name: "common_ssh_key" => "common_ssh_key"
network_interface.#: "0" => "<computed>"
network_interface_id: "eni-ab43f5f8" => "<computed>"
placement_group: "" => "<computed>"
primary_network_interface_id: "eni-ab43f5f8" => "<computed>"
private_dns: "ip-10-100-0-24.eu-west-1.compute.internal" => "<computed>"
private_ip: "10.100.0.24" => "<computed>"
[snip sensitive output]
root_block_device.#: "1" => "1"
root_block_device.0.delete_on_termination: "true" => "true"
root_block_device.0.iops: "0" => "<computed>"
root_block_device.0.volume_size: "20" => "20"
root_block_device.0.volume_type: "standard" => "standard"
security_groups.#: "0" => "<computed>"
source_dest_check: "true" => "true"
subnet_id: "subnet-21735679" => "subnet-21735679"
tags.%: "3" => "3"
tags.Name: "mongodb-01 prod cobra" => "mongodb-01 prod cobra"
tags.Number: "0" => "0"
tags.environment: "prod" => "prod"
tenancy: "default" => "<computed>"
user_data: "0c6737987ffb1a0721c6867491ab21e6ffdddb71" => "0c6737987ffb1a0721c6867491ab21e6ffdddb71"
volume_tags.%: "1" => "<computed>"
vpc_security_group_ids.#: "2" => "2"
vpc_security_group_ids.2549435434: "sg-2d844454" => "sg-2d844454"
vpc_security_group_ids.626344498: "sg-2c844455" => "sg-2c844455"
-/+ module.mongodb_feed_shaper.aws_volume_attachment.ebs_backup_attachment
device_name: "/dev/xvdb" => "/dev/xvdb"
force_detach: "" => "<computed>"
instance_id: "i-016d60576a336e65d" => "${aws_instance.mongo.id}" (forces new resource)
skip_destroy: "" => "<computed>"
volume_id: "vol-06ec93f696b689882" => "vol-06ec93f696b689882"
Additionally, when I changed instance_id on the volume_attachment resource from a variable (computed) to a static instance ID, the forcing of a new aws_instance remained:
ebs_block_device.3905984573.device_name: "/dev/xvdb" => "" (forces new resource)
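For reference, that test looked roughly like this (a sketch; the literal instance ID is the one from the plan output above):

resource "aws_volume_attachment" "ebs_backup_attachment" {
  depends_on = ["aws_instance.mongo"]

  device_name = "/dev/xvdb"
  volume_id   = "${aws_ebs_volume.backup_disk01.id}"

  # Hardcoded instead of "${aws_instance.mongo.id}" for this test only.
  instance_id = "i-016d60576a336e65d"
}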
Hope that helps.
The following workaround worked for me: add this to the aws_instance resource:

lifecycle {
  ignore_changes = ["ebs_block_device"]
}
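Applied to the mongo example above, the workaround looks roughly like this (a sketch that keeps only the relevant arguments):

resource "aws_instance" "mongo" {
  # ... other arguments as in the example above ...

  ebs_block_device {
    device_name           = "/dev/xvdh"
    volume_size           = "${var.additional_disk_size}"
    volume_type           = "gp2"
    delete_on_termination = true
  }

  lifecycle {
    # Ignore drift on ebs_block_device so the spurious device_name diff
    # no longer forces the instance to be replaced.
    ignore_changes = ["ebs_block_device"]
  }
}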
Hi @gstlt,
I'm gonna check on that, but I'm afraid that will introduce a new issue. If my EBS is detached/attached because of this bug, it may result in I/O errors as I use those volumes to store data for my stateful stack.
Regards,
Thanks @notuscloud
I see. Maybe you should then add ignore_changes on the EBS volume attachment too, as seen here.
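A rough sketch of that idea, assuming the attribute to ignore on the attachment is instance_id (the attribute that forces the replacement in the plans above):

resource "aws_volume_attachment" "ebs_backup_attachment" {
  device_name = "/dev/xvdb"
  volume_id   = "${aws_ebs_volume.backup_disk01.id}"
  instance_id = "${aws_instance.mongo.id}"

  lifecycle {
    # Assumption: ignoring drift on instance_id keeps the attachment from being
    # re-created when the referenced instance ID shows up as recomputed.
    ignore_changes = ["instance_id"]
  }
}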
For me, the change on the instance resource was enough.
Good luck!
Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.
If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!