Running v0.6.11, I noticed that instances with an ebs_block_device are recreated every time I run terraform apply, even if there were no relevant changes. The block looks like this:
ebs_block_device {
  device_name = "${var.es_ebs_vol.device_name}"
  volume_type = "${var.es_ebs_vol.type}"
  volume_size = "${var.es_ebs_vol.size}"
}
This seems similar to #913 but that was resolved some time ago. Any ideas?
+1
Having the same issue as @bwhaley on 0.6.11. If it helps, I had the issue on 0.6.9 as well. I was hoping that an upgrade to 0.6.11 would fix it but it has not :(
I'm trying to figure this out too in #4786.
Does your AMI specify an EBS snapshot to mount as a root device? This is my problem when trying to launch an ECS container.
Yes - but isn't the root volume for all EBS-backed AMIs started from an EBS snapshot?
@bwhaley I guess that's the case.
So this is only happening on my t2 based instances that happen to be running Amazon Linux. This might just be a coincidence.
Interesting - in my case it's also Amazon Linux on T2 instance types.
*** Forget that, I needed to use "root_block_device". My mistake.
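For anyone else making the same mistake: the volume that comes from the AMI's root snapshot is managed with root_block_device, not ebs_block_device. A minimal sketch (the AMI id, type, and sizes here are illustrative):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxx" # placeholder
  instance_type = "t2.micro"

  # The AMI's root volume is configured here; declaring it as an
  # ebs_block_device instead makes Terraform see a mismatch and
  # force a new resource on every plan.
  root_block_device {
    volume_type = "gp2"
    volume_size = 50
  }
}
```

ebs_block_device should only describe additional volumes attached beyond the root device.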
+1
Seeing the same issue here across more than just t2 instances.
ebs_block_device {
  device_name = "/dev/xvda"
  volume_size = 128
}
Same behaviour on 0.6.9 through 0.6.12; possible change on the AWS side?
+1 I'm hitting this with 0.6.12
Launching an AMI that's a t2.micro with 2 EBS volumes, created using Packer
I specify a "root_block_device" and "data_block_device" mapping in the terraform template.
It's causing it to mark the data volume as needing to be re-created every time.
I am running into a similar issue. The plan keeps pointing to the following.
A couple of observations: I am not using iops, yet it seems to compute iops.
I did not change delete_on_termination, yet it thinks there is a change:
ebs_block_device.3796809015.delete_on_termination: "true" => "1" (forces new resource)
ebs_block_device.3796809015.device_name: "/dev/xvdl" => "/dev/xvdl" (forces new resource)
ebs_block_device.3796809015.encrypted: "false" => "
ebs_block_device.3796809015.iops: "150" => "
ebs_block_device.3796809015.snapshot_id: "" => "
ebs_block_device.3796809015.volume_size: "50" => "50" (forces new resource)
ebs_block_device.3796809015.volume_type: "gp2" => "gp2" (forces new resource)
I'm also hitting the same issue here:
> terraform plan output
...
ami: "ami-d3a04fbc" => "ami-d3a04fbc"
associate_public_ip_address: "false" => "false"
availability_zone: "eu-central-1a" => "<computed>"
ebs_block_device.#: "3" => "3"
ebs_block_device.1376874904.delete_on_termination: "true" => "false"
ebs_block_device.1376874904.device_name: "/dev/xvdf" => ""
ebs_block_device.1494882292.delete_on_termination: "true" => "false"
ebs_block_device.1494882292.device_name: "/dev/xvdh" => ""
ebs_block_device.1712666200.delete_on_termination: "true" => "false"
ebs_block_device.1712666200.device_name: "/dev/xvdg" => ""
ebs_block_device.3846643179.delete_on_termination: "" => "true" (forces new resource)
ebs_block_device.3846643179.device_name: "" => "/dev/xvdh" (forces new resource)
ebs_block_device.3846643179.encrypted: "" => "<computed>" (forces new resource)
ebs_block_device.3846643179.iops: "" => "100" (forces new resource)
ebs_block_device.3846643179.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.3846643179.volume_size: "" => "10" (forces new resource)
ebs_block_device.3846643179.volume_type: "" => "io1" (forces new resource)
ebs_block_device.3994770134.delete_on_termination: "" => "true" (forces new resource)
ebs_block_device.3994770134.device_name: "" => "/dev/xvdg" (forces new resource)
ebs_block_device.3994770134.encrypted: "" => "<computed>" (forces new resource)
ebs_block_device.3994770134.iops: "" => "250" (forces new resource)
ebs_block_device.3994770134.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.3994770134.volume_size: "" => "25" (forces new resource)
ebs_block_device.3994770134.volume_type: "" => "io1" (forces new resource)
ebs_block_device.4023988449.delete_on_termination: "" => "true" (forces new resource)
ebs_block_device.4023988449.device_name: "" => "/dev/xvdf" (forces new resource)
ebs_block_device.4023988449.encrypted: "" => "<computed>" (forces new resource)
ebs_block_device.4023988449.iops: "" => "2000" (forces new resource)
ebs_block_device.4023988449.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.4023988449.volume_size: "" => "800" (forces new resource)
ebs_block_device.4023988449.volume_type: "" => "io1" (forces new resource)
...
root_block_device.#: "1" => "<computed>"
security_groups.#: "0" => "<computed>"
Although it's already in the terraform.tfstate
"aws_instance.mongod": {
  "type": "aws_instance",
  "primary": {
    "id": "i-32f24a8f",
    "attributes": {
      "ami": "ami-d3a04fbc",
      "associate_public_ip_address": "false",
      "availability_zone": "eu-central-1a",
      "disable_api_termination": "false",
      "ebs_block_device.#": "3",
      "ebs_block_device.1376874904.delete_on_termination": "true",
      "ebs_block_device.1376874904.device_name": "/dev/xvdf",
      "ebs_block_device.1376874904.encrypted": "false",
      "ebs_block_device.1376874904.iops": "2000",
      "ebs_block_device.1376874904.snapshot_id": "snap-e726c30c",
      "ebs_block_device.1376874904.volume_size": "800",
      "ebs_block_device.1376874904.volume_type": "io1",
      "ebs_block_device.1494882292.delete_on_termination": "true",
      "ebs_block_device.1494882292.device_name": "/dev/xvdh",
      "ebs_block_device.1494882292.encrypted": "false",
      "ebs_block_device.1494882292.iops": "100",
      "ebs_block_device.1494882292.snapshot_id": "snap-398501d2",
      "ebs_block_device.1494882292.volume_size": "10",
      "ebs_block_device.1494882292.volume_type": "io1",
      "ebs_block_device.1712666200.delete_on_termination": "true",
      "ebs_block_device.1712666200.device_name": "/dev/xvdg",
      "ebs_block_device.1712666200.encrypted": "false",
      "ebs_block_device.1712666200.iops": "250",
      "ebs_block_device.1712666200.snapshot_id": "snap-b60e035e",
      "ebs_block_device.1712666200.volume_size": "25",
      "ebs_block_device.1712666200.volume_type": "io1",
      "ebs_optimized": "false",
      "ephemeral_block_device.#": "0",
      ...
Please tell me if you need more info about the issue.
I've solved my problem; just to inform you: I was building with AMIs that already had 3 EBS block devices, and then, in Terraform, I was provisioning them with cloud-init. That's why the terraform plan output shows 6 different ebs_block_device IDs, which is what was creating the trouble. So, on my side, the problem does not exist. Sorry for the confusion.
I'm having a similar issue. Mine seems to have something to do with the volumes being encrypted. When I run terraform apply, I get the following:
kms_key_id: "arn:aws:kms:us-east-1:<account_id>:key/<key_id>" => "<key_id (unchanged)>" (forces new resource)
As noted, the key did not change.
+1
+1 In Terraform v0.7 I am getting a similar error when encrypting devices in an aws_db_instance.
kms_key_id: "arn:aws:kms:us-west-2:
The key id is the same and has not changed.
Hmm, so the issue can be resolved by using the ARN instead of the key_id.
kms_key_id = "${aws_kms_key.key_name.key_id}" forces a new resource.
Change it to kms_key_id = "${aws_kms_key.key_name.arn}" and no new resource is created.
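In other words (a minimal sketch; aws_kms_key.key_name and the surrounding resource are illustrative, and the other required aws_db_instance arguments are elided):

```hcl
resource "aws_db_instance" "example" {
  # ... engine, instance_class, allocated_storage, etc. elided ...
  storage_encrypted = true

  # Forces a new resource on every plan, because AWS reports back the
  # full key ARN, which never matches the bare key id in the config:
  #   kms_key_id = "${aws_kms_key.key_name.key_id}"

  # Stable across plans, because the configured value matches what AWS stores:
  kms_key_id = "${aws_kms_key.key_name.arn}"
}
```

The same "it looks different even though nothing changed" symptom applies anywhere the provider normalizes a value to its ARN form.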
Same here; Terraform tried to re-create an EC2 instance that has an additional EBS volume.
ebs_block_device.#: "1" => "1"
ebs_block_device.1399095401.delete_on_termination: "true" => "false"
ebs_block_device.1399095401.device_name: "/dev/sdb" => ""
ebs_block_device.2576023345.delete_on_termination: "" => "true" (forces new resource)
ebs_block_device.2576023345.device_name: "" => "/dev/sdb" (forces new resource)
ebs_block_device.2576023345.encrypted: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.iops: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.volume_size: "" => "8" (forces new resource)
ebs_block_device.2576023345.volume_type: "" => "gp2" (forces new resource)
Inside terraform.tfstate:
"ebs_block_device.#": "1",
"ebs_block_device.1399095401.delete_on_termination": "true",
"ebs_block_device.1399095401.device_name": "/dev/sdb",
"ebs_block_device.1399095401.encrypted": "false",
"ebs_block_device.1399095401.iops": "100",
"ebs_block_device.1399095401.snapshot_id": "snap-9c8bf919",
"ebs_block_device.1399095401.volume_size": "8",
"ebs_block_device.1399095401.volume_type": "gp2",
It stops re-creating instance if I change my .tf file from:
ebs_block_device {
  device_name           = "/dev/sdb"
  volume_type           = "gp2"
  volume_size           = 8
  delete_on_termination = true
}
to
ebs_block_device {
  device_name           = "/dev/sdb"
  volume_type           = "gp2"
  volume_size           = 8
  delete_on_termination = true
  snapshot_id           = "snap-9c8bf919"
}
Terraform 0.7.11 here. I use a remote statefile (S3). When I run plan after apply, Terraform reports:
ebs_block_device.#: "0" => "1"
ebs_block_device.3239300295.delete_on_termination: "" => "false" (forces new resource)
ebs_block_device.3239300295.device_name: "" => "/dev/sda1" (forces new resource)
ebs_block_device.3239300295.encrypted: "" => "<computed>" (forces new resource)
ebs_block_device.3239300295.iops: "" => "<computed>" (forces new resource)
ebs_block_device.3239300295.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.3239300295.volume_size: "" => "100" (forces new resource)
ebs_block_device.3239300295.volume_type: "" => "gp2" (forces new resource)
Statefile has the following for the instance:
"root_block_device.#": "1",
"root_block_device.0.delete_on_termination": "false",
"root_block_device.0.iops": "300",
"root_block_device.0.volume_size": "100",
"root_block_device.0.volume_type": "gp2",
Config:
ebs_block_device {
  device_name           = "/dev/sda1"
  volume_type           = "gp2"
  volume_size           = "${var.instance_volume_size}"
  delete_on_termination = false
}
I'm also getting this with Terraform 0.7.13.
My instance definition looks like this:
resource "aws_instance" "app_instance" {
  ami           = "${data.aws_ami.ecs_optimized.id}"
  instance_type = "${var.app_instance["instance_type"]}"
  count         = "${var.app_instance["instance_count"]}"

  # Some storage
  ebs_block_device {
    device_name = "/dev/sdb"
    volume_size = 50
    volume_type = "gp2"
  }
  ebs_block_device {
    device_name = "/dev/sdc"
    volume_size = 50
    volume_type = "gp2"
  }
  ebs_block_device {
    device_name = "/dev/sdd"
    volume_size = 50
    volume_type = "gp2"
  }
  ebs_block_device {
    device_name = "/dev/sde"
    volume_size = 50
    volume_type = "gp2"
  }
}
I get this every time I run plan:
ebs_block_device.#: "5" => "4"
ebs_block_device.2554893574.delete_on_termination: "true" => "true" (forces new resource)
ebs_block_device.2554893574.device_name: "/dev/sdc" => "/dev/sdc" (forces new resource)
ebs_block_device.2554893574.encrypted: "false" => "<computed>"
ebs_block_device.2554893574.iops: "150" => "<computed>"
ebs_block_device.2554893574.snapshot_id: "" => "<computed>"
ebs_block_device.2554893574.volume_size: "50" => "50" (forces new resource)
ebs_block_device.2554893574.volume_type: "gp2" => "gp2" (forces new resource)
[Other three EBS blocks show the same messages.]
One explanation is that the AMI I'm using (Amazon ECS Optimized, ami-6df8fe7a), defines two block devices. The output from aws ec2 describe-images --image-id ami-6df8fe7a:
{
  "Images": [
    {
      "VirtualizationType": "hvm",
      "Name": "amzn-ami-2016.09.c-amazon-ecs-optimized",
      "Hypervisor": "xen",
      "ImageOwnerAlias": "amazon",
      "EnaSupport": true,
      "SriovNetSupport": "simple",
      "ImageId": "ami-6df8fe7a",
      "State": "available",
      "BlockDeviceMappings": [
        {
          "DeviceName": "/dev/xvda",
          "Ebs": {
            "DeleteOnTermination": true,
            "SnapshotId": "snap-441eb8ad",
            "VolumeSize": 8,
            "VolumeType": "gp2",
            "Encrypted": false
          }
        },
        {
          "DeviceName": "/dev/xvdcz",
          "Ebs": {
            "DeleteOnTermination": true,
            "Encrypted": false,
            "VolumeSize": 22,
            "VolumeType": "gp2"
          }
        }
      ],
      "Architecture": "x86_64",
      "ImageLocation": "amazon/amzn-ami-2016.09.c-amazon-ecs-optimized",
      "RootDeviceType": "ebs",
      "OwnerId": "591542846629",
      "RootDeviceName": "/dev/xvda",
      "CreationDate": "2016-12-07T18:14:59.000Z",
      "Public": true,
      "ImageType": "machine",
      "Description": "Amazon Linux AMI 2016.09.c x86_64 ECS HVM GP2"
    }
  ]
}
The line ebs_block_device.#: "5" => "4" makes me think that one of the AMI-defined blocks is designated as the root block and that the other is considered a standard EBS block. But since that second block is not tracked in the configuration, every plan or apply sees only four managed blocks in my configuration while five blocks exist in AWS. So it resets all the blocks to the configuration definition.
The workaround seems to be to define the EBS devices of the AMI (beyond the first) in my configuration, by adding this:
ebs_block_device {
  device_name = "/dev/xvdcz"
  volume_size = 22
  volume_type = "gp2"
}
That fixes the issue, but I have to remember to update it if the AMI changes. Less than ideal, unfortunately, since I'm grabbing the latest.
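An alternative I haven't verified against this exact case: Terraform's lifecycle block can suppress the spurious diff entirely, at the cost of Terraform no longer managing those volumes at all:

```hcl
resource "aws_instance" "app_instance" {
  # ... ami, instance_type, ebs_block_device blocks as above ...

  lifecycle {
    # Ignore differences between the configured and actual
    # ebs_block_device sets, so AMI-supplied volumes don't
    # force recreation on every plan.
    ignore_changes = ["ebs_block_device"]
  }
}
```

The trade-off is that changes you do want (e.g. resizing a data volume in the config) would also be ignored, so this only makes sense if the volumes never change after launch.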
I'm trying to add tags to the existing EBS volumes that come with the AMI. My Terraform script is working. Now I need two things: how to avoid removing the EBS volume while terminating the EC2 node, and how to add tags to the existing EBS volumes.
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}

# Creation of EC2 node from AMI
resource "aws_instance" "devdbsql01" {
  ami                     = var.ec2_ami
  instance_type           = var.instance_types
  key_name                = var.key_name
  availability_zone       = var.availability_zone
  disable_api_termination = "true"

  # BEGIN Adding C Root Drive ************************
  root_block_device {
    #availability_zone = var.availability_zone
    volume_type           = var.root_volume_types
    volume_size           = var.root_volume_sizes
    delete_on_termination = var.delete_on_terminations
  }
  # END Adding C Root Drive **************************

  #vpc_id = var.vpc_security_group_idss
  subnet_id = var.subnet_ids # Adding existing subnet
  #security_group_ids = var.aws_security_group_name # Adding existing VPC
  tags = var.tags
}

# Creation of Security Group
# Security group for each server based on app – security group name example: shared-sqlserver-devdbsql01-dev-private-sg (appname-environment-public/private-sg)
resource "aws_security_group" "shared-sqlserver-devdbsql01-dev-private-sg" {
  name        = var.aws_security_group_name
  description = var.aws_security_group_description
  vpc_id      = var.vpc_security_group_idss

  ingress {
    # TLS (change to whatever ports you need)
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    # Please restrict your ingress to only necessary IPs and ports.
    # Opening to 0.0.0.0/0 can lead to security vulnerabilities.
    cidr_blocks = ["10.0.0.0/8"] # add your IP address here
  }

  tags = var.sgtags
}

resource "aws_network_interface_sg_attachment" "sg_attachment" {
  security_group_id    = aws_security_group.shared-sqlserver-devdbsql01-dev-private-sg.id
  network_interface_id = aws_instance.devdbsql01.primary_network_interface_id
}
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.