Hi,
After upgrading Terraform from 0.6.9 to 0.6.11, I can no longer manage my infrastructure.
After downgrading back to 0.6.9 it works fine.
I get this error:
* aws_launch_configuration.ppa_lc_spot1: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
Here is my config:
resource "aws_launch_configuration" "ppa_lc_spot1" {
  image_id             = "${var.ppa_ami}"
  instance_type        = "${var.ppa_instance_type}"
  name_prefix          = "spot - ${var.ppa_instance_type} - "
  key_name             = "${var.ppa_key_pair}"
  spot_price           = "0.08"
  security_groups      = ["${aws_security_group.ppa_sg.id}"]
  iam_instance_profile = "${aws_iam_instance_profile.ppa_iam_profile.id}"
  enable_monitoring    = false
  user_data            = "${file("pricepanel_agent_boot.sh")}"

  ebs_block_device {
    device_name           = "/dev/sda1"
    volume_type           = "gp2"
    volume_size           = 12
    delete_on_termination = true
  }

  lifecycle {
    create_before_destroy = true
  }
}
I ran into the same problem when I tried to destroy the infrastructure.
Error refreshing state: 1 error(s) occurred:
* aws_launch_configuration.sample: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
Error waiting for Cmd exit status 1
Same problem, using version 0.6.12. Related to #4481?
Error refreshing state: 1 error(s) occurred:
* aws_launch_configuration.elasticsearch: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
Having the same issue. I don't have a root_block_device specified anywhere, just an ebs_block_device.
Same problem here with 0.6.15; it works with 0.6.9.
FYI: I created one resource with 0.6.9, then updated to 0.6.15, then added two more resources and got this error, but only on the new resources.
Error refreshing state: 2 error(s) occurred:
* aws_launch_configuration.waypoint-ie: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
* aws_launch_configuration.waypoint-va: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
All three have the same definition.
ebs_block_device = {
  device_name           = "/dev/sda1"
  volume_size           = "30"
  volume_type           = "gp2"
  delete_on_termination = false
}
Same here. Now I can't do a terraform destroy and have to manually delete stuff. Ugh.
Error refreshing state: 1 error(s) occurred:
Edit: I edited the terraform.tfstate file and removed the aws_launch_configuration.admin entry; after that I was able to run 'terraform destroy'.
I know why this is happening, but I don't have a fix I'm comfortable with yet.
Based on this: https://aws.amazon.com/blogs/aws/new-encrypted-ebs-boot-volumes/
and this: http://docs.aws.amazon.com/sdk-for-go/api/service/autoscaling.html#type-Ebs
it is clear that root devices _can_ be encrypted, so the API returns an "Encrypted" flag (here false) on these devices. When readBlockDevicesFromLaunchConfiguration parses the EBS structures, it sees that flag and sets it, even on the root device.
However, when readLCBlockDevices tries to put the resulting structures into the Schema Set, it chokes because the "root_block_device" schema has no "encrypted" attribute.
Thus: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
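The mechanism can be sketched in a few lines of self-contained Go. To be clear, this is not the actual Terraform source: the schema map and the setAttr helper are invented stand-ins for the real Schema machinery, just to show why an undeclared attribute produces the "Invalid address to set" error.

```go
package main

import "fmt"

// Invented stand-in for the root_block_device schema: it declares no
// "encrypted" attribute, unlike the ebs_block_device schema.
var rootBlockDeviceSchema = map[string]bool{
	"delete_on_termination": true,
	"iops":                  true,
	"volume_size":           true,
	"volume_type":           true,
}

// setAttr mimics the Schema Set rejecting an address that the
// schema does not declare.
func setAttr(block string, index int, key string, schemaKeys map[string]bool) error {
	if !schemaKeys[key] {
		return fmt.Errorf("Invalid address to set: [%q %q %q]", block, fmt.Sprint(index), key)
	}
	return nil
}

func main() {
	// The API reports Encrypted=false even on the root device, so the
	// reader tries to store it -- and the schema rejects the address.
	fmt.Println(setAttr("root_block_device", 0, "encrypted", rootBlockDeviceSchema))
}
```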
So, my initial workaround was simply to add that attribute, borrowing it from the "ebs_volume" schema. That works nicely to prevent the error (hint, hint), but also allows the encrypted attribute to be specified for a root_block_device in the config file. That's not really legal, since the encrypted status of a boot device should be set based on the snapshot it's using, not a flag.
In fact, based on various issues and PRs: #5360 #6428 , and the docs stating:
encrypted - (Optional) Whether the volume should be encrypted or not. Do not use this option if you are using snapshot_id as the encrypted flag will be determined by the snapshot. (Default: false).
I think that the use of the encrypted flag is pretty messy right now. There should be an either/or with snapshot_id for non-root EBS volume creation, and root volumes should respect the encrypted attribute but not be allowed to set it in Terraform configs.
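As a sketch of what that either/or could look like (the function name and signature here are hypothetical, not an existing Terraform helper):

```go
package main

import (
	"errors"
	"fmt"
)

// validateEBSBlockDevice is an invented validation helper: for non-root
// EBS volumes, "encrypted" and "snapshot_id" would be mutually
// exclusive, since a snapshot already determines the encryption status.
func validateEBSBlockDevice(encryptedSet bool, snapshotID string) error {
	if encryptedSet && snapshotID != "" {
		return errors.New(`"encrypted" conflicts with "snapshot_id": the snapshot determines encryption`)
	}
	return nil
}

func main() {
	fmt.Println(validateEBSBlockDevice(true, "snap-0123abcd")) // conflict: non-nil error
	fmt.Println(validateEBSBlockDevice(true, ""))              // fresh volume: ok (nil)
}
```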
I need to look back through the tricks of Schema to see if I can write that, or someone closer to the project may be able to do it more quickly. I'll see what I can do.
Hmm, looks like resource_aws_instance handles this completely differently, by shortcutting out of the structure read on the root device.
In readBlockDevicesFromInstance, resource_aws_instance simply skips those attributes for root devices.
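A rough, self-contained sketch of that skip pattern (the types and function here are invented for illustration; the real code lives in resource_aws_instance): attributes the root_block_device schema doesn't declare are simply never copied for the root device, so there's nothing invalid left to set.

```go
package main

import "fmt"

// ebsDevice is a simplified stand-in for the AWS SDK's block device
// mapping structure.
type ebsDevice struct {
	DeviceName          string
	Encrypted           bool
	SnapshotID          string
	VolumeSize          int
	DeleteOnTermination bool
}

// readBlockDevices splits devices into a root map and non-root maps,
// omitting Encrypted and SnapshotID for the root device so they never
// reach the root_block_device schema.
func readBlockDevices(devices []ebsDevice, rootName string) (root map[string]interface{}, ebs []map[string]interface{}) {
	for _, d := range devices {
		m := map[string]interface{}{
			"volume_size":           d.VolumeSize,
			"delete_on_termination": d.DeleteOnTermination,
		}
		if d.DeviceName == rootName {
			// Root device: skip "encrypted" and "snapshot_id", since
			// the root_block_device schema declares neither.
			root = m
		} else {
			m["device_name"] = d.DeviceName
			m["encrypted"] = d.Encrypted
			m["snapshot_id"] = d.SnapshotID
			ebs = append(ebs, m)
		}
	}
	return root, ebs
}

func main() {
	devices := []ebsDevice{
		{DeviceName: "/dev/sda1", Encrypted: false, VolumeSize: 12, DeleteOnTermination: true},
		{DeviceName: "/dev/sdb", Encrypted: true, SnapshotID: "snap-0123abcd", VolumeSize: 100},
	}
	root, ebs := readBlockDevices(devices, "/dev/sda1")
	fmt.Println(root) // no "encrypted" key, so nothing invalid to set later
	fmt.Println(ebs)
}
```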
Just ran into this issue myself. Is there a workaround? Adding an encrypted attribute still doesn't fix it. This config:
ebs_block_device = {
  device_name           = "/dev/xvda"
  volume_size           = 8
  volume_type           = "gp2"
  delete_on_termination = true
  encrypted             = false
}
Still outputs:
* aws_launch_configuration.launch_router: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
I'm not quite getting the 'hint/hint'.
Sorry, the hint was about the internal implementation. There is no workaround yet without patching TF source, because it's a state parsing problem in Terraform. I believe the bug is tickled by using "ebs_block_device" to manage your root device. Once that's set with "encrypted = false" in the state, you're going to get the error.
It _may_ be possible to change that to "root_block_device" and hack the state file to match, but I haven't tried it. Back the state up first, and dig in there at your own risk. I'm going to work on a proper PR right now, mimicking the behavior of resource_aws_instance, which doesn't suffer from this problem in parsing EBS volume state.
However, thinking about this over the last day or so makes me wonder about the "encrypted" keyword, and whether it's really doing what it's supposed to. I'll run some practical tests as well.
(as an aside, I'm not speaking for or affiliated with Hashicorp. Just volunteering here, as we've seen this bug as well.)
That PR will prevent the problem seen here, and prompt the user to fix an incorrect root_block_device declared as an ebs_block_device. (Only during an apply, I'm afraid. It's not caught in a plan)
There are some other safety checks I'd like to add, but implementation issues were getting in my way.
Hey all – I'm trying to reproduce this problem but am having trouble; I'm trying to figure out whether a patch is still needed.
I tried creating a Launch Configuration with very simple parameters in v0.6.9 and then upgrading to v0.6.16, and had no issue. Here's my config:
resource "aws_launch_configuration" "foobar" {
  image_id      = "ami-dfc39aef"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }

  ebs_block_device = {
    device_name           = "/dev/sda1"
    volume_size           = "20"
    volume_type           = "gp2"
    delete_on_termination = false
  }
}
I created it with v0.6.9 and then upgraded to the latest release, and plan, apply, taint, destroy, and create all worked as expected. If this is still a problem, can you help me reproduce it?
I was able to reproduce it by running with 0.6.15, then it persisted with 0.6.16.
Like you, I wasn't able to see it now with clean state in 0.6.16. I suspect that jumping from 0.6.9 to 0.6.16 never gets an incorrect "Encrypted" field in the state.
I think that #6452 partially addressed this issue in 0.6.16.
However, even on 0.6.16 I still see needless rebuilds initiated on the launch config.
Here's my config (us-west-2):
resource "aws_launch_configuration" "tflc" {
  image_id      = "ami-c229c0a2"
  instance_type = "t2.micro"

  ebs_block_device {
    device_name = "/dev/xvda"
    volume_type = "gp2"
    volume_size = 25
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "pf-bugs" {
  availability_zones = ["us-west-2a"]

  # this corresponds to a subnet in my VPC. Change/delete as necessary
  vpc_zone_identifier = ["subnet-12345678"]

  name                      = "bugs-n-stuff"
  max_size                  = 1
  min_size                  = 1
  health_check_grace_period = 300
  desired_capacity          = 1
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.tflc.name}"
}
So, on an all-0.6.16 run the creation works correctly, but the subsequent plan (with no changes) says:
~ aws_autoscaling_group.pf-bugs
    launch_configuration: "bugscwlh5cr7g5gd7dq6f6bqt2h6by" => "${aws_launch_configuration.tflc.name}"

-/+ aws_launch_configuration.tflc
    associate_public_ip_address: "false" => "false"
    ebs_block_device.#: "0" => "1"
    ebs_block_device.3935708772.delete_on_termination: "" => "true" (forces new resource)
    ebs_block_device.3935708772.device_name: "" => "/dev/xvda" (forces new resource)
    ebs_block_device.3935708772.encrypted: "" => "<computed>" (forces new resource)
    ebs_block_device.3935708772.iops: "" => "<computed>" (forces new resource)
    ebs_block_device.3935708772.snapshot_id: "" => "<computed>" (forces new resource)
    ebs_block_device.3935708772.volume_size: "" => "25" (forces new resource)
    ebs_block_device.3935708772.volume_type: "" => "gp2" (forces new resource)
    ebs_optimized: "false" => "<computed>"
    enable_monitoring: "true" => "true"
    image_id: "ami-c229c0a2" => "ami-c229c0a2"
    instance_type: "t2.micro" => "t2.micro"
    key_name: "paul-forman" => "paul-forman"
    name: "bugscwlh5cr7g5gd7dq6f6bqt2h6by" => "<computed>"
    name_prefix: "bugs" => "bugs"
    root_block_device.#: "1" => "<computed>"
    security_groups.#: "1" => "1"
    security_groups.2034181980: "sg-8ed5cbea" => "sg-8ed5cbea"
Since the ebs_block_device and root_block_device are getting crossed up, it's forcing a new resource for no reason.
With 0.6.15, that config fails to instantiate at all:
...
aws_launch_configuration.tflc: Still creating... (20s elapsed)
aws_launch_configuration.tflc: Still creating... (30s elapsed)
Error applying plan:
1 error(s) occurred:
* aws_launch_configuration.tflc: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
Using the partial state from that apply with 0.6.16 also fails:
<paul.forman@pforman(local):~/tmp/tf-test> terraform --version
Terraform v0.6.16
<paul.forman@pforman(local):~/tmp/tf-test> terraform plan
Refreshing Terraform state prior to plan...
aws_launch_configuration.tflc: Refreshing state... (ID: bugsiucohnm2nvgnna4wghjnixy36u)
Error refreshing state: 1 error(s) occurred:
* aws_launch_configuration.tflc: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
Even a destroy fails (0.6.16 again):
<paul.forman@pforman(local):~/tmp/tf-test> terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_launch_configuration.tflc: Refreshing state... (ID: bugsiucohnm2nvgnna4wghjnixy36u)
Error refreshing state: 1 error(s) occurred:
* aws_launch_configuration.tflc: Invalid address to set: []string{"root_block_device", "0", "encrypted"}
Using my patch in #6512 I can destroy the existing resources without an "address to set" error, as it can read back and ignore the Encrypted field. It also prevents the resource rebuilds by disallowing using an ebs_block_device definition for the root device.
Thanks for the attention to this issue. I'm happy to help test, identify, and even rework that PR if needed.
Hey all – @pforman patched this in #6512 and I just merged it! Thanks for reporting
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.