Terraform version 0.7.1
During refactoring to the latest Terraform version I found that every time I run "terraform plan" on refactored configs with the new "vpc_security_group_ids" parameter instead of "security_groups", my resources show as changed even though they have already been updated to use group IDs:
~ module.composition_ec2_ebs_s3_route53.aws_instance.aws_instance.nodes.0
vpc_security_group_ids.#: "0" => "2"
vpc_security_group_ids.4007540670: "" => "sg-xxxxxxxx"
vpc_security_group_ids.4178067539: "" => "sg-xxxxxxxx"
~ module.composition_ec2_ebs_s3_route53.aws_instance.aws_instance.nodes.1
vpc_security_group_ids.#: "0" => "2"
vpc_security_group_ids.4007540670: "" => "sg-xxxxxxxx"
vpc_security_group_ids.4178067539: "" => "sg-xxxxxxxx"
~ module.composition_ec2_ebs_s3_route53.aws_instance.aws_instance.nodes.2
vpc_security_group_ids.#: "0" => "2"
vpc_security_group_ids.4007540670: "" => "sg-xxxxxxxx"
vpc_security_group_ids.4178067539: "" => "sg-xxxxxxxx"
Hello –
I have a few questions that can help me narrow this down:
Are you by chance using the names of the groups in your configuration, instead of the IDs? Thanks!
@catsby hey)
Yes. The reason I need to refactor is that I can't use security groups by name, since I'm on a default VPC and it throws errors, so I refactored to use "vpc_security_group_ids" with security group IDs instead. I'll paste part of the code.
So, I have the variable security_group = "sg-xxxxxxxx,${aws_security_group.security_group.id}"
variable "ami" {}
variable "availability_zones" {}
variable "count" {}
variable "iam_instance_profiles" {}
variable "instance_type" {}
variable "key_name" {}
variable "names" {}
variable "security_group" {}
variable "ssh_user" {}
variable "volume_attach_scripts" {}
resource "aws_instance" "nodes" {
  ami                    = "${var.ami}"
  availability_zone      = "${element(split(",", var.availability_zones), count.index)}"
  count                  = "${var.count}"
  iam_instance_profile   = "${element(split(",", var.iam_instance_profiles), count.index)}"
  instance_type          = "${var.instance_type}"
  key_name               = "${var.key_name}"
  vpc_security_group_ids = ["${split(",", var.security_group)}"]

  tags {
    Name    = "${element(split(",", var.names), count.index)}"
    sshUser = "${var.ssh_user}"
  }

  user_data = "${element(split(",", var.volume_attach_scripts), count.index)}"
}
output "instance_ids" {
  value = "${join(",", aws_instance.nodes.*.id)}"
}

output "private_dns" {
  value = "${join(",", aws_instance.nodes.*.private_dns)}"
}

output "private_ips" {
  value = "${join(",", aws_instance.nodes.*.private_ip)}"
}

output "public_dns" {
  value = "${join(",", aws_instance.nodes.*.public_dns)}"
}

output "public_ips" {
  value = "${join(",", aws_instance.nodes.*.public_ip)}"
}
If you run your plan with the TF_LOG=1 env set, does that give any clues?
$ TF_LOG=1 terraform plan
Confusingly enough, I believe using security_groups when using the Default VPC is OK. I wasn't expecting using vpc_security_group_ids to be a problem, though.
No, debug seems to show nothing, and I can't use security groups by name with a VPC, I believe due to a backwards incompatibility noted in the CHANGELOG for 0.7.0. What I noticed is that if I use vpc_security_group_ids and then check "terraform show", there are 2 security groups listed by name but there aren't any vpc_security_group_ids:
~ module.composition_ec2_ebs.aws_instance.aws_instance.nodes
vpc_security_group_ids.#: "0" => "2"
vpc_security_group_ids.xxxxxxxxxx: "" => "sg-xxxxxxxx"
vpc_security_group_ids.xxxxxxxxxx: "" => "sg-xxxxxxxx"
Plan: 0 to add, 1 to change, 0 to destroy.
terraform apply:
vpc_security_group_ids.#: "0" => "2"
vpc_security_group_ids.xxxxxxxxxxx: "" => "sg-xxxxxxxx"
vpc_security_group_ids.xxxxxxxxxxx: "" => "sg-xxxxxxxx"
module.composition_ec2_ebs.aws_instance.aws_instance.nodes: Modifications complete
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
terraform plan:
~ module.composition_ec2_ebs.aws_instance.aws_instance.nodes
vpc_security_group_ids.#: "0" => "2"
vpc_security_group_ids.xxxxxxxxxxx: "" => "sg-xxxxxxxx"
vpc_security_group_ids.xxxxxxxxxxx: "" => "sg-xxxxxxxx"
Plan: 0 to add, 1 to change, 0 to destroy.
terraform show:
module.composition_ec2_ebs.aws_instance.aws_instance.nodes:
id = xxxxxxxxx
ami = xxxxxxxxx
availability_zone = eu-west-1a
disable_api_termination = false
ebs_block_device.# = 1
ebs_block_device.xxxxxxxxxxxx.delete_on_termination = false
ebs_block_device.xxxxxxxxxxxx.device_name = /dev/xvdf
ebs_block_device.xxxxxxxxxxxx.encrypted = false
ebs_block_device.xxxxxxxxxxxx.iops = 150
ebs_block_device.xxxxxxxxxxxx.snapshot_id =
ebs_block_device.xxxxxxxxxxxx.volume_size = 50
ebs_block_device.xxxxxxxxxxxx.volume_type = gp2
ebs_optimized = false
ephemeral_block_device.# = 0
iam_instance_profile = gitlab-runner
instance_state = running
instance_type = t2.small
key_name = general
monitoring = false
network_interface_id = eni-xxxxxxxx
private_dns = xxxxxxxxxxxxxxx
private_ip = xxxxxxxxxxx
public_dns = xxxxxxxxxxxx
public_ip = xxxxxxxxxx
root_block_device.# = 1
root_block_device.0.delete_on_termination = true
root_block_device.0.iops = 100
root_block_device.0.volume_size = 8
root_block_device.0.volume_type = gp2
security_groups.# = 2
security_groups.xxxxxxxxxxxx = gitlab-runner
security_groups.xxxxxxxxxxxx = default
source_dest_check = true
subnet_id = subnet-xxxxxxxx
tags.% = 2
tags.Name =
tags.sshUser = ec2-user
tenancy = default
user_data =
vpc_security_group_ids.# = 0
any updates?
Hey @antimack I apologize for the silence here.
I'm taking another look now and I'm not yet seeing how this is happening. I'm curious whether this is just an issue with migrating from the old security_groups format to the new vpc_security_group_ids, though I don't believe we've heard of this before.
If you're still having this issue I would recommend making a backup of your statefile, and then removing this section:
security_groups.# = 2
security_groups.xxxxxxxxxxxx = gitlab-runner
security_groups.xxxxxxxxxxxx = default
And then doing a refresh with $ terraform refresh <path to configuration>. After that, the state should be updated with vpc_security_group_ids information. Is it possible for you to try this?
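Concretely, the workflow I have in mind looks something like this (assuming the default local terraform.tfstate path):

```
$ cp terraform.tfstate terraform.tfstate.backup    # back up the state first
# edit terraform.tfstate and delete the stale security_groups.* attribute lines
$ terraform refresh <path to configuration>
$ terraform plan    # the vpc_security_group_ids diff should now be gone
```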
I apologize again for the long silence here. Please let me know if this is still troubling you.
Thank you, will try after my vacation!)
I'm going to close this issue for now then. Please let me know if you're still hitting it after returning from vacation! Enjoy 😄
I'm having the same issue with v0.10.7.
I tried your suggestion of deleting the security_groups.* entries and refreshing, but they're simply recreated again. I also tried renaming every security_groups.* to vpc_security_group_ids.*, but they're also reverted after a refresh.
@frosas same here with v0.10.7.
Still facing this issue. v0.10.7.
Please reopen this issue. @catsby
Wrong repo. Also, it's been reported multiple times on the correct one: