I observed this bug in both versions 0.5.1 and 0.5.3. The example output in this report is from 0.5.3.
Here's a security group I want to create:
provider "aws" {
region = "us-west-2"
}
resource "aws_vpc" "vpc-name-1" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = false
tags {
cloud-spec-name = "vpc-name-1"
}
}
resource "aws_security_group" "security-group-1" {
name = "security-group-1"
vpc_id = "${aws_vpc.vpc-name-1.id}"
ingress {
from_port = "111"
to_port = "111"
protocol = "tcp"
cidr_blocks = [
"10.30.0.0/16",
"10.20.0.0/16",
]
}
ingress {
from_port = "123"
to_port = "456"
protocol = "tcp"
cidr_blocks = [
"10.0.1.0/16",
"10.10.0.0/16",
]
}
egress {
from_port = "789"
to_port = "1011"
protocol = "udp"
cidr_blocks = [
"10.0.0.0/16",
]
}
tags {
cloud-spec-name = "security-group-1"
}
}
Terraform will create this security group no problem:
$ ../terraform-install/terraform apply
aws_vpc.vpc-name-1: Creating...
  cidr_block: "" => "10.0.0.0/16"
  default_network_acl_id: "" => "<computed>"
  default_security_group_id: "" => "<computed>"
  dhcp_options_id: "" => "<computed>"
  enable_dns_hostnames: "" => "0"
  enable_dns_support: "" => "1"
  main_route_table_id: "" => "<computed>"
  tags.#: "" => "1"
  tags.cloud-spec-name: "" => "vpc-name-1"
aws_vpc.vpc-name-1: Creation complete
aws_security_group.security-group-1: Creating...
  description: "" => "Managed by Terraform"
  egress.#: "" => "1"
  egress.2879914364.cidr_blocks.#: "" => "1"
  egress.2879914364.cidr_blocks.0: "" => "10.0.0.0/16"
  egress.2879914364.from_port: "" => "789"
  egress.2879914364.protocol: "" => "udp"
  egress.2879914364.security_groups.#: "" => "0"
  egress.2879914364.self: "" => "0"
  egress.2879914364.to_port: "" => "1011"
  ingress.#: "" => "2"
  ingress.1069758170.cidr_blocks.#: "" => "2"
  ingress.1069758170.cidr_blocks.0: "" => "10.0.1.0/16"
  ingress.1069758170.cidr_blocks.1: "" => "10.10.0.0/16"
  ingress.1069758170.from_port: "" => "123"
  ingress.1069758170.protocol: "" => "tcp"
  ingress.1069758170.security_groups.#: "" => "0"
  ingress.1069758170.self: "" => "0"
  ingress.1069758170.to_port: "" => "456"
  ingress.1765782756.cidr_blocks.#: "" => "2"
  ingress.1765782756.cidr_blocks.0: "" => "10.30.0.0/16"
  ingress.1765782756.cidr_blocks.1: "" => "10.20.0.0/16"
  ingress.1765782756.from_port: "" => "111"
  ingress.1765782756.protocol: "" => "tcp"
  ingress.1765782756.security_groups.#: "" => "0"
  ingress.1765782756.self: "" => "0"
  ingress.1765782756.to_port: "" => "111"
  name: "" => "security-group-1"
  owner_id: "" => "<computed>"
  tags.#: "" => "1"
  tags.cloud-spec-name: "" => "security-group-1"
  vpc_id: "" => "vpc-dbc24abe"
aws_security_group.security-group-1: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
However, Terraform always incorrectly thinks there is something about the security group it needs to update, even after running `terraform apply` multiple times.
$ ../terraform-install/terraform plan
Refreshing Terraform state prior to plan...
aws_vpc.vpc-name-1: Refreshing state... (ID: vpc-dbc24abe)
aws_security_group.security-group-1: Refreshing state... (ID: sg-38d7345c)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
~ aws_security_group.security-group-1
    ingress.1069758170.cidr_blocks.#: "0" => "2"
    ingress.1069758170.cidr_blocks.0: "" => "10.0.1.0/16"
    ingress.1069758170.cidr_blocks.1: "" => "10.10.0.0/16"
    ingress.1069758170.from_port: "" => "123"
    ingress.1069758170.protocol: "" => "tcp"
    ingress.1069758170.security_groups.#: "0" => "0"
    ingress.1069758170.self: "" => "0"
    ingress.1069758170.to_port: "" => "456"
    ingress.1765782756.cidr_blocks.#: "2" => "2"
    ingress.1765782756.cidr_blocks.0: "10.30.0.0/16" => "10.30.0.0/16"
    ingress.1765782756.cidr_blocks.1: "10.20.0.0/16" => "10.20.0.0/16"
    ingress.1765782756.from_port: "111" => "111"
    ingress.1765782756.protocol: "tcp" => "tcp"
    ingress.1765782756.security_groups.#: "0" => "0"
    ingress.1765782756.self: "0" => "0"
    ingress.1765782756.to_port: "111" => "111"
@catsby going to tag this as a provider issue - feel free to kick it to core if you decide it's not provider related.
Hey @kevinm416 –
If I read ipcalc correctly, the first cidr_block here in your second ingress rule is off:
ingress {
  from_port = "123"
  to_port   = "456"
  protocol  = "tcp"
  cidr_blocks = [
    "10.0.1.0/16",
    "10.10.0.0/16",
  ]
}
AWS is returning 10.0.1.0/16 as 10.0.0.0/16: with a /16 mask, the host bits in 10.0.1.0 are masked off to the canonical network address. The second ingress rule is the one triggering the perpetual diff, and that normalized value is what we're recording as the CIDR.
maybe @phinze can sanity check me there
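For reference, the config-side fix is to make every cidr_block canonical, i.e. the address must be the base network address for the given prefix length. A corrected version of that second ingress rule might look like this (assuming a /24 was intended for the 10.0.1.x range; use "10.0.0.0/16" instead if the whole /16 was meant):

ingress {
  from_port = "123"
  to_port   = "456"
  protocol  = "tcp"
  cidr_blocks = [
    "10.0.1.0/24",  # canonical: no host bits set for this prefix length
    "10.10.0.0/16",
  ]
}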
Ok, that makes sense. Thank you for the explanation.
I'm seeing this effect with 0.5.3 as well. We can reproduce this even if we only have 0.0.0.0/0 for cidr_blocks. So maybe there is another issue? /cc @catsby
@tisba do you have a sample configuration that demonstrates the issue?
I also added the terraform plan output to the gist.
Thanks @tisba – I'll take a look in a bit
Just walked into this issue as well. It would be nice if Terraform could warn about invalid network CIDRs and require you to fix them before apply continues.
I'm getting this, but with Security Groups rather than CIDR, and while running Terraform on Ubuntu.
Hey folks, v0.6.12 had some significant improvements to security group handling - if you're seeing issues on that version can you file a fresh issue with steps to reproduce? Thanks!
Still happening for us on 0.6.12.
Hi @soulrebel, can you file your scenario as a fresh issue? Then we can have a look and get you sorted! :+1:
(FWIW the first thing to check is mixing and matching of aws_security_group ingress/egress blocks and aws_security_group_rule resources for the same SG. As per the warning at the top of each resource's docs page, these cannot be mixed without a perpetual diff resulting.)
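To illustrate, here is the standalone-rule style (illustrative names, sketched from the aws_security_group_rule docs); manage all rules this way, or all inline via ingress/egress blocks, but never both for the same group:

resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = "${aws_vpc.vpc-name-1.id}"
  # No inline ingress/egress blocks here; all rules are
  # managed by aws_security_group_rule resources below.
}

resource "aws_security_group_rule" "example_ingress" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"]
  security_group_id = "${aws_security_group.example.id}"
}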
For me this happens when I have a Security Group and Security Group Rules. I do not specify any inline rules. I'm using 0.6.15.
Still happening to me on 0.6.15.
There seems to be a regression in 0.6.15; the same configuration works with 0.6.14. With 0.6.14, terraform plan correctly indicates there is no change; with 0.6.15, terraform plan outputs:
....
  security_groups.#: "0" => "1" (forces new resource)
  security_groups.879193022: "" => "sg-d360d3a8" (forces new resource)
My configuration is similar to the one in the original post, but with correct CIDR blocks. If needed, I can clean up my configuration and provide a simple gist/issue.
Hello Friends –
This is a fairly old issue that seems to be the same as, or closely related to, a recent regression in v0.6.15, as @rzh points out. I detail this regression in this comment here:
We mistakenly broke backwards compatibility with regard to instances created in a VPC but using the security_groups configuration attribute instead of vpc_security_group_ids. We allowed both, but the regression broke that and is causing the plan/diff that people are seeing in issues like #6416.
We may ship a maintenance release to reinstate the old behavior. Going forward, however, v0.7.0 will keep the stricter behavior, and you'll need to upgrade your configuration to use the correct attribute. The master branch already has this change, and the CHANGELOG has been updated to reflect it.
For now, the best workarounds are changing your configuration to use vpc_security_group_ids, or downgrading to v0.6.14.
I sincerely apologize for the surprise and frustration caused by this. Please reply on #6416 if you have any questions here.
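For anyone affected, the configuration-side workaround looks roughly like this (illustrative AMI and subnet values); switch an in-VPC instance from security_groups to vpc_security_group_ids:

resource "aws_instance" "example" {
  ami           = "ami-12345678" # illustrative
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.example.id}" # instance lives in a VPC subnet

  # For instances in a VPC, reference security groups by ID:
  vpc_security_group_ids = ["${aws_security_group.security-group-1.id}"]
}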
This issue has reared its head again in Terraform v0.10.0.
vpc_security_group_ids resolved the issue.
Come to think of it, it's not really an issue; if instances are being created within a VPC, then vpc_security_group_ids should be used. Kind of makes logical sense to me.
also ran into this with 0.10.0
Same on v0.10.6 using security_groups; with vpc_security_group_ids it works correctly.
I'm getting the same with v0.10.7, even when using vpc_security_group_ids.
Same as @peterromfeldhk.
v0.10.7 - vpc_security_group_ids instead of security_groups didn't solve the issue.
Same here with v0.10.8 and using vpc_security_group_ids in place of security_groups.
Hi all! Sorry for these problems.
There is some subtlety here that is perhaps not captured properly in the documentation. This is an old issue from before the AWS provider moved to its own repository, so the maintainers of that provider aren't generally watching this repo anymore, but I believe I know what's going on here:
Terraform attempts to support both the so-called _EC2-Classic_ model and the VPC model, using security_groups for the former and vpc_security_group_ids for the latter. What is perhaps not so clear is how Terraform makes that distinction: any instance created in a VPC that is marked as a _default_ VPC is considered to be an EC2-Classic instance, which must use security_groups with a list of security group names.
Therefore it's important to check whether your target VPC (selected indirectly via subnet_id) is marked as being a default VPC. If it _is_, use security_groups. If it isn't, use vpc_security_group_ids.
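To make that concrete, the two cases look roughly like this (illustrative names and AMI values):

# Non-default VPC: select a subnet and reference security groups by ID.
resource "aws_instance" "in_vpc" {
  ami                    = "ami-12345678" # illustrative
  instance_type          = "t2.micro"
  subnet_id              = "${aws_subnet.private.id}"
  vpc_security_group_ids = ["${aws_security_group.security-group-1.id}"]
}

# Default VPC (treated like EC2-Classic): reference security groups by name.
resource "aws_instance" "in_default_vpc" {
  ami             = "ami-12345678" # illustrative
  instance_type   = "t2.micro"
  security_groups = ["security-group-1"]
}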
If you still see problems with the above advice in mind, I'd recommend opening a fresh issue in the AWS provider repository, completing the steps in the issue template, since you may have found a new issue or a regression relative to the change described here. The Terraform team doesn't generally follow discussion in closed issues, so if you see behavior that seems counter to what's described in a closed issue it's generally better to open a new one to ensure that up-to-date reproduction steps can be captured (similar symptoms don't necessarily mean the same problem) and to make it more likely that maintainers will see it.
Terraform v0.11.2
+ provider.aws v1.28.0
Had this issue but switching from security_groups to vpc_security_group_ids resolved it.
The above still happens on v0.11.13 on Linux.
Had this issue but switching from `security_groups` to `vpc_security_group_ids` resolved it.
This fixed it for me as well.
Terraform v0.11.13
+ provider.aws v2.8.0
+ provider.template v2.1.1
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.