Terraform v0.9.6
resource "azurerm_lb_nat_pool" "playout-lb-nat-pool-ssh" {
count = 4
resource_group_name = "${azurerm_resource_group.psqldemo-rg.name}"
name = "playout-ssh"
loadbalancer_id = "${azurerm_lb.playout-lb.id}"
protocol = "Tcp"
frontend_port_start = 20022
frontend_port_end = 20122
backend_port = 22
frontend_ip_configuration_name = "playout-lb-frontend"
}
resource "azurerm_lb_nat_pool" "playout-lb-nat-pool-rdp" {
count = 4
resource_group_name = "${azurerm_resource_group.psqldemo-rg.name}"
name = "playout-rdp"
loadbalancer_id = "${azurerm_lb.playout-lb.id}"
protocol = "Tcp"
frontend_port_start = 18000
frontend_port_end = 18100
backend_port = 3389
frontend_ip_configuration_name = "playout-lb-frontend"
}
I would expect 8 resources to add in the plan output summary.
Terraform instead reports 16 to add in the plan summary. It always reports double the sum of the count properties; e.g. if I set count = 5 on one resource and count = 4 on the other, it reports 18 to add.
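Concretely, with count = 4 on both NAT pool resources above, the mismatch looks like this (numbers paraphrased from the report above, not verbatim output):

```
$ terraform plan
...
# Expected summary: Plan: 8 to add, 0 to change, 0 to destroy.
# Actual summary:   Plan: 16 to add, 0 to change, 0 to destroy.
```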
Yep, it's definitely a regression of some kind. It's not just Azure; it's the same for the Google provider.
Same for AWS provider :smile:
This is also the case with the DigitalOcean provider and the digitalocean_droplet and digitalocean_tag resources.
```
$ terraform --version
Terraform v0.9.6
```
If any of you are able to share the full output of the terraform plan command (with any sensitive info redacted), I think that would be helpful to determine what situations this arises in.
```
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
digitalocean_ssh_key.REDACTED: Refreshing state... (ID: 8599842)
digitalocean_ssh_key.REDACTED: Refreshing state... (ID: 3563737)
digitalocean_ssh_key.REDACTED: Refreshing state... (ID: 5753440)
digitalocean_ssh_key.REDACTED: Refreshing state... (ID: 7235205)
digitalocean_ssh_key.REDACTED: Refreshing state... (ID: 5111619)
digitalocean_tag.env: Refreshing state... (ID: env:test)
digitalocean_tag.name: Refreshing state... (ID: name:REDACTED)
digitalocean_tag.env: Refreshing state... (ID: env:dev)
digitalocean_tag.app: Refreshing state... (ID: app:REDACTED)
digitalocean_tag.name: Refreshing state... (ID: name:REDACTED)
digitalocean_tag.app: Refreshing state... (ID: app:REDACTED)
digitalocean_tag.app: Refreshing state... (ID: app:REDACTED)
digitalocean_tag.name: Refreshing state... (ID: name:REDACTED)
digitalocean_tag.env: Refreshing state... (ID: env:dev)
digitalocean_tag.app: Refreshing state... (ID: app:REDACTED)
digitalocean_tag.name: Refreshing state... (ID: name:REDACTED)
digitalocean_tag.env: Refreshing state... (ID: env:test)
digitalocean_droplet.droplet: Refreshing state... (ID: 47262134)
digitalocean_droplet.droplet: Refreshing state... (ID: 49786707)
digitalocean_droplet.droplet: Refreshing state... (ID: 48471360)
digitalocean_droplet.droplet: Refreshing state... (ID: 50610892)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
~ module.REDACTED.digitalocean_droplet.droplet
tags.0: "env:test" => "name:REDACTED"
tags.1: "name:REDACTED" => "app:REDACTED"
tags.2: "app:masternode" => "env:test"
+ module.test04.digitalocean_droplet.droplet
backups: "false"
disk: "<computed>"
image: "ubuntu-17-04-x64"
ipv4_address: "<computed>"
ipv4_address_private: "<computed>"
ipv6: "true"
ipv6_address: "<computed>"
ipv6_address_private: "<computed>"
locked: "<computed>"
name: "test04"
price_hourly: "<computed>"
price_monthly: "<computed>"
private_networking: "true"
region: "sfo2"
resize_disk: "true"
size: "1gb"
ssh_keys.#: "1"
ssh_keys.0: "8599842"
status: "<computed>"
tags.#: "<computed>"
user_data: "REDACTED"
vcpus: "<computed>"
+ module.test04.digitalocean_tag.app
name: "app:REDACTED"
+ module.test04.digitalocean_tag.env
name: "env:test"
+ module.test04.digitalocean_tag.name
name: "name:test04"
Plan: 8 to add, 1 to change, 0 to destroy.
$ terraform --version
Terraform v0.9.6
$ uname -a
Darwin fornost 15.6.0 Darwin Kernel Version 15.6.0: Tue Apr 11 16:00:51 PDT 2017; root:xnu-3248.60.11.5.3~1/RELEASE_X86_64 x86_64 i386 MacBookPro11,5 Darwin
```
I am seeing this "double-counting the adds" issue too, starting with 0.9.6
Hi everyone! Sorry for this weird issue.
I was finally able to figure out how to repro this just now: it seems to occur only if at least one resource gets refreshed before making the plan. It doesn't happen if you are creating resources from scratch in an empty state, which is what I'd been trying to do until now.
I'm going to dig in now and see if I can figure out what's going on.
Adding more info: full repro:
resource "aws_instance" "instance" {
count = 2
instance_type = "t2.micro"
ami = "some-ami-id"
tags = {
"type" = "terraform-test-instance"
}
}
Then run terraform apply, bump the count to 3, and run terraform plan. New diff:
```
+ aws_instance.instance.2
ami: "ami-0bd66a6f"
associate_public_ip_address: "<computed>"
availability_zone: "<computed>"
ebs_block_device.#: "<computed>"
ephemeral_block_device.#: "<computed>"
instance_state: "<computed>"
instance_type: "t2.micro"
ipv6_address_count: "<computed>"
ipv6_addresses.#: "<computed>"
key_name: "<computed>"
network_interface.#: "<computed>"
network_interface_id: "<computed>"
placement_group: "<computed>"
primary_network_interface_id: "<computed>"
private_dns: "<computed>"
private_ip: "<computed>"
public_dns: "<computed>"
public_ip: "<computed>"
root_block_device.#: "<computed>"
security_groups.#: "<computed>"
source_dest_check: "true"
subnet_id: "<computed>"
tags.%: "1"
tags.type: "terraform-test-instance"
tenancy: "<computed>"
volume_tags.%: "<computed>"
vpc_security_group_ids.#: "<computed>"
Plan: 2 to add, 0 to change, 0 to destroy.
```
Note that the diff only shows one resource, but 2 to add. If the count gets bumped to 4, the diff will show 2 resources but 4 to add.
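Written out as shell steps, the repro sequence is roughly this (a sketch; the bump from count = 2 to count = 3 is inferred from the diff showing aws_instance.instance.2, and it matches the refresh-before-plan precondition mentioned above):

```
# From an empty state the counts come out correct, so start by applying
# the config above with count = 2.
$ terraform apply

# Edit the config: count = 2 -> count = 3. The two existing instances now
# get refreshed before planning, and one new instance should be added.
$ terraform plan
# The diff lists only "+ aws_instance.instance.2", but the summary
# reports 2 to add instead of 1.
```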
@apparentlymart and I have chatted about this and we think this can be fixed by simplifying the refresh graph expansion/walk behaviour - going to be looking into that now.
More details: it looks like the bug here is superficial. If one saves the plan to disk and then dumps that plan, the correct counts are displayed:
```
$ terraform plan -out tfplan
...
Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution
plan.
Path: tfplan
+ aws_instance.instance.2
ami: "ami-0bd66a6f"
associate_public_ip_address: "<computed>"
availability_zone: "<computed>"
ebs_block_device.#: "<computed>"
ephemeral_block_device.#: "<computed>"
instance_state: "<computed>"
instance_type: "t2.micro"
ipv6_address_count: "<computed>"
ipv6_addresses.#: "<computed>"
key_name: "<computed>"
network_interface.#: "<computed>"
network_interface_id: "<computed>"
placement_group: "<computed>"
primary_network_interface_id: "<computed>"
private_dns: "<computed>"
private_ip: "<computed>"
public_dns: "<computed>"
public_ip: "<computed>"
root_block_device.#: "<computed>"
security_groups.#: "<computed>"
source_dest_check: "true"
subnet_id: "<computed>"
tags.%: "1"
tags.type: "terraform-test-instance"
tenancy: "<computed>"
volume_tags.%: "<computed>"
vpc_security_group_ids.#: "<computed>"
Plan: 2 to add, 0 to change, 0 to destroy.
$ terraform plan tfplan
...
+ aws_instance.instance.2
ami: "ami-0bd66a6f"
associate_public_ip_address: "<computed>"
availability_zone: "<computed>"
ebs_block_device.#: "<computed>"
ephemeral_block_device.#: "<computed>"
instance_state: "<computed>"
instance_type: "t2.micro"
ipv6_address_count: "<computed>"
ipv6_addresses.#: "<computed>"
key_name: "<computed>"
network_interface.#: "<computed>"
network_interface_id: "<computed>"
placement_group: "<computed>"
primary_network_interface_id: "<computed>"
private_dns: "<computed>"
private_ip: "<computed>"
public_dns: "<computed>"
public_ip: "<computed>"
root_block_device.#: "<computed>"
security_groups.#: "<computed>"
source_dest_check: "true"
subnet_id: "<computed>"
tags.%: "1"
tags.type: "terraform-test-instance"
tenancy: "<computed>"
volume_tags.%: "<computed>"
vpc_security_group_ids.#: "<computed>"
Plan: 1 to add, 0 to change, 0 to destroy.
```
So while the output on the first plan is broken (still an issue), it looks like it's just the count during the plan that's messed up, not the whole plan/diff.
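For anyone hitting this in the meantime, the two commands above double as a workaround sketch for checking the real numbers:

```
$ terraform plan -out tfplan   # the summary printed here may inflate the "to add" count
$ terraform plan tfplan        # re-displaying the saved plan shows the correct count
```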
~~For what it's worth, we're experiencing exactly two more resources each time as opposed to double.~~
Please disregard my comment; we are now also seeing double (AWS-only resources).
(original report here: https://github.com/hashicorp/terraform/issues/15140)
When I saw that Terraform 0.9.10 was a hotfix, I assumed it was for this :(
The fix for this is merged in master, so I'm going to close this. It is included in 0.10.0-beta2 and should be included in the final 0.10.0 release as well.
Sorry again for the confusion this causes, and for the delay imposed on the fix by the 0.10.0 release cycle.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.