Terraform: target including extra resources during plan

Created on 8 Mar 2018  ·  13 Comments  ·  Source: hashicorp/terraform

Terraform Version

Terraform v0.11.3

  • provider.null v1.0.0

Terraform Configuration Files

resource "null_resource" "res1" {
  count  = "6"

  provisioner "local-exec" {
    command = "echo hello ${count.index}"
  }
}

resource "null_resource" "res2" {
  depends_on = ["null_resource.res1"]
}

Debug Output

https://gist.githubusercontent.com/jbardin/51a0f3893778dca95a46741550cc2830/raw/18c1f03b7155f56cbcae30ad91713f4d8c5c1f61/gistfile1.txt

Crash Output

none

Expected Behavior

Apply the changes as stated by the plan

Actual Behavior

Nothing; the planned changes are not executed

Steps to Reproduce

terraform init
terraform plan -target=null_resource.res2 -out=/tmp/tf
terraform apply /tmp/tf
=> this works fine so far

Now increase the count/cardinality of null_resource.res1 and rerun:
terraform plan -target=null_resource.res2 -out=/tmp/tf
terraform apply /tmp/tf
=> the planning works fine, but the apply does not touch any resources

Additional Context

For various reasons I split the configuration into several targets (e.g. because my PDNS provider will be set up by the OpenStack provider first, etc.). I do this using dedicated null resources as targets which just reference the dependent resources. Planning and applying the configuration target by target works fine. When afterwards scaling a dependent resource by increasing its count value, the planning as shown above runs fine again. However, when applying the plan nothing happens.
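The "dedicated null resource as target" pattern described above can be sketched roughly like this; the provider resources and all names here are illustrative only, not taken from the actual configuration:

```hcl
# Hypothetical sketch of the pattern: real resources, plus an empty
# null_resource whose only job is to aggregate dependencies so it can
# be used with -target. Resource types/names are assumptions.
resource "openstack_compute_instance_v2" "web" {
  count = 2
  name  = "web-${count.index}"
  # ...
}

resource "powerdns_record" "web" {
  count = 2
  # ... records pointing at the OpenStack instances ...
}

# Aggregation target: the intent is that
#   terraform plan -target=null_resource.stage_dns
# pulls in everything it depends on.
resource "null_resource" "stage_dns" {
  depends_on = ["openstack_compute_instance_v2.web", "powerdns_record.web"]
}
```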

References

bug core v0.11 v0.12

All 13 comments

First line of code was missing:

resource "null_resource" "res1" {
  count = "2"

  provisioner "local-exec" {
    command = "echo hello ${count.index}"
  }
}

resource "null_resource" "res2" {
  depends_on = ["null_resource.res1"]
}

Hi @scat70,

Sorry, I'm not sure exactly what you're expecting to happen here.
If you're only changing the count field on res1, there are no changes that should be applied to res2. What is it you expect to be applied to res2 in the second apply step?

Hi, I would at least expect the apply to follow the generated plan. The plan itself is fine: it shows that it would follow the increased cardinality in res1 and create the new resource:

Plan: 1 to add, 0 to change, 0 to destroy.

However, the apply is just doing nothing:

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Furthermore, this only happens if res2 contains just the dependency and no action. If you add an action/resource to res2, everything is fine.

Hi @scat70,

Thanks, I see the discrepancy now. The apply step is working correctly; the targeted plan is incorrectly displaying extra nodes.

Hmm, for me it's the other way round. While planning, Terraform detects that there is a new/missing resource in res1, which res2 depends on, and thus plans to add it. For me this is the expected behaviour, while the bug is in apply.

By the way, I have the same issue when adding a new third resource res3 and adding it to the dependency list of res2. Planning for res2 detects the new/missing resource res3 and the generated plan is OK (res3 to be instantiated):

Plan: 3 to add, 0 to change, 0 to destroy.

But the apply again simply does nothing:

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

So this issue occurs not only when changing the cardinality of a resource.

@scat70,

If you are targeting res2, then the minimal dependency graph to apply is res2 by itself -- it should not include any other resources. The point of using -target=null_resource.res2 would be so that you could apply a change to res2 _without_ affecting any other resources. Since there is no change to res2, the correct result is 0 changes.
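Given this semantics, one way to make res2 itself change whenever the set of res1 instances changes is to encode the dependency in the null_resource `triggers` map instead of (or in addition to) `depends_on`. This is an untested sketch, not something confirmed in the thread:

```hcl
# Sketch only: recording the res1 instance IDs in `triggers` means
# scaling res1 produces a real diff on res2, so targeting res2 has
# an actual change to apply.
resource "null_resource" "res2" {
  triggers = {
    res1_ids = "${join(",", null_resource.res1.*.id)}"
  }
}
```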

I am experiencing something similar when targeting a firewall rule change like:
terraform plan -target google_compute_firewall.allow_www

The user metadata for an instance was also changed, but -target is being used here to preserve the instance as-is while modifying the firewall rule.

However, the plan output shows the firewall resource changing and the google_compute_instance resource as being destroyed and recreated.

I am experiencing this same issue:

```
❯ terraform version
Terraform v0.11.10

  • provider.aws v1.42.0
  • provider.template v1.0.0

❯ terraform plan -target=module.my_module.module.my_asg.aws_autoscaling_group.my_service -detailed-exitcode -out=my_plan
Terraform will perform the following actions:
~ module.my_module.aws_security_group.my_sg_1
~ module.my_module.aws_security_group.my_sg_2
Plan: 0 to add, 2 to change, 0 to destroy.

❯ terraform apply my_plan
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

❯ terraform plan -target=module.my_module.aws_security_group.my_sg_2 -detailed-exitcode -out=my_plan
~ module.my_module.aws_security_group.my_sg_2
Plan: 0 to add, 1 to change, 0 to destroy.

❯ terraform apply my_plan
module.my_module.aws_security_group.my_sg_2: Modifying...
module.my_module.aws_security_group.my_sg_2: Modifications complete after 1s (ID: sg-abc123)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```

It happens all the time for us; it's pretty much a nightmare. You target an RDS database specifically, and it will by itself decide it needs to update a bunch of other RDS databases too, etc. It's super frustrating given that you can't edit a plan manually to remove those changes.

Same issue for me

Terraform v0.12.16
+ provider.aws v2.36.0

I've confirmed this is working in the current master branch (and likely has been for some time).
Closing for now.

Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
