Terraform: Allow destroy-time provisioners to access variables

Created on 14 Dec 2019  ·  42 comments  ·  Source: hashicorp/terraform

Current Terraform Version

Terraform v0.12.18

Use-cases


Using a local-exec provisioner with when = destroy in a null_resource to remove local changes; currently, the setup provisioner has:
interpreter = [var.local_exec_interpreter, "-c"]

Attempted Solutions


Include:
interpreter = [var.local_exec_interpreter, "-c"]
in the 'when=destroy' provisioner.

This works for the moment, but produces a deprecation warning.
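
For context, the full shape is roughly the following sketch (the script names are placeholders); it is the var.local_exec_interpreter reference inside the when = destroy block that triggers the warning:

resource "null_resource" "setup" {
  provisioner "local-exec" {
    command     = "setup.sh"                         # placeholder
    interpreter = [var.local_exec_interpreter, "-c"]
  }

  provisioner "local-exec" {
    when        = destroy
    command     = "cleanup.sh"                       # placeholder
    interpreter = [var.local_exec_interpreter, "-c"] # emits the deprecation warning
  }
}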

Proposal


The most straightforward way to address this would be to allow variables to be used in destroy provisioners. Direct use of passed-in variables does not present the danger of cyclic dependencies created by resource references.

A more ambitious alternate solution would be to allow resources to declare arbitrary attributes at create time, which could be referenced at destroy time. E.g.:

resource "null_resource" "foo" {
  ...
  self {
    interpreter = var.local_exec_interpreter
  }
  provisioner {
    when = destroy
    interpreter = self.interpreter
    ...
  }
}

This would allow the provisioner (or other resources) to access values calculated and stored in state
before apply. It would require two-pass evaluation (AFAIK), and thus a much more ambitious change to the code base. Even if the community likes this, it would seem quicker (and not really hacky) to allow references to passed-in variables in destroy provisioners.

References


https://github.com/hashicorp/terraform/issues/23675

config enhancement

Most helpful comment

My use case is granting permissions (injecting the connection key) using external variables when running Terraform.
On destroy, I have some pre-destroy cleanup to do on the machine and on some external services.

How would one accomplish that now?

connection {
    type        = "ssh"
    user        = var.vm_admin_user
    private_key = var.ops_private_key
    host        = var.vm_ip_address
}

All 42 comments

The second use case seems especially important: being able to define variables in the state of the null_resource.

In our use case, the null_resource provisioner receives the IP of the VM through its connection block. But now, since the connection block is built using variables, even if the destroy provisioner does not use any variable, it emits a deprecation warning.

Hi @shaunc,

You can already use triggers in null_resource as a place to retain data you need at destroy time:

resource "null_resource" "foo" {
  triggers = {
    interpreter = var.local_exec_interpreter
  }
  provisioner "local-exec" {
    when = destroy

    interpreter = self.triggers.interpreter
    ...
  }
}

We don't intend to make an exception for referring to variables because within descendant modules a variable is just as likely to cause a dependency as anything else. While it is true that _root module_ variables can never depend on anything else by definition, Terraform treats variables the same way in all modules in order to avoid a situation where a module would be invalid as a descendant module but not as a root module.

We'd be interested to hear what your use-case is for customizing the interpreter via a variable like that. Usually a provisioner's interpreter is fixed by whatever syntax the command itself is written in. It may be more practical to address whatever was causing you to do this in the first place and make it unnecessary to do so.
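
To illustrate why a variable inside a child module can itself carry a resource dependency, consider a hypothetical module wired up like this (all names are made up for the sketch):

# In the root module, the caller wires a resource attribute into the
# child module's input variable:
module "cleanup" {
  source     = "./modules/cleanup"               # hypothetical module
  bastion_ip = aws_instance.bastion.public_ip    # hypothetical resource
}

# Inside ./modules/cleanup, a reference to var.bastion_ip in a destroy
# provisioner is therefore effectively a reference to aws_instance.bastion:
resource "null_resource" "deregister" {
  provisioner "local-exec" {
    when    = destroy
    command = "deregister.sh ${var.bastion_ip}"  # placeholder script
  }
}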

Thanks for the explanation. The interpreter is customized just because I was contributing to terraform-aws-eks-cluster -- you'd have to ask them about the design; I do imagine that there are many more uses for variables in when=destroy provisioners, though.

However, I think that using them in triggers may be exactly what I meant when I talked about "two passes". I'm curious why, in your implementation, referring to variables via triggers doesn't cause loops while referring to them directly does, and whether direct reference could just be syntactic sugar for reference via a trigger. Is there a declarative semantic difference between the two?

I think I'm running into a similar situation for the same reasons, though perhaps more mundane. @shaunc's proposals would seem to have merit. Like him, I also understand the concerns around possibly unknown state at destroy-time.

While the documentation currently indicates this should be possible, I'm getting the deprecation warning for the connection block when setting the ssh private key from a variable:

  connection {
    host        = coalesce(self.public_ip, self.private_ip)
    type        = "ssh"
    user        = "ec2-user"
    private_key = file(var.u_aws_keypath)
  }

This is problematic because the private key file isn't stored in the same place on my local Mac as on, say, my co-worker's Windows laptop. This crops up in a couple of other places for us with u_* user-supplied (i.e. from terraform.tfvars) variables. For example:

  provisioner "local-exec" {
    when    = destroy
    command = "knife node delete ${var.node_short_name}-${substr(self.id, 2, 6)} -c ${var.u_knife_rb} -y"
    on_failure = continue
  }

The -c /path/to/knife.rb file location, like the ssh private key, will vary. It doesn't affect the resource itself, but it is required to act sort of on behalf of the resource, or in the resource's name.

Relatedly, var.node_short_name, while not a user-supplied variable, also cannot be referenced here as a variable, per the deprecation. While it would technically be possible to hard-code the value of var.node_short_name, that strongly violates DRY and makes our Terraform code far less flexible.

To elaborate on the example briefly, our Vault cluster consists of AWS instances whose names all begin with vault- and end with a portion of the AWS instance ID: vault-12345a, vault-bcdef0, etc. The value vault obviously will change from stack to stack, depending on the application, but the use of var.node_short_name ideally stays constant between modules and different portions of the TF code.

AFAIK, in none of these cases are we able to store these values in the aws_instance resource itself for use later at destroy-time via self.*. Even if we were, that only covers some of the issue.

Local environment configuration (i.e. the path to the private key or knife.rb) will vary. If I understand it correctly, that's the idea behind terraform.tfvars and --var-file=?
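
For the knife example above, the triggers workaround mentioned earlier in the thread could look roughly like this (a sketch; aws_instance.this is a hypothetical name, and changing either trigger value forces the null_resource to be replaced):

resource "null_resource" "knife_cleanup" {
  triggers = {
    node_name = "${var.node_short_name}-${substr(aws_instance.this.id, 2, 6)}"
    knife_rb  = var.u_knife_rb
  }

  provisioner "local-exec" {
    when       = destroy
    command    = "knife node delete ${self.triggers.node_name} -c ${self.triggers.knife_rb} -y"
    on_failure = continue
  }
}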

My use case is granting permissions (injecting the connection key) using external variables when running Terraform.
On destroy, I have some pre-destroy cleanup to do on the machine and on some external services.

How would one accomplish that now?

connection {
    type        = "ssh"
    user        = var.vm_admin_user
    private_key = var.ops_private_key
    host        = var.vm_ip_address
}
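
The only pattern I can see that is still supported is copying these values into triggers on a null_resource, roughly like the sketch below (with a placeholder cleanup command), but that writes the private key into the state file:

resource "null_resource" "pre_destroy_cleanup" {
  triggers = {
    user        = var.vm_admin_user
    private_key = var.ops_private_key   # ends up in state, which may be unacceptable
    host        = var.vm_ip_address
  }

  provisioner "remote-exec" {
    when   = destroy
    inline = ["/usr/local/bin/pre-destroy-cleanup.sh"]   # placeholder cleanup command

    connection {
      type        = "ssh"
      user        = self.triggers.user
      private_key = self.triggers.private_key
      host        = self.triggers.host
    }
  }
}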

It would be nice if the error message could be improved.

When you read:
Error: local-exec provisioner command must be a non-empty string
and your config looks like:

provisioner "local-exec" {
    command = "echo ${some_resource.some_attribute}"
}

it's not at all obvious that the root cause is that the variable cannot be interpolated.

You can already use triggers in null_resource as a place to retain data you need at destroy time

Unfortunately, those triggers also cause a new resource to be created if they change. If you have more complex idempotence requirements then this won't work.

In my case I compare data from a data source with my local config and derive a trigger value from that. E.g. make the trigger foo if they differ and bar if they're the same.

If the trigger value changes, _then_ I need access to the data/config so I can do things.

If I were to place the actual data/config into the trigger block then my null_resource would get recreated when it's not necessary (which can be destructive).

I like the fact that terraform will isolate the destruction provisioner. But that _does_ necessitate an additional block in the null_resource for storing values that should _not_ trigger recreation of the null_resource.

there's also the use case of:

provisioner "local-exec" {
  when    = destroy
  command = format("%s/scripts/tag.sh", path.module)
  ...

If I want to use a local-exec provisioner for a null_resource in a module, I'll need access to path.module.

It's not really cool if all my resources get recreated when the path of the module changes (which is what would happen if I stored this value in the triggers config of the null_resource).

@sdickhoven the path.module use case is addressed in the recently-closed https://github.com/hashicorp/terraform/issues/23675

I like the idea of a separate block of variables that don't trigger create on change, but are usable in destroy. path.module is just one of many possible things that might be needed; even if path.module is what you need, you often want it in combination with something else, and not referencing the computed value means your code isn't DRY.

I'm having a similar issue with

locals {
  cmd1 = "aws cmd --profile ${var.account_profile} ..."
  cmd2 = "aws cmd --profile ${var.other_account_profile} ..."
}

resource "null_resource" "aws_create_vpc_asocciation_auth" {
  triggers = {
    vpc_id = var.app_vpc_id
  }

  provisioner "local-exec" {
    command = "${local.cmd1} && sleep 30"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "${local.cmd2} && sleep 30"
  }
}

which also results in

```
Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.
```

This wasn't the case with 0.11, and I'm hitting this now with 0.12 (upgrading atm)

I'm using a destroy provisioner to uninstall an application on a remote device, and the only way to provide connection details currently is to hard-code them.

The triggers workaround is not a proper workaround, as it produces significantly different behavior. One solution would be to introduce a new meta-argument, with no other effect, for storing arbitrary data.

Also, would someone from the TF team mind providing an example of how this could produce a circular reference? It's not really clear to me why preventing direct use but allowing indirect use through triggers would change the range of possible dependency graphs - if there are any troublesome patterns, it seems like you could produce them just as easily going through triggers, no?

I have yet another use case where I get this warning and I can't tell how to avoid it. I have aws_instance, aws_ebs_volume and aws_volume_attachment. The instances are rebuilt frequently but the EBS volumes are kept. When the aws_volume_attachment is destroyed, which happens before the instance is destroyed, I need to run a command on the instance to cleanly stop the services that rely on the storage, or they'll crash badly and leave corrupted data. I was able to accomplish this with a destroy-time remote-exec provisioner on the aws_volume_attachment. But after upgrading to 0.12.20 I see this deprecation warning, which I would like to address.

resource "aws_volume_attachment" "kafka-broker-data-encrypted" {
  count       = var.cluster_size
  device_name = var.encrypted_data_device
  instance_id = aws_instance.kafka-broker.*.id[count.index]
  volume_id   = aws_ebs_volume.kafka-data-encrypted.*.id[count.index]

  provisioner "remote-exec" {
    when = destroy
    connection {
      host        = aws_route53_record.kafka-broker.*.name[count.index]
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("${path.root}/keys/${var.ssh_key_name}.pem")
      script_path = "/home/ec2-user/stop.sh"
    }
    script = "${path.module}/templates/stop.sh"
  }
}

Our use case is bastion hosts/users for remote execs. They work fine for the forward provisioners, but not the destroy time provisioner.

https://github.com/brightbox/kubernetes-cluster/blob/21169f9c575316eda10340c95857904fcca89855/master/main.tf#L108

The bastion user is calculated from the creation of the bastion host with Terraform - which in turn depends upon which cloud operating system image the end user selects.

Again I can't see how to get around this without splitting the runs into two.

We are also running into something similar. Following @apparentlymart's suggestion in this thread I tried to rewrite the following problematic template:

resource "null_resource" "kops_delete_flag" {
  triggers = {
    cluster_delete = module.kops.should_recreate
  }

  provisioner "local-exec" {
    when = destroy

    command = <<CMD
    if kops get cluster "${local.cluster_domain_name}" --state "s3://${local.kops_state_bucket}" 2> /dev/null; then
      kops delete cluster --state "s3://${local.kops_state_bucket}" --name "${local.cluster_domain_name}" --yes
    fi
CMD
  }
}

into

resource "null_resource" "kops_delete_flag" {
  triggers = {
    cluster_delete = module.kops.should_recreate

    cluster_domain_name = local.cluster_domain_name
    kops_state_bucket = local.kops_state_bucket
  }

  provisioner "local-exec" {
    when = destroy

    command = <<CMD
    if kops get cluster "${self.triggers.cluster_domain_name}" --state "s3://${self.triggers.kops_state_bucket}" 2> /dev/null; then
      kops delete cluster --state "s3://${self.triggers.kops_state_bucket}" --name "${self.triggers.cluster_domain_name}" --yes
    fi
CMD
  }
}

This creates two problems:

  • Adding triggers causes the null_resource to be replaced, which would in turn cause the K8S cluster to get deleted, which is highly undesirable
  • Even if deleting the cluster was acceptable, terraform refuses to do it with the following error:

```
null_resource.kops_delete_flag: Destroying... [id=2536969715559883194]

Error: 4 problems:

- Missing map element: This map does not have an element with the key "cluster_domain_name".
- Missing map element: This map does not have an element with the key "kops_state_bucket".
- Missing map element: This map does not have an element with the key "kops_state_bucket".
- Missing map element: This map does not have an element with the key "cluster_domain_name".
```

Here's a use-case for variable access: I need to work around https://github.com/hashicorp/terraform/issues/516 / https://support.hashicorp.com/hc/en-us/requests/19325 to avoid Terraform leaking the database master password into the state file.

Note for Hashicorp staff: All of this would be unnecessary if Terraform had the equivalent of CloudFormation's resolve feature or something similar to make aws_ssm_parameter safe to use. We've requested this repeatedly with our account reps. Is there another way we can get that taken seriously?

The solution I came up with was to store that value in SSM using KMS and have a local-exec provisioner populate the RDS instance as soon as it's created. This works well, except that it's now dependent on that destroy provisioner to clean up the SSM parameter before deleting the KMS key.

resource "aws_kms_key" "SharedSecrets" {
  …
  # See the note in the README for why we're generating the secret which this
  # key is used for outside of Terraform. The new password must comply with the
  # restrictions in the RDS documentation:
  # https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html

  provisioner "local-exec" {
    command = "./scripts/set-db-password-parameter ${aws_kms_key.SharedSecrets.key_id} ${local.db_password_ssm_parameter_name}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ssm delete-parameter --name ${local.db_password_ssm_parameter_name}"
  }
}
resource "aws_db_instance" "postgres" {
…
  # This is the other side of the dance described in the README to avoid leaking
  # the real password into the Terraform state file. We set the DB master
  # password to a temporary value and change it immediately after creation. Note
  # that both this and the new password must comply with the AWS RDS policy
  # requirements:
  # https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html
  password = "rotatemerotatemerotateme"

  provisioner "local-exec" {
    command = "./scripts/set-db-password-from-ssm ${self.identifier} ${local.db_password_ssm_parameter_name}"
  }
}

I had the same problem as @Bowbaq. Presumably it's because the saved tfstate doesn't have an old value saved for the new trigger.
I was able to work around it by commenting out the delete provisioner, "deleting" the resource with terraform destroy --target=module.my_module.null_resource.my_name, actually deleting it by manually executing the local-exec command (in my case, kubectl delete), and then re-applying.
But this is both a pain, and doesn't solve the "Adding triggers causes the null_resource to be replaced" bit of the problem.

(In my case, the local variable I was referencing is based on ${path.root}, so it's possible I can rewrite to use the fix for #23675 instead.)

Hello,

Here is my use case:
I'm using a Python script to empty an S3 bucket containing thousands of objects (the "force_destroy = true" option of the aws_s3_bucket Terraform resource is too slow: more than 2 h vs ~1 min with the Python script).

locals {
  # Sanitize a resource name prefix:
  resource_name_prefix = replace(replace("${var.product_name}-${terraform.workspace}", "_", ""), " ", "")
  # data bucket name construction:
  data_bucket_name = "${local.resource_name_prefix}-${var.region}-data"
  tags = {
    "Environment" = terraform.workspace
    "Product"     = lower(var.product_name)
    "TechOwner"   = var.product_tech_owner_mail
    "Owner"       = var.product_owner_mail
  }
}

resource "aws_s3_bucket" "data" {
  bucket        = local.data_bucket_name
  force_destroy = true
  tags          = local.tags
  provisioner "local-exec" {
    when    = destroy
    command = "python ${path.module}/scripts/clean_bucket.py ${self.id} ${var.region} ${terraform.workspace}"
  }
}

I'm only referencing mandatory variables (${var.region} and ${terraform.workspace}), so... where is the cyclic dependency risk?

I have used this kind of thing many times so far and never got a cyclic dependency from it (creating/destroying infrastructure ten times a day).

Furthermore, is it really necessary to deprecate such a useful way of working?
I'm OK with the "cyclic dependency risk" warning (because, yes, it's a nightmare to solve when it happens...), but, from my point of view, once you are aware of the risk, you can simply take the required validation steps and validate the solution once you have figured out whether there is a real cyclic dependency risk or not.

Is this issue really being ignored? The changelog for 0.13.0 says that the warnings described here will become errors:

config: Inside provisioner blocks that have when = destroy set, and inside any connection blocks that are used by such provisioner blocks, it is now an error to refer to any objects other than self, count, or each [GH-24083]

https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md

@hashicorp-support / @apparentlymart: Is this issue really being ignored?

I apologize for leaving this enhancement request hanging for so long. I think the topic on this issue has shifted from the specific suggestion it started with, to a general discussion of how the https://github.com/hashicorp/terraform/pull/23559 deprecation and planned removal of destroy provisioner references negatively impacts people’s workflows.

I genuinely appreciate that people in this discussion have tried to help each other with workarounds, have contributed suggestions about how we might improve this behavior, and have generally done a lot of work to try and find a way to make the new behavior work. I want to be clear that I hear you. The amount of pain that this deprecation causes is greater than what I anticipated, and I want to communicate our current thinking.

The immediate trade-off at hand is that https://github.com/hashicorp/terraform/pull/24083 closes a large number of bugs that a lot of users have found very painful. While many of the people on this thread haven’t encountered cyclical dependency errors, many other users have; take a look at the linked issues it closes to see examples. We need to solve that category of bug, and limiting the scope of destroy provisioner access was by far the best way to do that. We can’t, and won’t, just undo this: inverting a complex graph turned out to be excessively complex, and after years of trying to fix it by coding around each edge case we decided we had to simplify in order for terraform as a whole to be more stable. As painful as this is for users here, there are also users impacted by cycles, with no or awkward workarounds, waiting for 0.13 to fix this.

The root issue here is that, as @dmrzzz called out in the initial report, a custom Terraform provider is the best way to do what most people are trying to do here. I think that people are using destroy provisioners because it’s a lot less work than writing a custom provider; I’m empathetic to why you decided to do that. It is indeed way easier than writing providers, and is an expedient way to work around other legitimate limitations.

As we prioritize our work, we have to prioritize the primary work flow, where it’s currently possible to write hard-to-troubleshoot graph cycles, over the escape hatch of using destroy provisioners to work around other problems, such as the difficulty of writing custom providers, how slow it is to destroy large S3 buckets, or as a way to avoid storing secrets in state. We’re not going to walk back that change, but we do hear your feedback, loud and clear, and based on this feedback will take a second look at whether there's a better path forward.

We’re going to do additional research on our options. I am hopeful that we can make technical improvements to address this use case, but it may be that the best we can do is to provide more explicit guidance about how to migrate away from this use case for provisioners. It will be several weeks before I can give you a good update, but I wanted to give this update in the meantime so you don’t think it’s gone unnoticed.

You can already use triggers in null_resource as a place to retain data you need at destroy time

Unfortunately, those triggers also cause a new resource to be created if they change.
...
I like the fact that terraform will isolate the destruction provisioner. But that _does_ necessitate an additional block in the null_resource for storing values that should _not_ trigger recreation of the null_resource.

It appears that for _some_ use cases I can do this:

resource "null_resource" "whatever" {
  triggers = {
    actual_trigger = ...
    dummy_trigger = ...
  }

  lifecycle {
    ignore_changes = [triggers["dummy_trigger"]]
  }
...
}

but with at least two important caveats:

  1. When I actually run the destroy-time provisioner, it will (of course) use the stored value of dummy_trigger from when the resource was first created, _not_ the value it would have if freshly computed today. That's fine for the use case of keeping a constant DRY, but might be totally useless in other scenarios.
  2. Adding a new key to the triggers map of an existing null_resource for the first time _will_ still trigger a replace, for reasons described in https://www.terraform.io/docs/configuration/resources.html#ignore_changes

Presumably OP's suggested new self {} block (if adopted by the TF team) would be superior to this workaround by letting us re-run terraform apply at will to _update_ the stored values _without_ replacing the objects.
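
For completeness, that workaround spelled out as a full sketch with hypothetical values; at destroy time the provisioner sees whatever dummy_trigger value was stored when the resource was created:

resource "null_resource" "whatever" {
  triggers = {
    actual_trigger = aws_instance.example.id   # hypothetical: changes here still force replacement
    dummy_trigger  = var.cleanup_command       # hypothetical: stored for destroy time only
  }

  lifecycle {
    ignore_changes = [triggers["dummy_trigger"]]
  }

  provisioner "local-exec" {
    when    = destroy
    command = self.triggers.dummy_trigger      # runs the value captured at creation time
  }
}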

Hi,
Using triggers is not applicable for me, for security reasons.
On resource creation, I need a null_resource + local-exec to build an object on a remote server.
On resource destroy, I need to remove this object from the remote server.
To connect to the remote server, I need a token.

Using triggers would store the token in the tfstate, which is not a good practice here.

So the workaround I imagine, when I need to destroy the resources, is:

  • terraform apply with a condition on a variable (destroy_remote_object = true) to create a new null_resource + local-exec, using a count statement, to destroy the object on the remote server (see the sketch below),
  • then terraform destroy of everything.
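
A sketch of that flag-driven approach with hypothetical names; the token is passed through the provisioner's environment so it never has to be stored in triggers or state:

variable "destroy_remote_object" {
  type    = bool
  default = false
}

resource "null_resource" "remove_remote_object" {
  count = var.destroy_remote_object ? 1 : 0

  provisioner "local-exec" {
    command = "remove-remote-object.sh"   # placeholder cleanup script
    environment = {
      API_TOKEN = var.remote_api_token    # hypothetical variable holding the token
    }
  }
}

Run terraform apply with destroy_remote_object = true first so the cleanup runs, then terraform destroy everything.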
Regards

My use case: I'm using docker-machine to handle the installation and configuration of Docker on machines provisioned with Terraform, via a local-exec. A very simple and quick one-liner.

When I destroy, I also want docker-machine rm executed, so that docker-machine stops tracking a machine that won't exist any more.

We also depend on variables in the destroy provisioner command, to revoke the Puppet certificate of an instance.

  provisioner "local-exec" {
    when    = destroy
    command = "${path.module}/scripts/puppet_cert_clean.sh ${local.instance_prefix}.${var.instance_name}${format("%02d", count.index + 1)}"
  }

Please allow $path/$local/$var! :)

My use case involves first provisioning a server and then using the remote-exec provisioner inside a null_resource to register the instance as a GitLab runner. Upon destroy, I need to run a command against the server to unregister the instance as a GitLab runner. I can think of ways to "hard code" all properties within the connection object aside from the host, which must be driven from the previously provisioned server. See the snippet included below for more information.

connection {
  type        = "ssh"
  user        = "ec2-user"
  private_key = file("/tmp/id_rsa")
  host        = aws_instance.default.private_ip
}

Hi all,

I finally found a workaround for my use case (an S3 bucket resource, where a "triggers" block is not allowed); hope it will help some of you:

locals {
  # Sanitize a resource name prefix:
  resource_name_prefix = replace(replace("${var.product_name}-${terraform.workspace}", "_", ""), " ", "")
  tags = {
    "Environment" = terraform.workspace
    "Product"     = lower(var.product_name)
    "TechOwner"   = var.product_tech_owner_mail
    "Owner"       = var.product_owner_mail
  }
}

resource "aws_s3_bucket" "data" {
  bucket            = "${local.resource_name_prefix}-${var.region}-data"
  force_destroy = true
  tags = merge(local.tags, {
    "Name"         = local.incoming_data_bucket_name
    "app:Region" = var.region
    "app:Profile"  = var.aws_cli_profile
  })
  provisioner "local-exec" {
    when        = destroy
    command = "python ${path.module}/scripts/clean_bucket.py -b ${self.id} -r ${self.tags["app:Region"]} -p ${self.tags["app:Profile"]}"
  }
}

Wow, thank you @sdesousa86! :) I just didn't see it but it was so obvious to just use self.name for my example above.

Similar to @davewoodward, I'm using remote-exec with when = destroy to de-register a GitLab runner when terminating the instance (terraform destroy):

resource "aws_instance" "instance" {

  ...snip...

  provisioner "remote-exec" {
    when   = destroy
    inline = [
      "sudo gitlab-runner unregister --all-runners"
    ]

    connection {
      type        = "ssh"
      host        = self.public_ip
      user        = var.host_ssh_user    # deprecated
      private_key = var.host_ssh_pubkey  # deprecated
    }
  }
}

We have sensitive variables in our destroy-time provisioners. Moving them to the triggers block writes them to the state file AND to the console, which we've been trying extremely hard to avoid.

Is there an alternative option available?

@andrew-sumner one thing that comes to mind here is that if those sensitive variables are not written to state, then the Terraform core can't guarantee that they'll be available at destroy time. If you were to remove both the resource itself _and_ the variables it depended on (which is perfectly allowed), and if those values aren't also stored in state, then there's no way to actually execute the destroy provisioner. To the extent that there are workarounds at all, they probably all require writing the values to state.

@xanderflood Why wouldn't the variables used to create a resource be present at destroy time? (I am newish to Terraform so there might be some nuance I'm missing.)

The variables in question are credentials that are provided by the config to the module that is using the local-exec provisioner, so they will always be available, e.g.:

resource "null_resource" "external_resource" {
  triggers = {
    ResourceUrl  = var.ResourceUrl 
    UserName     = var.UserName
  }

  provisioner "local-exec" {
    command = "CreateSomething.ps1 -ResourceUrl ${self.triggers.ResourceUrl} -UserName ${self.triggers.UserName}"
    environment = {
      # Placed here to prevent output to console or state file
      PASSWORD = var.Password
    }
    interpreter = ["Powershell.exe", "-Command"]
  }

  provisioner "local-exec" {
    when    = destroy
    command = "DestroySomething.ps1 -ResourceUrl ${self.triggers.ResourceUrl} -UserName ${self.triggers.UserName}"
    environment = {
      # Placed here to prevent output to console or state file
      PASSWORD = var.Password
    }
    interpreter = ["Powershell.exe", "-Command"]
  }
}

Gives warning

Warning: External references from destroy provisioners are deprecated

  on ..\..\modules\example\main.tf line 41, in resource "null_resource" "external_resource":
  41:     environment = {
  42:       PASSWORD = var.Password
  44:     }

Preventing secrets from being displayed in console output is often a challenge in Terraform (and keeping them out of the state file can be impossible), and this change removes one workaround.

The situation where we are experiencing these warnings is when we detach a volume from an EC2 instance. We have a destroy-time provisioner that connects to the instance and stops some services prior to the detachment, so that no active processes are accessing or writing files to the volume. Not sure how we would do this otherwise. As an organization we are still somewhat new to Terraform, so maybe we just haven't discovered an alternative yet.

@andrew-sumner It may be the case that in your particular workflow the variable will always be available at destroy time, my point was just that the TF core can't assume that they will be.

For instance, what would happen if you applied an empty configuration to your workspace? In that case, you're asking TF to tear down all resources, but the _only_ available information is the state file, so if you want your provisioner to be able to function at destroy time no matter how you choose to destroy things, you have no choice but to store all its prerequisites in the state.

That said, it would be great if we had the option to attach arbitrary values to a resource state _without_ using the triggers block, which has serious side effects.

@jheadley this is working well for me to detach the volume:

 resource "aws_volume_attachment" "ebs_att" {
  count = var.enable_data_persistence ? 1 : 0
  device_name = "/dev/xvdf"
  volume_id   = var.persistent_volume_id
  instance_id = aws_instance.instance.id

  # This provisioner stops the EC2 instance prior to volume detachment. It requires
  # the AWS CLI to be installed and configured for the region the instance is deployed in.
  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 stop-instances --instance-ids ${self.instance_id}"
  }

}

Thank you @bcampoli we'll try that.

Here's my use case for variables. I have an ECS Cluster, and I'm provisioning Services and Task Definitions outside Terraform for it, to handle Blue/Green/Canary deployments.

When I destroy the cluster, I want to run a Python script with local-exec, and that script needs to assume a role on the GitLab runner.

The configuration looks like this:

resource "aws_ecs_cluster" "cluster" {
  name = local.name

  setting {
    name  = "containerInsights"
    value = "enabled"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "${path.module}/scripts/on_destroy_delete_services.py --aws-profile ${var.environment} --cluster-arn ${self.arn}"
  }

  tags = var.tags
}

I need to pass that variable somehow, for boto3 to be able to use the proper AWS_PROFILE and assume the same role Terraform is using.

One potential workaround is to simply expose the aws-profile as an environment variable prior to running the terraform destroy command and let your script read from the environment.
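
Roughly, and assuming the script falls back to the standard AWS_PROFILE environment variable when --aws-profile is not passed:

# In the shell, before `terraform destroy`:  export AWS_PROFILE=my-profile
resource "aws_ecs_cluster" "cluster" {
  name = local.name

  provisioner "local-exec" {
    when    = destroy
    command = "${path.module}/scripts/on_destroy_delete_services.py --cluster-arn ${self.arn}"
  }
}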

@bcampoli There are many workarounds:

  • I can store those in resource tags (not all resources have tags, this was just one example).
  • I can use null_resource with triggers (tricky, as it will trigger a destroy when the trigger variables change, which most of the time is not what I want).
  • I can pass environment variables (ugly, because it's magic, and while I can do this with GitLab, I need to remember to set them correctly when running Terraform locally).

That's not the point. The point is that there are many other scenarios where local-exec is useful but needs external configuration. Disallowing access to variables removes half (if not more) of those scenarios. I understand that we should be using real providers whenever possible, but it's not always possible, and the cases where the local-exec provisioner literally saves people's ass are countless, as you can see from this thread.

I also get that there might be a problem with provisioners accessing dynamic variables that have values resulting from other resources being created in other modules. But Terraform already has a way of knowing whether a variable value is "known" during plan or not (see the for_each / count issues with such variables).

Perhaps it's a good idea to at least allow those that are static, i.e. that just come from a .tfvars file or are hardcoded in a module a few levels up?

The "self" reference is great for 90% of my use case, however I'd really like to keep sensitive variables from being written to the console (not concerned about state).

A possible solution would be a "sensitive_triggers" parameter as follows. Local-exec would need to be smart enough not to display the sensitive values when logging the command and showing the plan.

resource "null_resource" "external_resource" {
  triggers = {
    ResourceUrl  = var.ResourceUrl 
    UserName     = var.UserName
  }
  sensitive_triggers = {
    Password       = var.Password
  }

  provisioner "local-exec" {
    command = "CreateSomething.ps1 -ResourceUrl ${self.triggers.ResourceUrl} -UserName ${self.triggers.UserName} -Password ${self.sensitive_triggers.Password}"
    interpreter = ["Powershell.exe", "-Command"]
  }

  provisioner "local-exec" {
    when    = "destroy"
    command = "DestroySomething.ps1 -ResourceUrl ${self.triggers.ResourceUrl} -UserName ${self.triggers.UserName} -Password ${self.sensitive_triggers.Password}"
    interpreter = ["Powershell.exe", "-Command"]
  }

I have been passing sensitive values to the script by using the local-exec environment parameter; the annoying thing with this approach has been that a change to the sensitive value won't trigger a re-run, and I have to mark the resource as tainted. So a combination of a new sensitive_triggers parameter and the environment parameter would work as well.

  provisioner "local-exec" {
   command = "DestroySomething.ps1 -ResourceUrl ${self.triggers.ResourceUrl} -UserName ${self.triggers.UserName}"
    environment = {
      Password = ${self.sensitive_triggers.Password}
    }
    interpreter = ["Powershell.exe", "-Command"]
  }

Hello,
Can anyone please summarize the current best practice, with Terraform 0.13, for deploying a resource that needs to run a script with a secret passed into it on destroy?
Thanks!

Can anyone please summarize the current best practice, with Terraform 0.13, for deploying a resource that needs to run a script with a secret passed into it on destroy?

Unfortunately, the short answer is that there isn't one. It's sad that destroy-time provisioners only ever ran at the best of times (see #13549), and now the fix for that has been to narrow the scope of their usability.
