Terraform v0.12.6
locals {
  test = true
}

resource "null_resource" "res" {
  lifecycle {
    prevent_destroy = local.test
  }
}

terraform {
  required_version = "~> 0.12.6"
}
terraform init
The documentation notes that
[...] only literal values can be used because the processing happens too early for arbitrary expression evaluation.
so while I'm bummed that this doesn't work, I understand that I shouldn't expect it to.
However, we discovered this behavior because running terraform init failed where it had once worked. And indeed, if you comment out the variable reference in the snippet above and replace it with prevent_destroy = false, it works - and if you then change it back _it keeps working_.
Is that intended behavior? And will it, if I do this workaround, keep working?
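For reference, the literal-only form that terraform init accepts looks like this:

resource "null_resource" "res" {
  lifecycle {
    # a literal is fine; it's swapping this back to local.test
    # afterwards that, surprisingly, keeps working
    prevent_destroy = false
  }
}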
λ terraform init
2019/08/21 15:48:54 [INFO] Terraform version: 0.12.6
2019/08/21 15:48:54 [INFO] Go runtime version: go1.12.4
2019/08/21 15:48:54 [INFO] CLI args: []string{"C:\\Users\\Tomas Aschan\\scoop\\apps\\terraform\\current\\terraform.exe", "init"}
2019/08/21 15:48:54 [DEBUG] Attempting to open CLI config file: C:\Users\Tomas Aschan\AppData\Roaming\terraform.rc
2019/08/21 15:48:54 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/08/21 15:48:54 [INFO] CLI command args: []string{"init"}
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
Error: Variables not allowed
on main.tf line 7, in resource "null_resource" "res":
7: prevent_destroy = local.test
Variables may not be used here.
Error: Unsuitable value type
on main.tf line 7, in resource "null_resource" "res":
7: prevent_destroy = local.test
Unsuitable value: value must be known
Hi @tomasaschan,

prevent_destroy cannot support references like that, so if you are not seeing an error then the bug is that the error isn't being shown; the reference will still not be evaluated.
Just ran into this but with a "normal" variable. It would be great if we could use variables in the lifecycle block, because without variables I'm literally unable to use prevent_destroy in combination with a "Destroy-Time Provisioner" (sketched below) in a module.
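For clarity, by "Destroy-Time Provisioner" I mean the when = destroy form; a minimal sketch, with a placeholder command:

resource "null_resource" "cleanup" {
  provisioner "local-exec" {
    # runs only when the resource is being destroyed
    when    = destroy
    command = "echo 'run cleanup before this resource is destroyed'"
  }
}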
I'm hitting this, too. Please allow variables derived from static values to be used in lifecycle blocks. This would let me effectively use modules to run dev & test environments with the same config as prod, while providing deletion protection for prod resources. AWS RDS has a deletion_protection option that is easy to set. S3 Buckets have an mfa_delete option which is difficult to enable. I found no way to prevent accidental deletion of an Elastic Beanstalk Application Environment.
module "backend" {
source = "../backend"
flavor = "dev"
...
}
resource "aws_elastic_beanstalk_environment" "api_service" {
lifecycle {
prevent_destroy = (var.flavor == "prod") // <-- error
}
...
}
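By contrast, deletion protection that is exposed as an ordinary resource argument can take an expression. A sketch using the RDS option mentioned above (other arguments elided):

resource "aws_db_instance" "api_db" {
  # deletion_protection is a regular argument, not a lifecycle
  # meta-argument, so an expression is allowed here
  deletion_protection = (var.flavor == "prod")
  ...
}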
Seen multiple threads like this. There is an ongoing issue (https://github.com/hashicorp/terraform/issues/3116) which is currently open, but @teamterraform seem to have made that private to contributors only.

Being able to set lifecycle properties from variables is a requirement in a lot of production environments. We are trying to give our development teams control of their infrastructure while maintaining standards through modules. Deployment is 100% automated for us, and if the dev teams need to change or remove a resource, that change will have gone through appropriate testing and peer review before being checked into master and deployed. Our modules need to be able to take lifecycle settings as variables.

Can we get an answer as to why this is not supported?
My use case is very much like @weldrake13's. It would be nice to understand why this can't work.
I would also appreciate it if Terraform allowed variables for specifying prevent_destroy values. As a workaround, since we use the S3 backend for managing our Terraform workspaces, I block access to the Terraform workspace S3 bucket for the Terraform IAM user in my shell script after Terraform has finished creating the prod resources. This effectively locks down the infrastructure in the workspace and requires an IAM policy change to re-enable it.
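Roughly, the lockdown amounts to attaching a deny policy like the following, shown here as Terraform for illustration only (my real version is a shell script, and the names here are hypothetical):

resource "aws_iam_user_policy" "lock_state" {
  name = "deny-terraform-state"   # hypothetical policy name
  user = "terraform-prod"         # hypothetical IAM user

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Deny"
      Action = "s3:*"
      Resource = [
        "arn:aws:s3:::prod-terraform-state",   # hypothetical state bucket
        "arn:aws:s3:::prod-terraform-state/*",
      ]
    }]
  })
}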
I write tests for my modules. I need to be able to re-run tests over and over. There's no way for me to delete buckets in a test account and set protection in a production account. Swing and a miss on this one.
Is there a general issue open with Terraform to improve conditional support? Off the top of my head I can think of the following limitations:

- the provider = argument cannot use conditionals (see the sketch below)

All of these make writing enterprise-level Terraform code difficult and more dangerous.
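For example, the provider meta-argument must be a static provider reference, so a sketch like this (names hypothetical) is rejected:

resource "aws_s3_bucket" "b" {
  bucket = "example-bucket"
  # rejected: provider must be a literal reference such as aws.prod,
  # not an expression
  provider = var.flavor == "prod" ? aws.prod : aws.dev
}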
This is the same as https://github.com/hashicorp/terraform/issues/3116. Can you close, please?
Hashicorp locked down #3116. If this gets closed then those following can't view the issue.
It's over 4 years since #3116 was opened; I think we'd all appreciate some indication of where this stands. Is it still waiting on the proposal mentioned in this comment, #4149?
Thought I'd offer up a workaround I've used in some small cases. The example here is a module for a Google Cloud SQL instance, where obviously in production I want to protect it, but in more ephemeral environments I want to be able to pull the environment down without temporarily editing the code.

It's not pretty, but it works, and is hidden away in the module for the most part:
### variables.tf

variable "conf" {
  type = map(object({
    database_version = string
    ...
    prevent_destroy = string
  }))
  description = "Map of configuration per environment"
  default = {
    dev = {
      database_version = "POSTGRES_9_6"
      ...
      prevent_destroy = "false"
    }
    # add more env configs here
  }
}

variable "env" {
  type        = string
  description = "Custom environment used to select conf settings"
  default     = "dev"
}

### main.tf

resource "google_sql_database_instance" "protected" {
  count = var.conf[var.env]["prevent_destroy"] == "true" ? 1 : 0
  ...
  lifecycle {
    prevent_destroy = "true"
  }
}

resource "google_sql_database_instance" "unprotected" {
  count = var.conf[var.env]["prevent_destroy"] == "false" ? 1 : 0
  ...
  lifecycle {
    prevent_destroy = "false"
  }
}

### outputs.tf

output "connection_string" {
  value = coalescelist(
    google_sql_database_instance.protected.*.connection_name,
    google_sql_database_instance.unprotected.*.connection_name,
  )
  description = "Connection string for accessing database"
}
The module originated prior to 0.12, so those conditionals could well be shortened using bool now (see the sketch below). Also, I appreciate this is one resource duplicated, and it would be much worse elsewhere for larger configurations.
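Roughly, assuming the prevent_destroy flag in conf is retyped as bool, the selection logic would shorten to something like:

variable "conf" {
  type = map(object({
    database_version = string
    prevent_destroy  = bool
  }))
}

resource "google_sql_database_instance" "protected" {
  count = var.conf[var.env].prevent_destroy ? 1 : 0
  # ... as above ...
  lifecycle {
    prevent_destroy = true
  }
}

resource "google_sql_database_instance" "unprotected" {
  count = var.conf[var.env].prevent_destroy ? 0 : 1
  # ... as above ...
  lifecycle {
    prevent_destroy = false
  }
}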
It is so funny. I am asking this question WHY? WHY?
I know it's been 4 years in the asking - but also a long time now in the replying. Commenting on #3119 was locked almost 2 years ago saying "We'll open it again when we are working on this".
Can someone with inner knowledge of work on this "feature" please step up and give us some definitive answers on simple things like where it stands?
Thanks for your work, Hashicorp - this tool is awesome! Not ranting at you, just frustrated that this feature is languishing and I NEED it... now...
@Penumbra69 and all the folks on here: I hear you, and the use cases you're describing totally make sense to me. I'm recategorizing this as an enhancement request because although it doesn't work the way you want it to, this is a known limitation rather than an accidental bug.
Hi team, maybe a duplicate of https://github.com/hashicorp/terraform/issues/3116?