Terraform v0.12.1
+ provider.acme v1.3.0
+ provider.archive v1.2.2
+ provider.aws v2.11.0
+ provider.local v1.2.2
+ provider.null v2.1.2
+ provider.random v2.1.2
+ provider.template v2.1.2
+ provider.tls v2.0.1
module "instance" {
  source = "./instance"
  foo    = null
}

variable "foo" {
  default = "bar"
}
Expected behavior: the default value is used.
https://www.terraform.io/docs/configuration/expressions.html
Finally, there is one special value that has no type:
null: a value that represents absence or omission. If you set an argument of a resource or module to null, Terraform behaves as though you had completely omitted it; it will use the argument's default value if it has one, or raise an error if the argument is mandatory. null is most useful in conditional expressions, so you can dynamically omit an argument if a condition isn't met.
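For illustration only (this example is mine, not from the docs; the variable var.ssh_key_name is hypothetical), here is a minimal sketch of the pattern that last sentence describes, where passing null for an optional argument is meant to behave as if the argument were omitted:

```hcl
variable "ssh_key_name" {
  type    = string
  default = ""   # empty means "no key pair"; hypothetical variable for this sketch
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"

  # key_name is optional: when var.ssh_key_name is empty we pass null,
  # which the docs above say behaves as if the argument were omitted.
  key_name = var.ssh_key_name != "" ? var.ssh_key_name : null
}
```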
Hi @bohdanyurov-gl! Sorry for this inconsistency, and thanks for reporting it.
The key subtlety which that text is implying but not explicitly stating is that count is not "an argument of a resource"... it is a so-called "meta-argument" which is handled by Terraform Core itself, and has some special behaviors associated with it due to how it affects the construction of the graphs Terraform uses.
In particular, the presence of count cannot be conditional, because its presence causes references to the resource to produce a sequence of objects rather than a single object, and so the meaning of downstream references (which is analyzed and validated statically) depends on whether it is set.
I assume your goal here was for count to be zero if the variable is null. If so, one way to write that is coalesce(var.iam_enable_ssm_access, false) to provide a default value of false when that variable is not set.
We'll use this issue to represent finding a way to be more explicit about this requirement in the documentation. Thanks again for reporting this!
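To make that suggestion concrete, here is a minimal sketch of how the coalesce() fallback might be combined with count (the resource and policy ARN are illustrative, not from the original report):

```hcl
variable "iam_enable_ssm_access" {
  type    = bool
  default = null
}

resource "aws_iam_role_policy_attachment" "ssm" {
  # coalesce() falls back to false when the variable is null, so the
  # count expression never has to evaluate a null condition.
  count = coalesce(var.iam_enable_ssm_access, false) ? 1 : 0

  role       = aws_iam_role.example.name   # assumes an aws_iam_role.example defined elsewhere
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```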
I'm having a similar issue, but I think I'm using a null value more in line with what the documentation says.
$ terraform -v
Terraform v0.12.5
+ provider.alks v1.3.0
+ provider.archive v1.2.0
+ provider.aws v2.21.1
+ provider.external v1.2.0
+ provider.null v2.1.0
+ provider.random v2.2.0
+ provider.template v2.1.0
# main.tf
variable "env" {
  default = "dev"
  type    = "string"
}

module "rds" {
  source         = "./modules/rds"
  instance_class = var.env == "prod" ? "db.m5.large" : null
}

# modules/rds/main.tf
variable "instance_class" {
  default = "db.t2.small"
  type    = string
}

resource "aws_db_instance" "main" {
  instance_class = var.instance_class
}
Error: "instance_class": required field is not set
on ../modules/rds/main.tf line 15, in resource "aws_db_instance" "main":
15: resource "aws_db_instance" "main" {
So in this case, it looks like Terraform is taking the null value provided to the rds module and using it as a literal value, all the way to the aws_db_instance resource, rather than picking up the default value specified in the sub-module. Is this expected behavior?
EDIT: my intent here is to say, "if this is production, use a larger instance, otherwise, whatever default the module provides is fine"
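A possible workaround under the current behavior (a sketch only, not something confirmed by the maintainers in this thread) is to guard against the null inside the child module with coalesce(), at the cost of stating the default twice:

```hcl
# modules/rds/main.tf (sketch)
variable "instance_class" {
  type    = string
  default = null   # callers may pass null explicitly
}

resource "aws_db_instance" "main" {
  # Fall back to the intended default when the caller passes null.
  instance_class = coalesce(var.instance_class, "db.t2.small")
}
```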
I encountered something that _might_ be related: I have a Google Cloud SQL instance that may be either zonal or regional, based on a bool var.is_regional:
resource "google_sql_database_instance" "sql_instance" {
  # ...
  settings {
    # ...
    location_preference {
      zone = var.is_regional ? null : var.cluster_location
    }
  }
}
When var.is_regional is set to true, Terraform does not seem to set a preferred zone. The resource was automatically assigned the zone europe-west1-d by Google Cloud. However, Terraform treats the null in the config as a literal value(?), so now I always get this diff when running terraform plan or terraform apply:
~ location_preference {
- zone = "europe-west1-d" -> null
}
I don't know how to best handle this case. The best outcome for me would be to remove the whole block if var.is_regional is true, but I don't believe that's possible.
We are seeing the same behavior on vSphere as @theneva is reporting: Terraform keeps thinking the resource has changed from the sane defaults.
```
- boot_delay = 0 -> null
boot_retry_delay = 10000
- boot_retry_enabled = false -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- custom_attributes = {} -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- extra_config = {} -> null
```
A small sample of the affected attributes is shown above.
Edit: It turns out that this was all linked to a null value (like above) being passed to organization_name and full_name (this appears to be new behavior in TF 0.12, but I will confirm), which caused the resource to be destroyed and recreated. Setting those variables explicitly instead of relying on the defaults resolved the issue.
windows_options {
  organization_name = "${var.win_organization_name}"
  full_name         = "${var.win_full_name}"
  <Snip>
}
Yup, experiencing similar behaviour while trying to use module defaults. Any suggestions for this? (tf v0.12.9)
@theneva I believe you could use a dynamic block and write that condition in its for_each so it returns an empty list when you don't want the block to be created. Hacky, but it could work.
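A sketch of that suggestion applied to the earlier google_sql_database_instance example (reusing var.is_regional and var.cluster_location from that comment): the for_each list is empty when the instance is regional, so the block is omitted entirely.

```hcl
resource "google_sql_database_instance" "sql_instance" {
  # ...
  settings {
    # ...
    # Generate location_preference only for zonal instances.
    dynamic "location_preference" {
      for_each = var.is_regional ? [] : [var.cluster_location]
      content {
        zone = location_preference.value
      }
    }
  }
}
```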
Clever! I'm currently checking out Pulumi (because of this class of problems, and support for Kubernetes custom resource definitions), but might give it a shot if we decide to stick with Terraform :smile:
Hi @apparentlymart, I think this issue should still be labelled as "bug" and not "documentation", as this is happening on regular arguments, as shown in @bohdanyurov-gl's foo example (independently of count).
```hcl
# test.tf
module instance {
  source = "./instance"
  foo    = null
}
```

```hcl
variable foo {
  default = "bar"
}
```

```hcl
# instance/main.tf
resource "null_resource" instance_foo {
  provisioner "local-exec" {
    command = "echo ${var.foo}"
  }
}
```
gives the following error after terraform apply on test.tf:
module.instance.null_resource.instance_foo: Creating...
Error: Invalid template interpolation value: The expression result is null. Cannot include a null value in a string template.
I can reproduce this with the use case in the comment immediately above mine. I'm working around it with
variable "image" {
  type = string
  // when this issue is resolved, set this to the actual default image we want
  // https://github.com/hashicorp/terraform/issues/21702
  default = ""
}
And then, inline in the module definition:
image = coalesce(var.image, "ubuntu/bionic")
or whatever default value you want to actually use.
reproduced with Terraform v0.12.20
I'm having issues where Terraform is not omitting a property when null is returned from lookup(), like this: ip_restriction = lookup(site_config.value, "ip_restriction", null)
This construction works fine for all string properties, but not for this particular property, which expects a list of objects.
The property is successfully ignored when I set its value to null directly (without lookup).
Is this related?
reproduced with Terraform v0.12.21
Is there a fix or workaround?
When running terraform plan on version 0.12.23, I am facing the following issue: the default value is being replaced with null. This was working fine in Terraform 0.11.11.
Current
{
- action = "allow"
- cidr_block = "172.16.12.0/23"
- from_port = 0
- icmp_code = 0
- icmp_type = 0
- ipv6_cidr_block = ""
- protocol = "-1"
- rule_no = 106
- to_port = 0
},
New
+ {
+ action = "allow"
+ cidr_block = "172.16.12.0/23"
+ from_port = 0
+ icmp_code = null
+ icmp_type = null
+ ipv6_cidr_block = null
+ protocol = "-1"
+ rule_no = 106
+ to_port = 0
},
@jbardin, @bohdanyurov-gl, @apparentlymart - could we set this back to "bug" as described above?
I see the same behavior on 0.12.24:
logging_config {
  bucket          = var.s3_bucket_access_logs == "" ? null : var.s3_bucket_access_logs
  prefix          = var.s3_access_logs_prefix == "" ? null : var.s3_access_logs_prefix
  include_cookies = var.log_cookies == "" ? null : var.log_cookies
}
This returns Error: "default_cache_behavior.0.lambda_function_association.0.lambda_arn": required field is not set
with all three variables defaulted to "".
The same behavior occurs if the assignment is direct,
bucket = var.s3_bucket_access_logs == "" ? null : var.s3_bucket_access_logs
with the values defaulted to null.
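If the underlying problem is the conditionally-empty logging_config, the dynamic-block idea suggested earlier in this thread could be a workaround. This is only a sketch (it assumes logging should be disabled whenever the bucket variable is empty, and that the resource is an aws_cloudfront_distribution):

```hcl
# Sketch: emit logging_config only when a log bucket is configured.
dynamic "logging_config" {
  for_each = var.s3_bucket_access_logs == "" ? [] : [var.s3_bucket_access_logs]
  content {
    bucket          = logging_config.value
    prefix          = var.s3_access_logs_prefix
    include_cookies = var.log_cookies
  }
}
```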
On 0.12.25, expanding a bit on @scodeman's response: I'm trying to use this to safely wrap a base module with a few differences. A minimal example:
$ tree /tmp/terraform_minimum_case/
/tmp/terraform_minimum_case/
├── base
│   └── main.tf
└── wrapper
    └── main.tf
$ cat terraform_minimum_case/base/main.tf
resource "null_resource" base {
  provisioner "local-exec" {
    command = "echo hello ${var.foo}"
  }
}

variable foo {
  default = "world"
}
$ cat terraform_minimum_case/wrapper/main.tf
module "wrapped_base" {
  source = "/tmp/terraform_minimum_case/base"
  foo    = var.bar == "" ? null : var.bar
}

variable bar {
  default = ""
}
$ terraform apply
Error: Invalid template interpolation value: The expression result is null. Cannot include a null value in a string template.
$ terraform apply -var 'bar=mundo'
module.wrapped_base.null_resource.base (local-exec): Executing: ["/bin/sh" "-c" "echo hello mundo"]
module.wrapped_base.null_resource.base (local-exec): hello mundo
This also happens for required arguments, such as on the aws_ami data resource (i.e. Null values are not allowed for this attribute value. when the default is clearly set in the base module, as I have this set up).
This seems to be what null is supposed to be for, so this does indeed seem like a bug.
This is still broken on 0.13.2
This is still broken on 0.13.3
Is this solved by module_variable_optional_attrs in the 0.14 release here? https://github.com/hashicorp/terraform/releases/tag/v0.14.0
I don't think so. That's a different feature for a different purpose.
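For context, a sketch of what that 0.14 experiment provides as I understand it (this example is mine, not from the thread): it lets callers omit individual attributes of an object-typed variable, which is not the same as falling back to a declared default when null is passed.

```hcl
terraform {
  # Experimental in Terraform 0.14.
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name     = string
    location = optional(string)   # may be omitted by the caller; it then becomes null
  })
}
```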