Terraform v0.11.13 (we can't go to v0.12 yet).

Affected resource: `google_container_cluster`.

Part of the module:
```hcl
resource "google_container_cluster" "new_container_cluster" {
  [...]

  private_cluster_config {
    enable_private_nodes   = "${var.enable_private_nodes}"
    master_ipv4_cidr_block = "${lookup(var.master, "master_ipv4_cidr_block", "")}"
  }
}
```
Calling the module:

```hcl
module "gke-cluster" {
  [..]

  master = {
    network                = "${module.apps_infra.vpc_self_link}"
    subnetwork             = "${module.apps_infra.network_self_link}"
    master_ipv4_cidr_block = "192.168.0.0/28"
    enable_private_nodes   = true
  }
}
```
It should create a private GKE cluster. Instead, plan fails with:

```
Error: Error running plan: 1 error(s) occurred:

module.gke-cluster.google_container_cluster.new_container_cluster: 1 error(s) occurred:

module.gke-cluster.google_container_cluster.new_container_cluster: 1 error occurred:
```
`terraform apply` works on provider version 2.11.0 and below.
Hey @allcloud-jonathan!
I'm unable to reproduce this, I tried the following config:
```hcl
provider "google" {}

resource "google_container_cluster" "primary" {
  name = "primary-cluster3"
  zone = "us-central1-a"

  private_cluster_config {
    enable_private_nodes   = "${var.enable_private_nodes}"
    master_ipv4_cidr_block = "${lookup(var.master, "master_ipv4_cidr_block", "")}"
  }

  min_master_version = "1.11"
  initial_node_count = 1
}

variable "master" {
  type = "map"

  default = {
    "master_ipv4_cidr_block" = "192.168.0.0/28"
  }
}

variable "enable_private_nodes" {
  default = "true"
}
```
Are you able to repro this and make a config you can share?
@rileykarson Sorry it took a while to find a minimal version; the module is quite complex and my time is limited.
`module/main.tf`:
```hcl
resource "google_container_cluster" "primary" {
  name       = "primary-cluster3"
  location   = "us-central1-a"
  network    = "${lookup(var.master, "network", "default")}"
  subnetwork = "${lookup(var.master, "subnetwork", "default")}"

  private_cluster_config {
    enable_private_nodes   = "${var.enable_private_nodes}"
    master_ipv4_cidr_block = "${lookup(var.master, "master_ipv4_cidr_block", "")}"
  }

  min_master_version = "1.11"
  initial_node_count = 1
}

variable "master" {
  type = "map"

  default = {
    "master_ipv4_cidr_block" = "192.168.0.0/28"
  }
}

variable "enable_private_nodes" {
  default = "true"
}
```
`main.tf`:
```hcl
provider "google" {}

resource "google_compute_network" "test" {
  name                    = "apps-ci"
  auto_create_subnetworks = "false"
  routing_mode            = "GLOBAL"
}

resource "google_compute_subnetwork" "test" {
  name          = "apps-gke-ci-net"
  ip_cidr_range = "10.100.10.0/23"
  network       = "${google_compute_network.test.self_link}"
}

module "gke" {
  source               = "./module/"
  enable_private_nodes = true

  master = {
    master_ipv4_cidr_block = "192.168.3.0/28"
    network                = "${google_compute_network.test.self_link}"
    subnetwork             = "${google_compute_subnetwork.test.self_link}"
  }
}
```
```
$ /usr/local/bin/terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

Error: 1 error occurred:
	* master_ipv4_cidr_block must be set if enable_private_nodes == true

  on module/main.tf line 1, in resource "google_container_cluster" "primary":
   1: resource "google_container_cluster" "primary" {

$ /usr/local/bin/terraform version
Terraform v0.12.8
+ provider.google v2.14.0
```
Facing this issue on [email protected] as well.
In my case the value for master_ipv4_cidr_block comes from another module's output (which is in the same file).
It also fails even if I add depends_on on that module.
It works if I manually apply the other module first and then run plan on the full configuration.
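That manual two-step workflow can also be expressed with Terraform's `-target` flag (the module address here, `module.apps_infra`, is taken from the earlier config; substitute your own):

```
# Apply only the module that produces the network/subnetwork outputs,
# so those values become known...
$ terraform apply -target=module.apps_infra

# ...then plan and apply the rest of the configuration.
$ terraform plan
$ terraform apply
```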
Thanks @allcloud-jonathan! And thanks for the confirmation @AkarshSatija.
I'm low on cycles to revisit this right now, but I'm hoping to take a look again late this week / early next. If we hit next Wed and I haven't posted an update, please feel free to bump this issue to get my attention.
Hey @rileykarson, just wanted to bump this issue.
Got it, I looked into this and it looks like using lookup to get the element out of the map is confusing Terraform. While the value should be available at plan time for validation, it isn't for whatever reason: using a top-level variable and indexing the map directly with var.master["master_ipv4_cidr_block"] both work.
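For anyone hitting this before a fix lands, a sketch of those two workarounds against the repro config above (field names match the earlier module):

```hcl
# Workaround 1: index the map directly instead of using lookup()
private_cluster_config {
  enable_private_nodes   = "${var.enable_private_nodes}"
  master_ipv4_cidr_block = "${var.master["master_ipv4_cidr_block"]}"
}

# Workaround 2: pass the CIDR as a top-level variable instead of a map entry
variable "master_ipv4_cidr_block" {
  default = "192.168.0.0/28"
}
```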
I'll file an issue upstream on hashicorp/terraform + bring up removing the validation we're hitting.
After playing with this some more, this is a really unfortunate and unintuitive interaction. Terraform has a concept of "known" and "unknown" values at plan time; known values are sourced from the user's config and from the provider as defaults. Unknown values are typically outputs, sourced after creating the resource.
In your config, intuitively, enable_private_nodes and master["master_ipv4_cidr_block"] are known (they're provided in config!) and master["subnetwork"] is unknown.
In practice, though, since master is a map, any unknown values in the map cause Terraform to treat the entire map as unknown. Therefore since the "subnetwork" entry is unknown, "master_ipv4_cidr_block" is unknown as well.
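A minimal illustration of that propagation, using the map from the repro config (the subnetwork self_link is unknown because the subnetwork hasn't been created yet):

```hcl
master = {
  master_ipv4_cidr_block = "192.168.3.0/28"                             # literal: intuitively known
  subnetwork             = "${google_compute_subnetwork.test.self_link}" # unknown until apply
}

# Because one entry is unknown, Terraform treats the entire map as unknown
# at plan time, so this lookup also yields an unknown value and the
# provider's validation sees it as unset ("").
# lookup(var.master, "master_ipv4_cidr_block", "")
```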
We have a validation in the provider that asserts "if enable_private_nodes is set to true, and master_ipv4_cidr_block is not set, throw an error". Implicitly, the first bit assumes that enable_private_nodes is a known value because it won't have a value otherwise (and can't be true). What we didn't account for was the case where master_ipv4_cidr_block could be set by the user but unknown to Terraform. An unset field and an unknown field both appear as their type's empty value to the validation.
If both enable_private_nodes and master_ipv4_cidr_block are set through the master map, they're both unknown and we skip the validation because enable_private_nodes appears to be false. If they're both top-level, both values will be known and the validation will succeed.
Luckily, we have a function that allows us to check whether a value is known, and I can fix this behaviour. Speaking pragmatically, I'd advise you to avoid mixing known and unknown values using maps like this in the future. This isn't something we [provider authors] have a great mechanism to test for, and I expect similar issues to pop up every once in a while.
Thanks @rileykarson for the detailed explanation. I understand that my use of the map is a little suboptimal and will work on optimising my code.
That said, what @AkarshSatija described, feeding in the output of another module, is rather common, I'd say.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!