Terraform v0.12.25 - Terraform Cloud
resource "google_container_cluster" "cluster" {
provider = google-beta
name = var.CLUSTER_NAME
location = var.CLUSTER_ZONE
description = var.CLUSTER_DESCRIPTION
// https://www.terraform.io/docs/providers/google/r/container_cluster.html
// We can't create a cluster with no node pool defined, but we want to only use
// separately managed node pools. So we create the smallest possible default
// node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
enable_binary_authorization = false
enable_kubernetes_alpha = false
enable_legacy_abac = false
enable_shielded_nodes = true
enable_intranode_visibility = true
default_max_pods_per_node = 110
logging_service = "logging.googleapis.com/kubernetes"
monitoring_service = "monitoring.googleapis.com/kubernetes"
network = google_compute_network.cluster_vpc.self_link
subnetwork = google_compute_subnetwork.cluster_vpc_subnetwork.self_link
resource_labels = var.GOOGLE_LABELS
addons_config {
horizontal_pod_autoscaling {
disabled = false
}
http_load_balancing {
disabled = false
}
network_policy_config {
disabled = false
}
}
cluster_autoscaling {
enabled = false
}
database_encryption {
state = "ENCRYPTED"
key_name = google_kms_crypto_key.cluster_kms_key_etcd.self_link
}
ip_allocation_policy {
cluster_ipv4_cidr_block = ""
services_ipv4_cidr_block = ""
}
maintenance_policy {
daily_maintenance_window {
start_time = "11:00"
}
}
master_auth {
client_certificate_config {
issue_client_certificate = false
}
}
master_authorized_networks_config {
dynamic cidr_blocks {
for_each = var.CLUSTER_MASTER_AUTHORIZED_IPS
content {
display_name = cidr_blocks.key
cidr_block = cidr_blocks.value
}
}
}
network_policy {
enabled = true
}
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = false
master_ipv4_cidr_block = "172.16.0.0/28"
}
release_channel {
channel = "REGULAR"
}
}
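For reference, a separately managed node pool like the one described in the inline comment might look roughly like the sketch below; the pool name, node count, and machine type are illustrative assumptions, not values from the original report:

resource "google_container_node_pool" "primary" {
  provider = google-beta

  // Hypothetical placeholder values, for illustration only.
  name     = "primary-node-pool"
  location = var.CLUSTER_ZONE
  cluster  = google_container_cluster.cluster.name

  node_count = 3

  node_config {
    machine_type = "n1-standard-1"
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }
}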
Expected behavior: the cluster should be created.
Actual behavior: the plan was successful, but the apply got an error from the Google APIs:
Error: googleapi: Error 400: DefaultMaxPodsConstraint can only be used if IpAllocationPolicy.UseIpAliases is true., badRequest
  on .terraform/modules/k8s/main.tf line 81, in resource "google_container_cluster" "cluster":
  81: resource "google_container_cluster" "cluster" {
Steps to reproduce: terraform apply. The plan is run from Terraform Cloud.
Also, we have been running this config every morning, with success until today.
With provider.google v3.29 it was failing for me as well.
I had to downgrade to terraform-provider-google_v3.14.0_x5 to make it work again.
I found a successful workaround with the beta argument networking_mode set to VPC_NATIVE.
I wonder if the problem is the default blank cluster_ipv4_cidr_block and services_ipv4_cidr_block in the ip_allocation_policy block, which no longer causes VPC_NATIVE mode to be inferred for the cluster?
FYI: My configuration now looks like this:
resource "google_container_cluster" "cluster" {
provider = google-beta
name = var.CLUSTER_NAME
location = var.CLUSTER_ZONE
description = var.CLUSTER_DESCRIPTION
// https://www.terraform.io/docs/providers/google/r/container_cluster.html
// We can't create a cluster with no node pool defined, but we want to only use
// separately managed node pools. So we create the smallest possible default
// node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
enable_binary_authorization = false
enable_kubernetes_alpha = false
enable_legacy_abac = false
enable_shielded_nodes = true
enable_intranode_visibility = true
default_max_pods_per_node = 110
logging_service = "logging.googleapis.com/kubernetes"
monitoring_service = "monitoring.googleapis.com/kubernetes"
networking_mode = "VPC_NATIVE" // Added to avoid cluster creation error
network = google_compute_network.cluster_vpc.self_link
subnetwork = google_compute_subnetwork.cluster_vpc_subnetwork.self_link
resource_labels = var.GOOGLE_LABELS
addons_config {
horizontal_pod_autoscaling {
disabled = false
}
http_load_balancing {
disabled = false
}
network_policy_config {
disabled = false
}
}
cluster_autoscaling {
enabled = false
}
database_encryption {
state = "ENCRYPTED"
key_name = google_kms_crypto_key.cluster_kms_key_etcd.self_link
}
ip_allocation_policy {
cluster_ipv4_cidr_block = ""
services_ipv4_cidr_block = ""
}
maintenance_policy {
daily_maintenance_window {
start_time = "11:00"
}
}
master_auth {
client_certificate_config {
issue_client_certificate = false
}
}
master_authorized_networks_config {
dynamic cidr_blocks {
for_each = var.CLUSTER_MASTER_AUTHORIZED_IPS
content {
display_name = cidr_blocks.key
cidr_block = cidr_blocks.value
}
}
}
network_policy {
enabled = true
}
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = false
master_ipv4_cidr_block = "172.16.0.0/28"
}
release_channel {
channel = "REGULAR"
}
}
Just confirmed that the workaround is working. I spent a good 2-3 hours wondering why everything was fine just 2 days ago...
Thanks @Eshanel, I owe you one for this.
The latest working provider is 3.28, as far as I can see in my Cloud Build history...
As @jahernandezmartinez13 pointed out, just using a previous version works. But since it took me a while to find the syntax for specifying versions, I'm posting what helped me here:
provider "google" {
version = "~> 3.18, != 3.29.0"
project = local.gcp_project
region = local.region
}
provider "google-beta" {
version = "~> 3.18, != 3.29.0"
project = local.gcp_project
region = local.region
}
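As an aside, Terraform 0.12 also accepts the same constraint strings in a required_providers block, which keeps the pinning in one place; a minimal sketch with the constraint above:

terraform {
  required_providers {
    // Same version constraint as the provider blocks above.
    google      = "~> 3.18, != 3.29.0"
    google-beta = "~> 3.18, != 3.29.0"
  }
}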
Hi folks,
I'm just here to comment that I ran into a related issue with a different error message. While attempting to create a private cluster, I was receiving:
Alias IP addresses are required for private cluster, please make sure you enable alias IPs when creating a cluster.
My ip_allocation_policy is using cluster_secondary_range_name and services_secondary_range_name to call out specific ranges in the network to pull addresses from. It appears I had to specify networking_mode = "VPC_NATIVE" explicitly to get things working; see the sketch below.
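For illustration, the shape I'm describing looks roughly like this; the range names ("pods", "services") and the cluster name/zone are hypothetical placeholders, not my actual values:

resource "google_container_cluster" "private_cluster" {
  // Hypothetical name and zone, for illustration only.
  name     = "private-cluster"
  location = "us-central1-a"

  // Setting networking_mode explicitly is what resolved the
  // "Alias IP addresses are required for private cluster" error here.
  networking_mode = "VPC_NATIVE"

  ip_allocation_policy {
    // Secondary ranges assumed to already exist on the subnetwork;
    // the names are illustrative.
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  // GKE still requires a default node pool at creation time.
  initial_node_count = 1
}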
Sorry for the mix-up with this! Yes, networking_mode = "VPC_NATIVE" is what we want to use going forward. Previously it was inferred as the default whenever ip_allocation_policy was set; we changed that in the most recent release and unfortunately missed this backwards-compatibility issue in the implementation. I have created a PR to address this change. Thanks for your patience!
No worries, thanks for the quick reply
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!