Terraform v0.12.28
+ provider.google v3.29.0
+ provider.google-beta v3.29.0
resource "google_container_cluster" "cluster" {
  provider = google-beta

  name           = var.cluster_name
  location       = var.region
  node_locations = var.zones

  network    = google_compute_network.gke.self_link
  subnetwork = google_compute_subnetwork.gke.self_link

  ip_allocation_policy {}

  initial_node_count = 1

  # (+ some other irrelevant settings)

  release_channel {
    channel = "REGULAR"
  }
}
The cluster should not be replaced without changing any of its settings.
Instead, the cluster gets replaced every time, with the reason given as:
      + ip_allocation_policy { # forces replacement
          + cluster_ipv4_cidr_block       = (known after apply)
          + cluster_secondary_range_name  = (known after apply)
          + node_ipv4_cidr_block          = (known after apply)
          + services_ipv4_cidr_block      = (known after apply)
          + services_secondary_range_name = (known after apply)
          + subnetwork_name               = (known after apply)
        }
(output of `terraform plan`)

This is only happening for a newly deployed cluster. We have an identical cluster deployed in another project from the same config a few Terraform versions ago, and that one is not affected.
This makes me think this is possibly a new bug, where the provider no longer stores ip_allocation_policy state for new clusters, so the block has to be re-added on every apply.
Additionally, I tried hard-coding
ip_allocation_policy {
  cluster_ipv4_cidr_block  = "10.24.0.0/14"
  services_ipv4_cidr_block = "10.219.0.0/20"
}
from the "healthy" state, and confirmed that it is not stored in the new state either:
"ip_allocation_policy": [],
The only other workaround for now seems to be to perform surgery on the state file.
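Another stopgap, short of editing state by hand, might be to tell Terraform to ignore drift on the offending block via the standard `lifecycle` meta-argument. This is only a hypothetical sketch to silence the spurious diff, not a fix for the underlying bug:

```hcl
resource "google_container_cluster" "cluster" {
  # ... existing settings from the config above ...

  ip_allocation_policy {}

  lifecycle {
    # Suppress the spurious "forces replacement" diff on this block.
    # Caveat: this also hides *real* changes to ip_allocation_policy,
    # so it should be removed once the provider fix lands.
    ignore_changes = [ip_allocation_policy]
  }
}
```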
Hi @dinvlad! Sorry for the inconvenience. In the last release we added a new attribute, networking_mode, which can be either "VPC_NATIVE" or "ROUTES"; if it's not set, it defaults to whatever the API returns. In the past, if ip_allocation_policy showed up in the config, we would essentially default the networking mode to VPC_NATIVE. With this change the user is allowed to set it explicitly, and we missed the backward compatibility. (This should be fixed with https://github.com/terraform-providers/terraform-provider-google-beta/pull/2260.)
That being said, I'm guessing you were trying to create a VPC-native cluster, but since ip_allocation_policy was empty and networking_mode wasn't set, a routes-based cluster was created instead, so you will likely want to re-create the cluster. Setting networking_mode = "VPC_NATIVE" should fix it.
Again, I'm sorry for the inconvenience, please let me know if I can help in any other way. Thanks!
I see - thanks for the explanation!
So it seems like we should just set this?
networking_mode = "VPC_NATIVE"
ip_allocation_policy {}
I can see "networking_mode": "VPC_NATIVE" in my "healthy" state already, so this config makes no change there. It will re-create the "new" cluster, however, as expected.
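Putting the pieces from this thread together, the corrected resource would look roughly like this. This is a sketch assembled from the snippets above (variable and resource names are from the original report), not an authoritative configuration:

```hcl
resource "google_container_cluster" "cluster" {
  provider = google-beta

  name           = var.cluster_name
  location       = var.region
  node_locations = var.zones

  network    = google_compute_network.gke.self_link
  subnetwork = google_compute_subnetwork.gke.self_link

  # Explicitly request a VPC-native cluster; with provider >= 3.29,
  # an empty ip_allocation_policy block alone no longer implies this.
  networking_mode = "VPC_NATIVE"
  ip_allocation_policy {}

  initial_node_count = 1

  release_channel {
    channel = "REGULAR"
  }
}
```

Note that applying this to an existing routes-based cluster will force re-creation, as discussed above.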
Hi @dinvlad! Thank you for your understanding! Yup, that sounds right!
If I'm understanding correctly: your "healthy" cluster looks correct in the console, and the plan shows no change when you add networking_mode = "VPC_NATIVE". The "new" cluster refers to a separate cluster that is currently routes-based but that you wanted to be VPC-native; when you add networking_mode = "VPC_NATIVE" there, the plan shows the re-creation, which (as you said) is expected and is what you're looking for.
If I understood you correctly, then yes, you should be good to go! I'll close this issue, but please feel free to re-open if I misunderstood and you're still running into issues.
Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!