Terraform v0.12.24
+ provider.google v3.14.0
terraform {
  required_version = ">= 0.12.14"
}

provider "google" {
  project = "repro-deadbeef"
  region  = "us-west1"
}

variable "node_count" { default = 1 }
variable "region" { default = "us-central1" }
variable "machine_type" { default = "n1-standard-1" }

locals {
  resource_labels = {
    terraform = true
    app       = "repro"
  }
  labels = {
    "disable-legacy-endpoints" = "true"
    # This is the problem, uncomment the next line to fail the apply:
    # "app.kubernetes.io/name" = "repro"
  }
}

resource "google_container_cluster" "regional" {
  name                     = "repro"
  remove_default_node_pool = true
  initial_node_count       = var.node_count
  resource_labels          = local.resource_labels
  location                 = var.region

  node_config {
    labels = local.labels
  }
}

resource "google_container_node_pool" "poolset" {
  name       = "repro"
  location   = var.region
  cluster    = google_container_cluster.regional.name
  node_count = 1

  node_config {
    preemptible  = false
    machine_type = "n1-standard-1"
    labels       = local.labels
  }
}
Sorry, I'm not comfortable providing Terraform debug logs without first auditing them to remove private identifiers (e.g., the actual project ID).
n/a
Apply should not fail.

Terraform apply fails with (reproduced for both node pool and cluster):
Error: googleapi: Error 400: Applying kubernetes label is not allowed: app.kubernetes.io/name., badRequest
Error: error creating NodePool: googleapi: Error 400: Applying kubernetes label is not allowed: app.kubernetes.io/instance,applying kubernetes label is not allowed: app.kubernetes.io/managed-by,applying kubernetes label is not allowed: app.kubernetes.io/name,applying kubernetes label is not allowed: app.kubernetes.io/version., badRequest
terraform apply

Applying the same Kubernetes labels works via kubectl:
$ gcloud container clusters get-credentials repro --region us-central1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for repro.
$ kubectl get no
NAME STATUS ROLES AGE VERSION
gke-repro-repro-602d0a1f-ck8x Ready <none> 65s v1.14.10-gke.24
gke-repro-repro-b9d1d996-2g8b Ready <none> 65s v1.14.10-gke.24
gke-repro-repro-eacf75b3-z835 Ready <none> 66s v1.14.10-gke.24
$ kubectl label no gke-repro-repro-602d0a1f-ck8x app.kubernetes.io/name=repro
node/gke-repro-repro-602d0a1f-ck8x labeled
None that I have found are directly related.
@clebio The GKE API does not allow setting Kubernetes-namespaced labels on the cluster node config. kubectl talks to the Kubernetes API directly, which is why it was able to set those labels.
I am able to set custom-namespaced labels on the node config:
locals {
  resource_labels = {
    terraform = true
    app       = "repro"
  }
  labels = {
    "disable-legacy-endpoints" = "true"
    "app.testing.io/name"      = "repro"
  }
}
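For completeness, a minimal sketch of wiring that labels map into the node pool, reusing the resources from the repro above (only the label key namespaces differ from the failing config):

resource "google_container_node_pool" "poolset" {
  name       = "repro"
  location   = var.region
  cluster    = google_container_cluster.regional.name
  node_count = 1

  node_config {
    machine_type = "n1-standard-1"
    # Custom-namespaced keys such as app.testing.io/name are accepted;
    # kubernetes.io / k8s.io namespaced keys are rejected by the GKE API.
    labels = local.labels
  }
}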
Please let us know if it helps.
This should be documented on the google_container_cluster resource page, in that case.
labels - (Optional) The Kubernetes labels (key/value pairs) to be applied to each node.
https://www.terraform.io/docs/providers/google/r/container_cluster.html
Yes, we will update the doc.
The resource should document that the GKE API doesn't allow Kubernetes-namespaced (k8s.io, kubernetes.io) labels.
Confirmed in my reproduction code above that something like repro.io/name deploys fine. I'm ok closing this if/when the docs get updated. Thank you for that clarification! I had not tried using gcloud vs kubectl to add the labels.
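For anyone skimming, a minimal sketch of the labels map that deploys versus the one that is rejected, based on the repro above (repro.io/name is just an illustrative custom namespace):

locals {
  labels = {
    "disable-legacy-endpoints" = "true"
    # Accepted: custom namespace, not under kubernetes.io / k8s.io
    "repro.io/name" = "repro"
    # Rejected by the GKE API with 400 badRequest:
    # "app.kubernetes.io/name" = "repro"
  }
}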
I created a GKE feature request to allow node-role.kubernetes.io labels over on the Google issue tracker. Upvote and/or add your two cents if you would like to see this implemented:
https://issuetracker.google.com/issues/157973836
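Until something like that lands, one possible interim sketch (not a recommendation; it assumes kubectl is installed and authenticated against the cluster, and that your cluster permissions allow setting the label, as they did for app.kubernetes.io/name earlier in this thread) is to apply the label out of band from Terraform once the node pool exists:

# Hypothetical workaround sketch only: kubectl goes through the Kubernetes API,
# which accepts labels the GKE API rejects. Labels set this way are not tracked
# by Terraform and will be lost when nodes are recreated.
resource "null_resource" "k8s_node_labels" {
  triggers = {
    pool_id = google_container_node_pool.poolset.id
  }

  provisioner "local-exec" {
    # GKE labels each node with its pool name via cloud.google.com/gke-nodepool.
    command = "kubectl label nodes -l cloud.google.com/gke-nodepool=repro app.kubernetes.io/name=repro --overwrite"
  }
}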
Can't believe a doc update has taken 6 months and still hasn't been completed.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!