Terraform-provider-google: GKE cluster and node pools fail when using valid Kubernetes labels

Created on 28 Mar 2020  ·  7 comments  ·  Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave _+1_ or _me too_ comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.12.24
+ provider.google v3.14.0

Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Terraform Configuration Files

terraform {
  required_version = ">= 0.12.14"
}

provider "google" {
  project = "repro-deadbeef"
  region  = "us-west1"
}

variable "node_count" { default = 1 }
variable "region" { default = "us-central1" }
variable "machine_type" { default = "n1-standard-1" }

locals {
  resource_labels = {
    terraform = true
    app       = "repro"
  }

  labels = {
    "disable-legacy-endpoints" = "true"
    # This is the problem, uncomment the next line to fail the apply:
    # "app.kubernetes.io/name"       = "repro"
  }
}

resource "google_container_cluster" "regional" {
  name                     = "repro"
  remove_default_node_pool = true
  initial_node_count       = var.node_count
  resource_labels          = local.resource_labels
  location                 = var.region

  node_config {
    labels = local.labels
  }
}

resource "google_container_node_pool" "poolset" {
  name       = "repro"
  location   = var.region
  cluster    = google_container_cluster.regional.name
  node_count = 1

  node_config {
    preemptible  = false
    machine_type = "n1-standard-1"
    labels       = local.labels
  }
}

Debug Output


Sorry, I'm not comfortable providing Terraform debug logs without first auditing them to remove private identifiers (e.g., the actual project ID).

Panic Output

n/a

Expected Behavior

  1. Kubernetes labels should be applied successfully.
  2. Terraform apply should not fail.

Actual Behavior

Terraform apply fails with (reproduced for both node pool and cluster):

  • Cluster:
Error: googleapi: Error 400: Applying kubernetes label is not allowed: app.kubernetes.io/name., badRequest
  • Node pool:
Error: error creating NodePool: googleapi: Error 400: Applying kubernetes label is not allowed: app.kubernetes.io/instance,applying kubernetes label is not allowed: app.kubernetes.io/managed-by,applying kubernetes label is not allowed: app.kubernetes.io/name,applying kubernetes label is not allowed: app.kubernetes.io/version., badRequest

Steps to Reproduce

  1. Uncomment the app.kubernetes.io/name line in locals.labels above.
  2. terraform apply

Important Factoids

Applying the same Kubernetes labels works via kubectl:

$ gcloud container clusters get-credentials repro --region us-central1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for repro.

$ kubectl get no
NAME                            STATUS   ROLES    AGE   VERSION
gke-repro-repro-602d0a1f-ck8x   Ready    <none>   65s   v1.14.10-gke.24
gke-repro-repro-b9d1d996-2g8b   Ready    <none>   65s   v1.14.10-gke.24
gke-repro-repro-eacf75b3-z835   Ready    <none>   66s   v1.14.10-gke.24

$ kubectl label no gke-repro-repro-602d0a1f-ck8x app.kubernetes.io/name=repro
node/gke-repro-repro-602d0a1f-ck8x labeled
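
Since the Kubernetes API accepts these labels, one hypothetical way to fold the kubectl step into the Terraform run itself is a null_resource with a local-exec provisioner. This is only a sketch, assuming gcloud and kubectl are on PATH and reusing the names from the repro config above:

resource "null_resource" "kubernetes_io_labels" {
  depends_on = [google_container_node_pool.poolset]

  provisioner "local-exec" {
    command = <<-EOT
      gcloud container clusters get-credentials repro --region ${var.region}
      # These labels go through the Kubernetes API instead of the GKE API;
      # note they do not survive node recreation (autoscaling, upgrades, repairs).
      kubectl label nodes --all --overwrite app.kubernetes.io/name=repro
    EOT
  }
}

Because labels applied this way live only on the current set of nodes, it is a stopgap at best.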

References

None that I have found are directly related.

Labels: documentation, size/XS


All 7 comments

@clebio The GKE API does not allow setting labels in the reserved Kubernetes namespaces (kubernetes.io, k8s.io) through the cluster's node_config. kubectl talks to the Kubernetes API directly, which is why it was able to set those labels.

I am able to set labels in a custom namespace for the node config:

locals {
  resource_labels = {
    terraform = true
    app       = "repro"
  }

  labels = {
    "disable-legacy-endpoints" = "true"
      "app.testing.io/name" = "repro"
  }
}

Please let us know if this helps.
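
For anyone who wants to catch this at plan time rather than at apply, a minimal guard could look like the following sketch. It assumes Terraform >= 0.14 (for alltrue) and a hypothetical node_labels variable feeding node_config:

variable "node_labels" {
  type = map(string)
  default = {
    "disable-legacy-endpoints" = "true"
  }

  validation {
    # Reject keys in the reserved kubernetes.io / k8s.io namespaces up front,
    # instead of waiting for the GKE API to fail the apply with a 400.
    condition = alltrue([
      for key in keys(var.node_labels) :
      !can(regex("(^|\\.)(kubernetes\\.io|k8s\\.io)/", key))
    ])
    error_message = "The GKE API rejects node labels in the kubernetes.io and k8s.io namespaces."
  }
}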

This should be documented on the google_container_cluster resource page, in that case.

labels - (Optional) The Kubernetes labels (key/value pairs) to be applied to each node.
https://www.terraform.io/docs/providers/google/r/container_cluster.html

Yes, we will update the docs.

The resource should document that the GKE API doesn't allow labels in the reserved Kubernetes namespaces (k8s.io, kubernetes.io).

Confirmed with my reproduction code above that something like repro.io/name deploys fine. I'm OK with closing this if/when the docs get updated. Thank you for that clarification! I had not tried using gcloud vs. kubectl to add the labels.

I created a GKE feature request to allow node-role.kubernetes.io labels over on the Google issue tracker. Up vote and/or add your two cents if you would like to see this implemented:
https://issuetracker.google.com/issues/157973836

Can't believe a doc update has taken 6 months and still isn't done.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
