Terraform-provider-google: Support labels for instances in node pool

Created on 12 Jul 2018 · 12 comments · Source: hashicorp/terraform-provider-google

Terraform Version

```
Terraform v0.11.7
+ provider.google v1.15.0
+ provider.kubernetes v1.1.0
```

Affected Resource(s)

  • google_container_node_pool

Terraform Configuration Files

resource "google_container_node_pool" "my_pool" {
  name       = "my-pool"
  cluster    = "my-cluster"
  node_count = "2"

  node_config {
    labels {
      "foo" = "bar"
    }
  }
}

We can specify Kubernetes labels for nodes in google_container_node_pool (using node_config.labels), but it's not possible to set labels for the instances themselves (similar to google_compute_instance.labels).
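For contrast, a minimal sketch of the GCP-level labeling being asked for, as it exists today on a standalone instance; all names and values are illustrative:

```
# GCP resource labels on a standalone instance: visible in billing exports
# and the Cloud Console. The issue asks for the same on node pool instances.
resource "google_compute_instance" "example" {
  name         = "example-vm"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }

  # These are GCP labels, not Kubernetes labels.
  labels = {
    team        = "platform"
    cost-center = "abc123"
  }
}
```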

Labels: enhancement, upstream

Most helpful comment

Not a stupid question! As @dwradcliffe mentions, Terraform operates at the level of the node pool, and doesn't see/think about/control the nodes in the pool--that's GKE's area of responsibility. Node pools can also scale at any time, which means the nodes in question may change without Terraform running, and any new nodes wouldn't have the labels on them.

All 12 comments

Google does not currently provide any way to do this. I have requested this feature but it has not been done yet.

It's probably a stupid question, but why isn't it possible if gcloud compute instances add-labels can do it?

Since the instances are managed by the node pool (and the underlying managed instance groups), you can't make changes directly to the instances (via the method you mention). Instead we must modify the node pool itself, and there's no API for setting resource labels on the node pool. 😞
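For a picture of why: outside of GKE, instance labels live on the instance template behind a managed instance group, and the group stamps them onto every VM it creates or recreates. GKE owns that template for each node pool, so there's nothing for Terraform to hook into. A minimal sketch of the non-GKE pattern, with hypothetical names:

```
# A sketch of the MIG pattern that GKE manages for you. Labels are set on
# the template, and every VM the group creates (or recreates) inherits them.
resource "google_compute_instance_template" "example" {
  name_prefix  = "labeled-"
  machine_type = "n1-standard-1"

  disk {
    source_image = "debian-cloud/debian-9"
    boot         = true
    auto_delete  = true
  }

  network_interface {
    network = "default"
  }

  labels = {
    team = "platform"
  }
}

resource "google_compute_instance_group_manager" "example" {
  name               = "example-mig"
  base_instance_name = "example"
  zone               = "us-central1-a"
  instance_template  = "${google_compute_instance_template.example.self_link}"
  target_size        = 2
}
```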

Not a stupid question! As @dwradcliffe mentions, Terraform operates at the level of the node pool, and doesn't see/think about/control the nodes in the pool--that's GKE's area of responsibility. Node pools can also scale at any time, which means the nodes in question may change without Terraform running, and any new nodes wouldn't have the labels on them.

Although the docs don't mention it, this seems to work for me in practice. I'm using:

```
Terraform v0.11.8
+ provider.google v1.19.0
```

It appears to support taints as well.

@ianrose14 can you share the code you used to set labels and taints that end up on the node instances?

Here's one example

resource "google_container_node_pool" "name" {
  name       = "my-pool"
  cluster    = "${google_container_cluster.primary.name}"
  zone       = "${var.primary_zone}"
  node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 4
  }

  node_config {
    labels = {
      "cluster"                     = "${var.cluster_node_label}"
      "k8s.fullstory.com/node-task" = "my-value"
    }

    machine_type = "n1-standard-4"
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
    preemptible  = "true"

    taint = {
      effect = "NO_SCHEDULE"
      key    = "fs-node-use"
      value  = "my-value"
    }
  }
}

If the goal is to label all the nodes/VMs for a particular cluster, I normally set it up under the google_container_cluster resource by defining the resource_labels argument. Not sure if you want to take this approach, but it works just like I wanted (custom labels on all my cluster nodes). Here's a piece of example code that I normally use:

```
resource "google_container_cluster" "primary" {
  name                     = "k8s-${terraform.workspace}-cluster"
  zone                     = "${var.region}-a"
  enable_legacy_abac       = false
  remove_default_node_pool = true

  node_pool {
    name = "default-pool"
  }

  resource_labels = "${var.resource_labels}"
}
```

Labeling for billing purposes:

```
variable "resource_labels" {
  default = {
    environment = "development"
    maintainer  = "[email protected]"
  }

  description = "Kubernetes cluster-wide resource labels"
}
```


This can probably be closed, right? labels under node_config works.
Applying resource labels at google_container_cluster does NOT work: it will not apply those labels to the GCE instances.

Well... I tested this just a while ago and I must say that adding the resource_labels argument to google_container_cluster DOES work for me. I was also skeptical at first and hesitated when I tried this method last year (although the TF docs describe it clearly here: https://www.terraform.io/docs/providers/google/r/container_cluster.html#resource_labels), but I went ahead anyway and gave it a test.
Long story short, I'm glad I took that route, since I eventually learned something about how google_container_cluster and that particular argument (resource_labels) work.

The very thing I learned was that the (cluster-wide) label changes are not instantaneous! It took roughly 30-45 minutes for the changes to propagate and be reflected on ALL our GCE instances (nodes). If you have several node pools and want to apply labels to all of them in one shot, this makes things easier (assuming you have the required level of patience).

So... perhaps you could give this another try (allowing some extra time for the changes to be fully reflected across all the nodes) and share the result here, @red8888? I'm sure your experience testing this technique will benefit us all.

I'm also posting some of the CLI output from the test, where I added 'maintainer=Martin' as a resource_labels entry in my TF vars file:

```
$ terraform -v
Terraform v0.12.1
+ provider.google v2.7.0
```

```
$ terraform plan
Terraform will perform the following actions:

  # module.gke.google_container_cluster.primary will be updated in-place
  ~ resource "google_container_cluster" "primary" {
        additional_zones         = []
        cluster_autoscaling      = []
        cluster_ipv4_cidr        = "10.20.0.0/14"
        ...
        master_version           = "1.12.7-gke.17"
        min_master_version       = "1.12.7-gke.17"
        monitoring_service       = "monitoring.googleapis.com"
        name                     = "k8s-dev-cluster"
        node_locations           = []
        node_version             = "1.12.7-gke.17"
        project                  = "xxx"
        remove_default_node_pool = true
      ~ resource_labels          = {
            "env"        = "staging"
          + "maintainer" = "martin"
            "resource"   = "gke"
        }
        ...
```





```
$ gcloud compute instances describe <one of your nodes> --zone=<the gke zone> | head --lines=+50
canIpForward: true
cpuPlatform: Intel Broadwell
creationTimestamp: '2019-06-11T05:45:50.194-07:00'
deletionProtection: false
disks:
...
id: 'xxx'
kind: compute#instance
labelFingerprint: ptUBrXARDBY=
labels:
  env: staging
  goog-gke-node: ''
  maintainer: martin    <<<<<<<<<<
  resource: gke
...
```

Lastly, I've tested this and the resource_labels argument worked as it should on both preemptible and non-preemptible instance types.

I'm curious to know why this is not supported yet with google_container_node_pool when a similar module, https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/9.1.0, supports this.

E.g. see the example at https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/v9.1.0/modules/beta-public-cluster

I would rather use the google_container_node_pool resource directly than that module, because it is much more flexible.

@talonx: Those are Kubernetes labels, as specified with node_config.labels, not GCP labels, which currently cannot be set on node pools.
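To make the distinction concrete, here is a side-by-side sketch of the two kinds of labels Terraform can set today; all names and values are illustrative:

```
resource "google_container_cluster" "primary" {
  name               = "example-cluster"
  zone               = "us-central1-a"
  initial_node_count = 1

  # GCP resource labels: cluster-wide. GKE propagates them to the
  # underlying GCE instances (with some delay) and they show up in billing.
  resource_labels = {
    env = "dev"
  }
}

resource "google_container_node_pool" "example" {
  name    = "example-pool"
  cluster = "${google_container_cluster.primary.name}"
  zone    = "us-central1-a"

  node_config {
    # Kubernetes node labels: visible to the scheduler and kubectl,
    # not to GCP billing. Set per node pool.
    labels = {
      "node-role" = "worker"
    }
  }
}
```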
