Terraform-provider-google: google_container_node_pool in "forces new resource" loop

Created on 27 May 2019  ·  3 Comments  ·  Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.11.13

  • provider.google v2.7.0
  • provider.random v2.1.2

Affected Resource(s)

  • google_container_node_pool

Terraform Configuration Files

resource "google_container_cluster" "production-002" {
  name                     = "production-002"
  location                 = "europe-west1"
  project                  = "${google_project.production-001.id}"
  initial_node_count       = 3
  remove_default_node_pool = true

  logging_service    = "none"
  monitoring_service = "none"

  maintenance_policy {
    daily_maintenance_window {
      start_time = "03:00"
    }
  }
}

resource "google_container_node_pool" "support-1" {
  name     = "support-1"
  cluster  = "${google_container_cluster.production-002.name}"
  project  = "${google_project.production-001.id}"
  location = "${google_container_cluster.production-002.location}"

  node_count = 1

  management {
    auto_upgrade = false
    auto_repair  = false
  }

  node_config {
    machine_type = "n1-highmem-2"
    disk_size_gb = 80
    disk_type    = "pd-ssd"
    image_type   = "COS"

    labels {
      workload = "support"
    }

    metadata {
      tier = "logic"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

Debug Output

-/+ google_container_node_pool.support-1 (new resource required)
      id:                                              "europe-west1/production-002/support-1" => <computed> (forces new resource)
      cluster:                                         "production-002" => "production-002"
      initial_node_count:                              "1" => <computed>
      instance_group_urls.#:                           "3" => <computed>
      location:                                        "europe-west1" => "europe-west1"
      management.#:                                    "1" => "1"
      management.0.auto_repair:                        "false" => "false"
      management.0.auto_upgrade:                       "false" => "false"
      max_pods_per_node:                               "" => <computed>
      name:                                            "support-1" => "support-1"
      name_prefix:                                     "" => <computed>
      node_config.#:                                   "1" => "1"
      node_config.0.disk_size_gb:                      "80" => "80"
      node_config.0.disk_type:                         "pd-ssd" => "pd-ssd"
      node_config.0.guest_accelerator.#:               "0" => <computed>
      node_config.0.image_type:                        "COS" => "COS"
      node_config.0.labels.%:                          "1" => "1"
      node_config.0.labels.workload:                   "support" => "support"
      node_config.0.local_ssd_count:                   "0" => <computed>
      node_config.0.machine_type:                      "n1-highmem-2" => "n1-highmem-2"
      node_config.0.metadata.%:                        "2" => "1" (forces new resource)
      node_config.0.metadata.disable-legacy-endpoints: "true" => "" (forces new resource)
      node_config.0.metadata.tier:                     "logic" => "logic"
      node_config.0.oauth_scopes.#:                    "4" => "4"
      node_config.0.oauth_scopes.1277378754:           "https://www.googleapis.com/auth/monitoring" => "https://www.googleapis.com/auth/monitoring"
      node_config.0.oauth_scopes.1632638332:           "https://www.googleapis.com/auth/devstorage.read_only" => "https://www.googleapis.com/auth/devstorage.read_only"
      node_config.0.oauth_scopes.172152165:            "https://www.googleapis.com/auth/logging.write" => "https://www.googleapis.com/auth/logging.write"
      node_config.0.oauth_scopes.299962681:            "https://www.googleapis.com/auth/compute" => "https://www.googleapis.com/auth/compute"
      node_config.0.preemptible:                       "false" => "false"
      node_config.0.service_account:                   "default" => <computed>
      node_count:                                      "1" => "1"
      project:                                         "prod-312kjl13" => "prod-312kjl13"
      region:                                          "europe-west1" => <computed>
      version:                                         "1.13.6-gke.0" => <computed>
      zone:                                            "" => <computed>

The attributes forcing the new resource are:

      id:                                              "europe-west1/production-002/support-1" => <computed> (forces new resource)
      node_config.0.metadata.%:                        "2" => "1" (forces new resource)
      node_config.0.metadata.disable-legacy-endpoints: "true" => "" (forces new resource)

Expected Behavior

No changes.

Actual Behavior

Terraform forces a new node pool on every apply.

Steps to Reproduce

  1. terraform apply
Labels: bug


All 3 comments

What if you add

  node_config {
    metadata {
      disable-legacy-endpoints = "true"
    }
  }

to your config?

From the docs on google_container_cluster:

From GKE 1.12 onwards, disable-legacy-endpoints is set to true by the API; if metadata is set but that default value is not included, Terraform will attempt to unset the value. To avoid this, set the value in your config.
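
Applied to the pool above, that would look like the following (a minimal sketch of just the changed node_config block; the rest of the resource stays as posted). The key point is that the API-set default sits alongside the user's own metadata key:

resource "google_container_node_pool" "support-1" {
  # ... name, cluster, project, location, node_count, management as before ...

  node_config {
    machine_type = "n1-highmem-2"
    disk_size_gb = 80
    disk_type    = "pd-ssd"
    image_type   = "COS"

    labels {
      workload = "support"
    }

    metadata {
      # Pin the value GKE 1.12+ sets by default, so Terraform
      # no longer plans to unset it (which forced replacement).
      disable-legacy-endpoints = "true"
      tier                     = "logic"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

With that in place, the metadata map in the config should match what the API reports ("2" => "2" instead of "2" => "1"), so neither metadata line forces a new resource and the plan should come back clean.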

That fixed it! :-)

TYVM ❤️

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
