Terraform-provider-google: Computed properties always show as changed and forces new resource

Created on 19 Mar 2019  ·  3 comments  ·  Source: hashicorp/terraform-provider-google

_This issue was originally opened by @bogdancondurache as hashicorp/terraform#20724. It was migrated here as a result of the provider split. The original body of the issue is below._


Terraform Version

Terraform v0.11.13

Terraform Configuration Files


The backend config:

terraform {
  backend "gcs" {
    bucket  = "..."
    prefix  = "terraform/state"
  }
}

The main config:

data "terraform_remote_state" "remote_state" {
  backend = "gcs"
  config = {
    bucket  = "..."
    prefix  = "terraform/state"
  }
}

resource "google_container_cluster" "gke-cluster" {
  name               = "main-cluster"
  network            = "default"
  zone               = "europe-west1-b"

  remove_default_node_pool = true
  initial_node_count = 1

  # Setting an empty username and password explicitly disables basic auth
  master_auth {
    username = ""
    password = ""
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "pool"
  zone       = "${google_container_cluster.gke-cluster.zone}"
  cluster    = "${google_container_cluster.gke-cluster.name}"
  node_count = 3

  node_config {
    preemptible  = true
    machine_type = "f1-micro"
  }
}

It's basically the one from the tutorial page.

Debug Output


Not applicable.

Crash Output


Not applicable.

Expected Behavior


After running terraform apply, a subsequent terraform plan should report no changes.

Actual Behavior


terraform plan shows many changes and wants to destroy the existing cluster and create a new one. Example diffs displayed by plan immediately after running apply:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ google_container_cluster.gke-cluster (new resource required)
      id:                                    "main-cluster" => <computed> (forces new resource)
...
      node_config.#:                         "0" => "1" (forces new resource)
      node_config.0.disk_size_gb:            "" => <computed>
      node_config.0.disk_type:               "" => <computed>
      node_config.0.guest_accelerator.#:     "" => <computed> (forces new resource)
      node_config.0.image_type:              "" => <computed>
      node_config.0.local_ssd_count:         "" => <computed>
      node_config.0.machine_type:            "" => <computed>
      node_config.0.oauth_scopes.#:          "" => "4" (forces new resource)
      node_config.0.preemptible:             "" => "false" (forces new resource)

Steps to Reproduce

  • terraform init
  • terraform apply
  • terraform plan

With the config files shown above.

Additional Context


I am running Terraform locally on Mac OS X. Initially the state was kept locally (I am not sure whether the problem occurred then; I believe it did not), and then I moved the state to GCS. The state seems to be updated properly (for example after terraform refresh), so there are no issues writing the state file. I took most of the config from the official documentation. I applied the changes four times (which always resulted in deleting the old cluster and creating a new one), and the problem still appears under exactly the same conditions.

References


Seems to be related to:

  • hashicorp/terraform#5233
  • hashicorp/terraform#20492

Labels: bug

All 3 comments

Ah, this is (somewhat) expected. The API makes no guarantees about this field and explicitly defined node_pool objects (whether inline in the cluster or separately defined) working together, and the Terraform docs were wrong. https://github.com/GoogleCloudPlatform/magic-modules/pull/1526 should correct them, and the updated documentation will appear when we make the next provider release.

Unfortunately, Terraform is unable to make cross-resource assertions, so we can't show an error or anything when using node_config along with a google_container_node_pool resource.
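In practical terms, the fix the maintainer describes amounts to removing the cluster-level node_config block when node pools are managed through separate google_container_node_pool resources. A minimal sketch of the corrected configuration, adapted from the report above (untested; the only change is dropping node_config from the cluster and carrying the oauth_scopes in the node pool instead):

```hcl
resource "google_container_cluster" "gke-cluster" {
  name    = "main-cluster"
  network = "default"
  zone    = "europe-west1-b"

  # The default node pool is created and immediately deleted, so any
  # cluster-level node_config is never persisted by the API — which is
  # what made every subsequent plan see a diff that forces replacement.
  remove_default_node_pool = true
  initial_node_count       = 1

  # Setting an empty username and password explicitly disables basic auth
  master_auth {
    username = ""
    password = ""
  }

  # No node_config here: node settings belong on the node pool resource.
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "pool"
  zone       = "${google_container_cluster.gke-cluster.zone}"
  cluster    = "${google_container_cluster.gke-cluster.name}"
  node_count = 3

  node_config {
    preemptible  = true
    machine_type = "f1-micro"

    # Scopes moved here from the cluster-level node_config.
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}
```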

@rileykarson Thank you very much! Problem solved.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
