Terraform-provider-google: google_container_cluster network information keeps being updated on subsequent identical terraform plan

Created on 30 May 2018  ·  8 comments  ·  Source: hashicorp/terraform-provider-google

### Terraform Version

```
$ terraform -v
Terraform v0.11.7

+ provider.google v1.13.
```

### Affected Resource(s)
- google_container_cluster

### Terraform Configuration Files
```hcl
resource "google_compute_network" "vpc" {
  name                    = "test-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "vpc_subnet" {
  name          = "test-subnet"
  network       = "${google_compute_network.vpc.name}"
  ip_cidr_range = "192.168.14.0/24"
}

resource "google_container_cluster" "primary" {
  name = "test-cluster"

  network    = "${google_compute_network.vpc.name}"
  subnetwork = "${google_compute_subnetwork.vpc_subnet.name}"

  remove_default_node_pool = true

  node_pool = {
    name = "default-pool"
  }

  lifecycle = {
    ignore_changes = ["node_pool"]
  }
}
```

### Expected Behavior

With no changes to the configuration, a subsequent `terraform plan` should report "No changes. Infrastructure is up-to-date."

### Actual Behavior

```
$ terraform plan
Terraform will perform the following actions:

~ google_container_cluster.primary
    network:    "projects/project-id/global/networks/test-network" => "test-network"
    subnetwork: "projects/project-id/regions/europe-west1/subnetworks/test-subnet" => "test-subnet"

Plan: 0 to add, 1 to change, 0 to destroy.
```
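The diff above shows that the value stored in state is the fully-qualified self-link URL, while the configuration supplies the short name. As a general way to avoid name-vs-URL diffs, the `self_link` attributes can be passed instead — a sketch of my own, not from this thread, and it may not help here if the root cause really is in Terraform core:

```hcl
resource "google_container_cluster" "primary" {
  name = "test-cluster"

  # Hypothetical variant: use fully-qualified self links so the configured
  # values have the same form as the values Terraform stores in state.
  network    = "${google_compute_network.vpc.self_link}"
  subnetwork = "${google_compute_subnetwork.vpc_subnet.self_link}"
}
```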

### Steps to Reproduce

  1. `terraform init`
  2. `terraform apply`
  3. `terraform plan`

### Potential Culprit

`remove_default_node_pool` seems to be the culprit: without this parameter, subsequent plans are clean.
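If the perpetual diff needs to be silenced before a proper fix lands, one option — a workaround sketch of my own, which may be ineffective if the core `ignore_changes` bug is the real cause — is to extend the `lifecycle` block to ignore the attributes that keep diffing:

```hcl
resource "google_container_cluster" "primary" {
  name = "test-cluster"

  network    = "${google_compute_network.vpc.name}"
  subnetwork = "${google_compute_subnetwork.vpc_subnet.name}"

  remove_default_node_pool = true

  node_pool = {
    name = "default-pool"
  }

  lifecycle = {
    # Workaround sketch: also ignore the attributes that keep showing diffs.
    # Note this masks genuine changes to network/subnetwork as well.
    ignore_changes = ["node_pool", "network", "subnetwork"]
  }
}
```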

Labels: bug, upstream-terraform

### Most helpful comment

Hey all, just wanted to pop in here to say that you've been heard. I've spent a fair bit of time trying to debug #988, which is almost certainly the same root cause, and all I've been able to come up with is that I _think_ it's a bug in Terraform core. I'd still love to keep working on it and trying to find a fix, but it might take a while. Hang tight!

### All 8 comments

I'm observing the same behavior.

I encounter the same error with my config (similar to the author's config).

Looks a lot like #988

I did further tests, and `remove_default_node_pool` may not be the culprit after all. I'm wondering whether the issue has something to do with node pools... or with something else, as @pdecat commented.

The following configuration also keeps updating network information on subsequent `terraform plan`:

```hcl
resource "google_compute_network" "vpc" {
  name                    = "${var.vpc_name}"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "vpc_subnet" {
  name          = "${var.vpc_subnet_name}"
  network       = "${google_compute_network.vpc.name}"
  ip_cidr_range = "${var.vpc_ip_cidr_range}"
}

resource "google_container_cluster" "primary" {
  name = "${var.cluster_name}"

  network    = "${google_compute_network.vpc.name}"
  subnetwork = "${google_compute_subnetwork.vpc_subnet.name}"

  node_pool = {
    name = "default-pool"
  }
  lifecycle = {
    ignore_changes = ["node_pool"]
  }
}

resource "google_container_node_pool" "np" {
  name = "second-pool"
  cluster = "${google_container_cluster.primary.name}"
  node_count = 1
}
```

The following configuration works as expected (no network information is updated):

```hcl
resource "google_compute_network" "vpc" {
  name                    = "${var.vpc_name}"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "vpc_subnet" {
  name          = "${var.vpc_subnet_name}"
  network       = "${google_compute_network.vpc.name}"
  ip_cidr_range = "${var.vpc_ip_cidr_range}"
}

resource "google_container_cluster" "primary" {
  name = "${var.cluster_name}"

  network    = "${google_compute_network.vpc.name}"
  subnetwork = "${google_compute_subnetwork.vpc_subnet.name}"

  initial_node_count = 1
}

resource "google_container_node_pool" "np" {
  name = "second-pool"
  cluster = "${google_container_cluster.primary.name}"
  node_count = 1
}
```

In the former case, the `google_container_cluster` resource has a `node_pool` argument and the issue shows up; in the latter case, it has `initial_node_count` instead and the issue doesn't.
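Based on these observations, a possible restructuring — my sketch, not from this thread — is to avoid the inline `node_pool` block entirely: create the cluster with a throwaway default pool, drop it with `remove_default_node_pool`, and manage real pools only through `google_container_node_pool`, so no `ignore_changes` is needed:

```hcl
resource "google_container_cluster" "primary" {
  name = "${var.cluster_name}"

  network    = "${google_compute_network.vpc.name}"
  subnetwork = "${google_compute_subnetwork.vpc_subnet.name}"

  # Create with a throwaway default pool, then remove it, instead of
  # declaring an inline node_pool block that keeps diffing.
  initial_node_count       = 1
  remove_default_node_pool = true
}

resource "google_container_node_pool" "np" {
  name       = "second-pool"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 1
}
```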

Hey all, just wanted to pop in here to say that you've been heard. I've spent a fair bit of time trying to debug #988, which is almost certainly the same root cause, and all I've been able to come up with is that I _think_ it's a bug in Terraform core. I'd still love to keep working on it and trying to find a fix, but it might take a while. Hang tight!

Closing, since the upstream bug is fixed in HEAD.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
