Terraform-provider-google: master_global_access_config repeatedly shows up in Terraform plan

Created on 17 Nov 2020 · 2 comments · Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave _+1_ or _me too_ comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

$ terraform -v
Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/google v3.47.0
+ provider registry.terraform.io/hashicorp/google-beta v3.47.0

Affected Resource(s)

  • google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "test-gke-cluster" {
  provider                 = google-beta
  name                     = "test-gke-cluster"
  location                 = "us-east1"
  remove_default_node_pool = true
  initial_node_count       = 1
  network                  = google_compute_network.test-vpc.name
  subnetwork               = google_compute_subnetwork.test-subnet.name
  networking_mode          = "VPC_NATIVE"
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = "10.2.0.0/16"
    services_ipv4_cidr_block = "10.3.0.0/16"
  }
  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block = "10.1.0.0/16"
    }
  }
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = "10.4.0.0/28"
    master_global_access_config {
      enabled = false
    }
  }
}
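
Note that the configuration sets `master_global_access_config { enabled = false }` explicitly, even though `false` is the default. A possible workaround (my assumption, not confirmed in this thread) is to omit the block entirely, so that neither the configuration nor the refreshed state contains it:

```hcl
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = "10.4.0.0/28"
    # master_global_access_config omitted: `enabled = false` matches the
    # default, and (per the comment below) the API does not echo the block
    # back when it is disabled, which appears to cause the permanent diff.
  }
```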

Expected Behavior

After running terraform apply, subsequent terraform plan runs should show "No changes. Infrastructure is up-to-date."

Actual Behavior

Subsequent terraform plan runs continue to show a change to master_global_access_config:

  # google_container_cluster.test-gke-cluster will be updated in-place
  ~ resource "google_container_cluster" "test-gke-cluster" {
        cluster_ipv4_cidr           = "10.2.0.0/16"
        default_max_pods_per_node   = 110
        enable_binary_authorization = false
        enable_intranode_visibility = false
        enable_kubernetes_alpha     = false
        enable_legacy_abac          = false
        enable_shielded_nodes       = false
        enable_tpu                  = false
        endpoint                    = "10.4.0.2"
        id                          = "projects/<REMOVED>/locations/us-east1/clusters/test-gke-cluster"
        initial_node_count          = 1
        instance_group_urls         = []
        label_fingerprint           = "a9dc16a7"
        location                    = "us-east1"
        logging_service             = "logging.googleapis.com/kubernetes"
        master_version              = "1.16.13-gke.401"
        monitoring_service          = "monitoring.googleapis.com/kubernetes"
        name                        = "test-gke-cluster"
        network                     = "projects/<REMOVED>/global/networks/test-vpc"
        networking_mode             = "VPC_NATIVE"
        node_locations              = [
            "us-east1-b",
            "us-east1-c",
            "us-east1-d",
        ]
        node_version                = "1.16.13-gke.401"
        project                     = "<REMOVED>"
        remove_default_node_pool    = true
        resource_labels             = {}
        self_link                   = "https://container.googleapis.com/v1beta1/projects/<REMOVED>/locations/us-east1/clusters/test-gke-cluster"
        services_ipv4_cidr          = "10.3.0.0/16"
        subnetwork                  = "projects/<REMOVED>/regions/us-east1/subnetworks/test-subnet"

        addons_config {

            network_policy_config {
                disabled = true
            }
        }

        cluster_autoscaling {
            autoscaling_profile = "BALANCED"
            enabled             = false
        }

        cluster_telemetry {
            type = "ENABLED"
        }

        database_encryption {
            state = "DECRYPTED"
        }

        default_snat_status {
            disabled = false
        }

        ip_allocation_policy {
            cluster_ipv4_cidr_block       = "10.2.0.0/16"
            cluster_secondary_range_name  = "gke-test-gke-cluster-pods-67809078"
            services_ipv4_cidr_block      = "10.3.0.0/16"
            services_secondary_range_name = "gke-test-gke-cluster-services-67809078"
        }

        master_auth {
            cluster_ca_certificate = "<REDACTED>"

            client_certificate_config {
                issue_client_certificate = false
            }
        }

        master_authorized_networks_config {
            cidr_blocks {
                cidr_block = "10.1.0.0/16"
            }
        }

        network_policy {
            enabled  = false
            provider = "PROVIDER_UNSPECIFIED"
        }

        notification_config {
            pubsub {
                enabled = false
            }
        }

        pod_security_policy_config {
            enabled = false
        }

      ~ private_cluster_config {
            enable_private_endpoint = true
            enable_private_nodes    = true
            master_ipv4_cidr_block  = "10.4.0.0/28"
            peering_name            = "<REMOVED>"
            private_endpoint        = "10.4.0.2"
            public_endpoint         = "<REMOVED>"

          + master_global_access_config {
              + enabled = false
            }
        }

        release_channel {
            channel = "UNSPECIFIED"
        }
    }

Steps to Reproduce

  1. terraform apply
  2. terraform plan
  3. See "1 to change" in the new plan.
  4. Go to step 2.

References

  • I believe this may be the same issue, but reported against the wrong project.

Most helpful comment

Temp workaround for me was to ignore its changes.

  lifecycle {
    ignore_changes = [private_cluster_config[0].master_global_access_config]
  }
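
For context, the `lifecycle` block goes inside the cluster resource itself (resource name taken from the configuration above):

```hcl
resource "google_container_cluster" "test-gke-cluster" {
  # ... existing arguments from the configuration above ...

  # Suppress the permanent diff on the block the API does not echo back.
  lifecycle {
    ignore_changes = [private_cluster_config[0].master_global_access_config]
  }
}
```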

All 2 comments

Interesting: masterGlobalAccessConfig was sent in the request but not included in the response.

https://paste.googleplex.com/5481982390697984

Temp workaround for me was to ignore its changes.

  lifecycle {
    ignore_changes = [private_cluster_config[0].master_global_access_config]
  }