Terraform-provider-google: google_bigtable_instance force replacement of development instance_type

Created on 25 Jan 2020  ·  13 Comments  ·  Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

terraform -v
Terraform v0.12.20
+ provider.google v2.17.0
+ provider.google-beta v2.14.0

Affected Resource(s)

  • google_bigtable_instance

Terraform Configuration Files

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}

Debug Output


https://gist.github.com/cynful/442a4274bfe06a295302f7966311bfd1

Panic Output

Expected Behavior


There should be no changes in the plan; this should be a no-op.
Nothing in the resource was changed between creation and re-plan.

Actual Behavior


The plan does not recognize the state of the DEVELOPMENT instance's num_nodes (which is 1 when the instance is created via the console)
when it is left unset in Terraform, as per the documentation.
However, validation forces an update of num_nodes to at least 3.

Steps to Reproduce


Create a google_bigtable_instance with instance_type DEVELOPMENT and leave the number of nodes unset (see the command sketch after these steps).

  1. terraform apply
  2. terraform plan
    The plan will try to make changes right after the apply, which should not happen.
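
A minimal repro sequence might look like this (a sketch; saving the config as main.tf in an empty working directory is an assumption):

# save the DEVELOPMENT config above as main.tf, then:
terraform init
terraform apply   # creates the instance; num_nodes is left unset
terraform plan    # expected: no changes; actual: a spurious diff on the instance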

Important Factoids

References

https://github.com/GoogleCloudPlatform/magic-modules/pull/679

bug

Most helpful comment

Thank you so so much @danawillow -- this was really killing us.

Version 2.20.2 Published 5 hours ago

All 13 comments

@cynful as long as num_nodes is unset, you don't get any error message about num_nodes.
Can you please confirm that, when you change the instance_type from PRODUCTION to DEVELOPMENT, you also remove the num_nodes entry from the config?
The config below created a DEVELOPMENT cluster successfully.

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}
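
For contrast, a PRODUCTION instance carries num_nodes inside the cluster block; this is a sketch with illustrative names, and that num_nodes line is exactly what has to be removed when switching the instance_type to DEVELOPMENT:

resource "google_bigtable_instance" "production-instance" {
  name          = "tf-instance-prod"
  instance_type = "PRODUCTION"

  cluster {
    cluster_id   = "tf-instance-prod-cluster"
    zone         = "us-central1-b"
    num_nodes    = 3    # validation requires at least 3 nodes for PRODUCTION; remove for DEVELOPMENT
    storage_type = "HDD"
  }
}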

@venkykuberan I did not re-assign it to PRODUCTION.
I created a DEVELOPMENT instance, and I would like to keep it as is.
However, when you run a terraform plan against the same state, you get a plan that suggests you're changing instance types and that the number of nodes needs to be increased.

We've done this two different ways, and we've tested without num_nodes set.

We see different behavior with the 2.17 provider (in which case we hadn't changed anything on our end other than updating from Terraform 0.12.19 to 0.12.20, and things had been stable for a while, so maybe an API change on Google's end?). With 2.17, it wanted to change the instance from development to production. With 2.20.1, even if we removed the state item and re-imported the instance, it would want to change the number of nodes from 1 -> 0 (maybe related to https://github.com/terraform-providers/terraform-provider-google/commit/2cee1932d0a7019033e1046f30afae2e7f721738, according to @rileykarson), but then not actually be able to do this:

2020/01/24 11:04:51 [DEBUG] module.bigtable-instance-foo.google_bigtable_instance.this: applying the planned Update change
2020/01/24 11:04:57 [DEBUG] module.bigtable-instance-foo.google_bigtable_instance.this: apply errored, but we're indicating that via the Error pointer rather than returning it: Error updating cluster search for instance foo
2020/01/24 11:04:57 [ERROR] module.bigtable-instance-foo: eval: *terraform.EvalApplyPost, err: Error updating cluster search for instance foo
2020/01/24 11:04:57 [ERROR] module.bigtable-instance-foo: eval: *terraform.EvalSequence, err: Error updating cluster search for instance foo

creating a new instance (with 2.20.1):

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_bigtable_instance.development-instance will be created
  + resource "google_bigtable_instance" "development-instance" {
      + cluster_id    = (known after apply)
      + display_name  = (known after apply)
      + id            = (known after apply)
      + instance_type = "DEVELOPMENT"
      + name          = "tf-instance"
      + num_nodes     = (known after apply)
      + project       = (known after apply)
      + storage_type  = (known after apply)
      + zone          = (known after apply)

      + cluster {
          + cluster_id   = "tf-instance-cluster"
          + storage_type = "HDD"
          + zone         = "us-central1-b"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
% tf state show google_bigtable_instance.development-instance
# google_bigtable_instance.development-instance:
resource "google_bigtable_instance" "development-instance" {
    display_name  = "tf-instance"
    id            = "tf-instance"
    instance_type = "DEVELOPMENT"
    name          = "tf-instance"
    project       = "foo"

    cluster {
        cluster_id   = "tf-instance-cluster"
        num_nodes    = 1
        storage_type = "HDD"
        zone         = "us-central1-b"
    }
}

However, the next plan shows:

  # google_bigtable_instance.development-instance will be updated in-place
  ~ resource "google_bigtable_instance" "development-instance" {
        display_name  = "tf-instance"
        id            = "tf-instance"
        instance_type = "DEVELOPMENT"
        name          = "tf-instance"
        project       = "foo"

      ~ cluster {
            cluster_id   = "tf-instance-cluster"
          ~ num_nodes    = 1 -> 0
            storage_type = "HDD"
            zone         = "us-central1-b"
        }
    }
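
Until a provider-side fix lands, one possible stopgap (a sketch only, not something confirmed in this thread as safe) is to have Terraform ignore drift on the cluster block so the spurious 1 -> 0 diff never gets applied:

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }

  # Stopgap only: ignores all drift on the cluster block, including num_nodes.
  lifecycle {
    ignore_changes = [cluster]
  }
}

This is coarse (it also hides legitimate cluster changes), so it should be removed once a fixed provider version is in use.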

Same here:

Terraform will perform the following actions:

  # google_bigtable_instance.hddbta-prd-zone will be updated in-place
  ~ resource "google_bigtable_instance" "hddbta-prd-zone" {
        display_name  = "hddbta-prd-zone"
        id            = "hddbta-prd-zone"
        instance_type = "DEVELOPMENT"
        name          = "hddbta-prd-zone"
        project       = "prd"

      ~ cluster {
            cluster_id   = "hddbta-cluster-west1b"
          ~ num_nodes    = 1 -> 0
            storage_type = "HDD"
            zone         = "us-west1-b"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
terraform version
Terraform v0.12.20
+ provider.aws v2.46.0
+ provider.google v2.20.1
+ provider.google-beta v2.20.1
+ provider.helm v0.10.4
+ provider.kubernetes v1.10.0
+ provider.local v1.4.0
+ provider.template v2.1.2

This was/is already a DEVELOPMENT instance.

Also coming here for the same reason ... an annoying regression.

This was indeed caused by an API change, not a provider change. I'm going to work on a fix on our side now so that Terraform still behaves correctly. In the meantime, if you're affected by this, please make sure you've upgraded to the latest version of the Google provider so that you get the fix once it appears in a release.
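
For example, a version constraint along these lines (a sketch; pin to whichever release actually carries the fix once it ships):

provider "google" {
  version = "~> 2.20"
}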

Wow, thanks for the quick fix @danawillow ! How long does it take for this merged fix to be available to us?

If all goes according to plan, it'll be released on Monday as part of 3.7.0.
Some more potential good news: the fix merged cleanly into the 2.X series, and since the resource is effectively unusable without the fix for affected DEVELOPMENT instances, we might be doing a 2.20.2 release with it as well.

@danawillow it would be really, really appreciated if we could get this into 2.20.2; we reported the issue originally, and we won't be jumping to 3.x for a bit longer.

Thank you so so much @danawillow -- this was really killing us.

Version 2.20.2 Published 5 hours ago

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
