Terraform-provider-google: complete fresh cloudsql instance recreated

Created on 20 Nov 2018  ·  11 Comments  ·  Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.11.10
+ provider.google v1.19.1

Affected Resource(s)

  • google_sql_database_instance

Terraform Configuration Files

resource "google_sql_user" "capisqluser" {
  name     = "dbcapi"
  instance = "${google_sql_database_instance.capisql.name}"
  host     = ""
  password = "<<redacted>>"
  project  = "bolcom-pro-capi-af8"
}
resource "google_sql_database_instance" "capisql" {
  name = "capisql"
  database_version = "POSTGRES_9_6"
  region = "europe-west4"
  project = "bolcom-pro-capi-af8"

  settings {
    tier = "db-custom-2-4096"
    activation_policy = "ALWAYS"
    disk_autoresize = "true"
    disk_size = "20"
    disk_type = "PD_HDD"
    user_labels = {
      bol-opex = "team1e"
      bol-data-sensitivity = "low"
      pii_data = "false"
    }
    backup_configuration {
      enabled = "true"
      start_time = "03:00"
    }
    maintenance_window {
      day = "7"
      hour = "3"
      update_track = "stable"
    }
  }

  lifecycle {
    ignore_changes  = [
      "settings.0.disk_size"
    ]
  }
}

resource "google_sql_database" "capisql-capi" {
  name     = "capi"
  instance = "${google_sql_database_instance.capisql.name}"
  project  = "bolcom-pro-capi-af8"
}

provider "google" {
  credentials = "${file("/etc/gcp/cloudsql-manager")}"
  project     = "bolcom-pro-capi-af8"
  region      = "europe-west4"
}

The tier was changed from tier = "db-custom-1-4096" to tier = "db-custom-2-4096".
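
In settings-block form, the change that triggered the recreation (reconstructed from the sentence above; everything else was left untouched):

settings {
  tier = "db-custom-2-4096" # was "db-custom-1-4096"
}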

Debug Output

See the output at https://gist.github.com/wmuizelaar/e3103004112a0474b103f63400ff1480 for the plan + apply result.

Panic Output

N/A

Expected Behavior

The Cloud SQL instance would be restarted with the updated configuration.

Actual Behavior

The Cloud SQL instance capisql was deleted, and a new instance with the name terraform-20181120142422807600000001 was created.

Important Factoids

The tfstate-file and its backup can be found here:
https://gist.github.com/wmuizelaar/75cadaa126b8294ad90a898ef64dcb18

bug

All 11 comments

It's weird that the plan was to create the resource, but applying modified it.

But I suspect that the reason is

settings.0.disk_size:                         "25" => "20"

Is it possible to decrease disk size?

No, that shouldn't be possible. That's something I ran into earlier, which is why I added disk_size to the lifecycle ignore_changes. Normally it would also state that it's going to recreate things, but this wasn't the case.

My suspicion was that the refresh during the plan phase wasn't working perfectly for some reason. I have the (binary) "terraform.plan" plan file available as well, if that can help with debugging.

Would it be possible to get the debug output for the plan or apply that shows the diff? I think this may be a bug related to our diff customization overriding the lifecycle.ignore_changes.

I'm afraid I don't have that, because this happened to a database that the team needed available again. So I copied the whole directory, saved the output, deleted all the state and files as well as the generated database in GCP, then ran our tooling again to recreate the desired database, and afterwards filed this issue.

Would it be possible to generate this debug output based on the state files and the plan I have?

I've reproduced this. From what I can tell, the problem only occurs when another parameter from the settings block is changed together with the disk size.
When only disk_size is reduced, the change is ignored in accordance with the lifecycle ignore_changes block.
If (for example) the tier is changed as well, Terraform seems to ignore the lifecycle block and decides to create a new instance instead of updating in place.

mydb100.tf: https://gist.github.com/bastinda/1adf5af6e290df26d337536a050a3fd5

Changed the tier from db-custom-2-6144 to db-custom-4-6144 and disk_size from 15 to 10.

Terraform plan debug output: https://gist.github.com/bastinda/1d552a1dbdce669854cc127cc2992a46
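
Condensed, the repro pairs a genuine in-place change with a shrink of the ignored attribute; a sketch of the relevant lines, based on the description above rather than the gist itself:

settings {
  tier      = "db-custom-4-6144" # was "db-custom-2-6144": a real change
  disk_size = 10                 # was 15: listed in ignore_changes, yet still diffed
}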

Friendly ping for @paddycarver: can you check whether the debug output delivered by @bastinda is helpful?

I just got bitten by this as well when trying to upgrade my CloudSQL instance to be highly available. Luckily this only hit a staging instance, but it's definitely misleading, especially since nothing in the Terraform output mentions that the instance will be destroyed.

Since this can potentially cause data loss, I'd suggest that this bug should be marked as critical.
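
As a user-side guard against this failure mode, Terraform's prevent_destroy lifecycle flag makes any plan that would delete the instance fail hard instead of silently recreating it. A minimal sketch based on the reporter's configuration, assuming the HA upgrade mentioned above is the usual availability_type switch (neither attribute appears in the original report):

resource "google_sql_database_instance" "capisql" {
  name             = "capisql"
  database_version = "POSTGRES_9_6"
  region           = "europe-west4"
  project          = "bolcom-pro-capi-af8"

  settings {
    tier              = "db-custom-2-4096"
    # Assumed HA change: REGIONAL enables high availability on
    # second-generation instances.
    availability_type = "REGIONAL"
  }

  lifecycle {
    # Abort any plan that would destroy this instance, turning a
    # surprise recreation into a hard error rather than data loss.
    prevent_destroy = true
  }
}

With prevent_destroy set, the buggy destroy/create plan described in this issue would have failed at plan time instead of deleting the database.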

Looks like there are some weird interactions between CustomizeDiff and ignore_changes.

I'll try to reproduce on another resource and see what we can do to mitigate this.

I was able to reproduce this issue and similar issues around ignore_changes in older versions of terraform core. However, it seems that it correctly ignores changes to disk_size in terraform core version 0.12.5.

I would recommend anyone encountering this issue to upgrade their version of terraform core to 0.12.5+ to fix this.
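
For reference, on 0.12+ the ignore_changes entries are written as attribute references rather than strings, so the reporter's lifecycle block would look roughly like this after upgrading (a sketch, not taken from the issue):

lifecycle {
  ignore_changes = [
    settings[0].disk_size,
  ]
}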

I will investigate possible ways to either prevent destroy on disk shrink or warn about a recreation, but there aren't many options from the provider side.

Added a warning in https://github.com/GoogleCloudPlatform/magic-modules/pull/2405, but that's the best I can think to do.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
