I just changed the number of nodes and this is the diff that is produced:
```
-/+ google_bigtable_instance.prod-instance (new resource required)
      id:            "us-central1-prod" => <computed> (forces new resource)
      cluster_id:    "us-central1-prod-cluster" => "us-central1-prod-cluster"
      display_name:  "us-central1-prod" => <computed>
      instance_type: "PRODUCTION" => "PRODUCTION"
      name:          "us-central1-prod" => "us-central1-prod"
      num_nodes:     "3" => "4" (forces new resource)
      project:       "grafanalabs-global" => <computed>
      storage_type:  "SSD" => "SSD"
      zone:          "us-central1-f" => "us-central1-f"
```
This is dangerous, so for now we're scaling the cluster manually and adding a lifecycle block with prevent_destroy (sketched below). You should be able to resize solely via Terraform.
```
$ terraform -v
Terraform v0.11.10
+ provider.google v1.19.1
```
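For anyone else hitting this, a minimal sketch of the prevent_destroy guard described above, using the values from the plan output. With this lifecycle block, any plan that would replace the instance fails instead of destroying it:

```hcl
resource "google_bigtable_instance" "prod-instance" {
  name          = "us-central1-prod"
  cluster_id    = "us-central1-prod-cluster"
  zone          = "us-central1-f"
  num_nodes     = 3
  instance_type = "PRODUCTION"
  storage_type  = "SSD"

  # A plan that would replace this resource (e.g. changing num_nodes,
  # which currently forces a new resource) now errors out instead of
  # destroying the instance and its data.
  lifecycle {
    prevent_destroy = true
  }
}
```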
You're right - you should be able to. Let me make sure the API supports that, and I'll see what I can do.
Shoot. The SDK, https://github.com/GoogleCloudPlatform/google-cloud-go/tree/master/bigtable, doesn't support the API's update method. We'd have to convert the whole resource to use the REST API directly. That'd be part of the Magic Modules conversion effort - it will happen someday - but it's not likely in the very short term. :/
Blocked on https://github.com/googleapis/google-api-go-client/issues/300
The current API client we're using doesn't let us GET the number of nodes or perform an update. I'm trying to get a REST client generated for Bigtable, and once that's done we'll be able to convert to that client by hand or autogenerate the resource with Magic Modules. I don't have a timeline on either, unfortunately.
@rileykarson any update on this?
Unfortunately not! I'm trying to get the client generated but the team that's supposed to perform the next step hasn't done it yet. I'll ping them again.
Oh wait- it looks like it appeared sometime in the last few weeks. 🎉
https://github.com/googleapis/google-api-go-client/tree/master/bigtableadmin/v2
@ndmckinley — can you please also file a bug on https://github.com/googleapis/google-cloud-go/tree/master/bigtable with the details on what resource objects need to have a proper Update method?
We'd rather integrations use the client library SDK than the auto-generated methods in googleapis/google-api-go-client. The SDK should be feature-complete, and if it's missing something, there should be a feature request for it.
Thanks!
Hey @mbrukman - the next step for us is to use Magic Modules to generate our Bigtable integrations, and MM currently requires the autogenerated REST client, so we need to use that. With that client generated, we're no longer blocked on features in the google-cloud-go client. (And removing MM's requirement to use the autogen client would likely take about the same effort as switching to google-cloud-go instead of the autogen client.)
Generating Bigtable support with Magic Modules is now possible, and that's the next step forward for adding update support for clusters. The issue is that the Bigtable API (whether REST or gRPC) doesn't map especially well to how we expect GCP resources to be shaped.
Namely, we're required to provide clusters embedded in the instance resource at creation time (https://cloud.google.com/bigtable/docs/reference/admin/rest/v2/projects.instances/create), but afterwards it's best to treat them as distinct resources at a separate API endpoint (https://cloud.google.com/bigtable/docs/reference/admin/rest/v2/projects.instances.clusters).
This is similar to both network/subnetwork and GKE cluster/nodepools; cases like network/subnetwork map _much_ better to Terraform, where the child resource is entirely distinct, versus cases like GKE nodepools where there's an awkward split between the resources.
I have a pretty good idea of how I'd like to support the cluster update use case, but it's going to be an unusual resource representation in Magic Modules, so I'm going to create a draft PR and/or design doc and get some feedback on that before moving forward - whether that's with MM or with handwritten changes to the resource.
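For illustration, a hedged sketch of the split that would map cleanly to Terraform. Note that google_bigtable_cluster is hypothetical and does not exist; the awkwardness is precisely that the API requires clusters to be embedded in the instance at creation time:

```hcl
# Hypothetical sketch only - there is no google_bigtable_cluster resource.
# In this shape, num_nodes could be updated in place on the cluster
# without replacing the instance, like the network/subnetwork split.
resource "google_bigtable_instance" "prod" {
  name          = "us-central1-prod"
  instance_type = "PRODUCTION"
}

resource "google_bigtable_cluster" "prod" {
  instance     = "${google_bigtable_instance.prod.name}"
  cluster_id   = "us-central1-prod-cluster"
  zone         = "us-central1-f"
  num_nodes    = 4
  storage_type = "SSD"
}
```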
Hi @rileykarson! Any updates?
I just hit this one too. The cluster is trying to be recreated with every change. Any status update is welcome.
Any updates? Magic Modules is synced now, no?
I didn't end up having the cycles to work on this previously. While investigating, I found that it's currently impossible to use Magic Modules for this because of how clusters work (MM can't handle an object that must be present at creation time but is then effectively managed as a separate resource).
It should be possible to add update support by keeping the resource handwritten and changing from the handwritten GRPC client to the generated REST one, but there's a chance it will fall afoul of the issues encountered with GKE node pools in https://github.com/terraform-providers/terraform-provider-google/issues/780#issuecomment-444957526.
@rileykarson If the gRPC-based Go client for Bigtable had an UpdateInstance method would that unblock this?
@garye if the update method on instance supported updating the list of clusters, this would be trivial to add. Even better, if that functionality were included in update on the REST API, we could generate google_bigtable_instance with our code generator.
The underlying API for updating an instance just lets you upgrade from DEV->PRODUCTION. Sounds like you need to be able to do that, as well as update the size of clusters? What would an ideal client library (non-REST) method or methods look like from your perspective?
We can also look into the REST stuff but I just know less about it...
From my perspective, the ideal client method would be an update method that accepts the same body as create and modifies the current instance to reach that state (failing if that's impossible). In the gRPC client, that body is InstanceWithClustersConfig.
Ok that's doable, and "current cluster" means whatever clusters are present in the config object?
We also need to return the instance type from Instances and InstanceInfo, right?
Sorry- s/cluster/instance there. Modified the original post.
Yep, that would also help. Terraform maintains local state, so it would be possible to update an instance's type if a user specified DEVELOPMENT and then replaced it with PRODUCTION in config, but Terraform won't be able to pick up the change if it's made out of band until we can read the type from the API. (So if a user made the change manually and then modified it in config afterwards, Terraform would attempt a spurious update.)
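A hypothetical config illustrating that case (resource and instance names are made up for the example):

```hcl
resource "google_bigtable_instance" "example" {
  name       = "example-instance"   # hypothetical name
  cluster_id = "example-cluster"    # hypothetical name
  zone       = "us-central1-f"
  # Changing this to "PRODUCTION" in config can be applied from local
  # state, but an out-of-band upgrade can't be detected until the
  # instance type is readable from the API.
  instance_type = "DEVELOPMENT"
}
```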
Ok this is really useful, thanks!
A new UpdateInstance method that affects more than one thing (the instance itself plus a cluster, or resizing two clusters) could possibly partially succeed, as there is no way to know up front whether a particular Instance/Cluster update operation will succeed. You can read the state after an error to figure out what happened.
Is this acceptable?
Yep- if an update fails, Terraform will be able to refresh state based on which changes were applied.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!