Terraform: provider/aws: Don't always update DynamoDB read/write capacity

Created on 14 Mar 2016 · 4 comments · Source: hashicorp/terraform

Hi guys!

First of all, thank you for the massive amount of work you put into this: Terraform is improving every day.

I'm using Terraform to provision DynamoDB tables. Currently, read_capacity and write_capacity are required arguments, so you have to specify initial values for the read and write capacity:

resource "aws_dynamodb_table" "accounts" {
    name = "foo-staging-accounts"
    read_capacity = 1
    write_capacity = 1
    hash_key = "ACC"

    attribute {
        name = "ACC"
        type = "S"
    }

    attribute {
        name = "FBID"
        type = "S"
    }

    global_secondary_index {
        name = "fbid-index"
        read_capacity = 1
        write_capacity = 1
        hash_key = "FBID"
        projection_type = "ALL"
    }
}

The problem is that I'm using a tool called Dynamic DynamoDB to automatically adjust the provisioned capacities based on the actual consumed capacity. But whenever I plan or apply changes with Terraform, it tries to reset the capacities to the values in my .tf files. With the example above, it will always try to set the read and write capacities back to 1 (for the global secondary index too), even if Dynamic DynamoDB raised them because of a traffic increase.

I would love to solve this issue by adding a new argument to the aws_dynamodb_table resource: something like update_capacities (or perhaps two separate ones, update_read_capacity and update_write_capacity). If set to false, Terraform would not try to update the capacities once the table has already been created. If the table does not exist yet, Terraform would create it as usual, setting the initial capacities accordingly. A sketch of what that could look like follows below.
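Purely as an illustration of the proposal, here is roughly how it might read; the update_capacities argument is hypothetical and does not exist in the provider:

resource "aws_dynamodb_table" "accounts" {
    name           = "foo-staging-accounts"
    hash_key       = "ACC"
    read_capacity  = 1
    write_capacity = 1

    # Hypothetical argument proposed in this issue -- not a real provider option.
    update_capacities = false

    attribute {
        name = "ACC"
        type = "S"
    }
}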

What do you think guys? Do you have a better idea? How would you solve this issue without touching Terraform code?

bug provider/aws

All 4 comments

I am in a similar scenario. The ability to ignore capacity changes is much needed!

In the end I solved the issue using ignore_changes, as described here. I found that possibility after I opened this issue, so I think this issue can be closed, as Terraform already provides a means to achieve this.
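A minimal sketch of that workaround, assuming the top-level read and write capacities are the values Dynamic DynamoDB manages (the quoted string form of ignore_changes shown here matches Terraform releases from around this time; newer versions use unquoted attribute references):

resource "aws_dynamodb_table" "accounts" {
    name           = "foo-staging-accounts"
    hash_key       = "ACC"
    read_capacity  = 1
    write_capacity = 1

    attribute {
        name = "ACC"
        type = "S"
    }

    lifecycle {
        # Capacities are still required for the initial create, but later
        # plans ignore drift on these two attributes.
        ignore_changes = ["read_capacity", "write_capacity"]
    }
}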

Maybe I am doing it wrong, but ignore_changes appears to only work for top-level resource settings. It does not appear that I can do the same for a GSI setting, which shows up as "global_secondary_index.4003134.write_capacity", where "4003134" appears to be a dynamically created id.
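For illustration only, this is the kind of lifecycle block one might try adding inside the aws_dynamodb_table resource for the GSI case; the 4003134 hash is copied from the attribute name above and is generated by Terraform's internal state, which is exactly why pinning it like this is fragile:

    lifecycle {
        # The set hash (4003134) is computed by Terraform and can change between
        # states, so this style of per-GSI ignore does not work reliably.
        ignore_changes = ["global_secondary_index.4003134.write_capacity"]
    }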

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
