Terraform: Limited to 1000 state files with workspaces and S3 backend

Created on 30 Aug 2017 · 7 Comments · Source: hashicorp/terraform

Terraform Version

Terraform v0.10.2

Terraform Configuration Files

terraform {
  required_version  = ">= 0.10.2"
  backend "s3" {
    bucket          = "company-terraform-state"
    key             = "terraform/remotestate/instance.tfstate"
    region          = "us-west-1"
  }
}

Debug Output

With 1000 objects in S3:

$ terraform workspace list
  default
  .
  .
  .
  prjll1234-prod-private
  union-dev-private
  union-prod-private
* zzzzz-dev-private

Add a new state file to the remote backend

$ terraform workspace new yyy-dev-private
Created and switched to workspace "yyy-dev-private"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

List again. The zzzzz-dev-private item is missing.

$ terraform workspace list
  default
  .
  .
  .
  prjll1234-prod-private
  union-dev-private
  union-prod-private
* yyy-dev-private

Delete a state file

$ terraform workspace delete union-dev-private
Deleted workspace "union-dev-private"!

List again. The zzzzz-dev-private state is back.

$ terraform workspace list
  default
  .
  .
  .
  prjll1234-prod-private
  union-prod-private
* yyy-dev-private
  zzzzz-dev-private

Expected Behavior

Since AWS S3 buckets can handle millions of objects, I expect Terraform to be able to handle thousands of state files in a single S3 bucket.

Actual Behavior

Once a bucket contains more than 1000 state files, workspaces whose keys sort beyond the first 1000 results can no longer be listed or selected.

Steps to Reproduce

See _Debug Output_ above.

Important Factoids

I've been able to reproduce this issue with different S3 buckets and with different versions of Terraform. I've verified it still doesn't work with the latest version (0.10.2).

References

None

backend/s3 bug v0.10 v0.11


All 7 comments

We are having issues with this in 0.11.7. Does anyone know if this is expected to be fixed?

@jbardin I have the same problem. A badly written Jenkins pipeline created almost 2k workspaces and I was unable to clean up the mess using the Terraform CLI. It can be fixed easily by iterating over the paginated results of the S3 list API, or by attempting to delete a workspace without first checking that it exists (this would need careful error handling but would theoretically scale to any number of workspaces). Terraform's code uses the default S3 list call, which returns only the first 1000 results. I can open a pull request with an appropriate patch.
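The pagination fix described above can be sketched as follows. This is a minimal Python mock of the S3 ListObjectsV2 contract (pages capped at 1000 keys, with a continuation token while more remain), not Terraform's actual Go backend code; all names are illustrative:

```python
# Mock of S3 ListObjectsV2: returns at most `max_keys` keys per call and a
# continuation token while more remain (S3's default page size is 1000).
def list_objects(keys, continuation_token=None, max_keys=1000):
    start = continuation_token or 0
    end = start + max_keys
    return {
        "Contents": keys[start:end],
        "IsTruncated": end < len(keys),
        "NextContinuationToken": end if end < len(keys) else None,
    }

def list_all_workspaces(keys):
    """Follow continuation tokens until the listing is exhausted,
    instead of stopping after the first (truncated) page."""
    results, token = [], None
    while True:
        resp = list_objects(keys, continuation_token=token)
        results.extend(resp["Contents"])
        if not resp["IsTruncated"]:
            return results
        token = resp["NextContinuationToken"]

all_keys = [f"env:/workspace-{i:04d}/instance.tfstate" for i in range(2500)]
print(len(list_objects(all_keys)["Contents"]))   # 1000 — the buggy single call
print(len(list_all_workspaces(all_keys)))        # 2500 — the paginated fix
```

A single unpaginated call sees only the first 1000 keys, which is why workspaces late in the alphabet vanish; looping on the continuation token recovers the full listing.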

Was there ever a fix to this?

@nickdgriffin PR above 🤞 it gets in for the next 0.12 minor release 😄

We're still getting bitten by this. =(

I'm with the same company as @rekahsoft.

It does not appear this made it into v0.11. Can it be backported?

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
