Terraform v0.10.2
terraform {
  required_version = ">= 0.10.2"

  backend "s3" {
    bucket = "company-terraform-state"
    key    = "terraform/remotestate/instance.tfstate"
    region = "us-west-1"
  }
}
$ terraform workspace list
default
.
.
.
prjll1234-prod-private
union-dev-private
union-prod-private
* zzzzz-dev-private
$ terraform workspace new yyy-dev-private
Created and switched to workspace "yyy-dev-private"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
$ terraform workspace list
default
.
.
.
prjll1234-prod-private
union-dev-private
union-prod-private
* yyy-dev-private
$ terraform workspace delete union-dev-private
Deleted workspace "union-dev-private"!
$ terraform workspace list
default
.
.
.
prjll1234-prod-private
union-prod-private
* yyy-dev-private
zzzzz-dev-private
Since AWS S3 buckets can handle millions of objects, I expect Terraform to be able to handle thousands of state files in a single AWS bucket.
Once you exceed 1000 state files in a bucket, workspaces whose state keys sort after the first 1000 objects (i.e. towards the end of the alphabet) can no longer be listed or selected.
See _Debug Output_ above.
I've been able to reproduce this issue with different S3 buckets and with different versions of Terraform. I've verified it still doesn't work with the latest version (0.10.2).
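To illustrate where the 1000-object ceiling comes from, here is a minimal Go sketch (using aws-sdk-go directly, not Terraform's actual backend code) that issues the same kind of single, un-paginated list call. The bucket name and the default `env:/` workspace prefix are assumptions for illustration:

```go
// Sketch only: a single, un-paginated ListObjectsV2 call returns at most
// 1000 keys, so any workspace whose state key sorts after the 1000th
// object is silently missing from the result.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-1")}))
	svc := s3.New(sess)

	// Non-default workspaces are stored under the "env:/" prefix by default.
	out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket: aws.String("company-terraform-state"),
		Prefix: aws.String("env:/"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// With ~2000 workspaces this prints 1000 keys and truncated=true;
	// everything after the truncation point never appears.
	fmt.Printf("keys returned: %d, truncated: %v\n",
		len(out.Contents), aws.BoolValue(out.IsTruncated))
}
```

With a couple of thousand workspace state files under the prefix, only the first 1000 keys in lexicographic order come back, which matches the behaviour above where workspaces at the end of the alphabet vanish from `terraform workspace list`.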
We are having issues with this in 0.11.7. Does anyone know if this is expected to be fixed?
@jbardin I have the same problem. A badly written Jenkins pipeline created almost 2k workspaces and I was unable to clean up the mess using the Terraform CLI. It can be fixed easily either by iterating over the paginated results of the S3 list API, or by attempting to delete a workspace without first checking whether it exists (this would need decent error handling, but would in theory scale to any number of workspaces). Terraform currently issues a single S3 list call, which returns at most the first 1000 results. I can open a pull request with an appropriate patch.
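For reference, a rough sketch of what the paginated listing could look like with aws-sdk-go. This is illustrative only, not the actual patch; the `listWorkspaces` helper name and the default `env:/` workspace prefix are assumptions:

```go
// Sketch of the paginated approach: follow continuation tokens so every
// workspace key is visited, no matter how many objects sit under the prefix.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func listWorkspaces(svc *s3.S3, bucket, prefix string) ([]string, error) {
	var names []string
	// ListObjectsV2Pages follows the ContinuationToken automatically,
	// invoking the callback once per page of up to 1000 keys.
	err := svc.ListObjectsV2Pages(&s3.ListObjectsV2Input{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix),
	}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
		for _, obj := range page.Contents {
			// Keys look like "env:/<workspace>/<key>"; the workspace name is
			// the first path element after the prefix. A real implementation
			// would also de-duplicate names.
			rest := strings.TrimPrefix(aws.StringValue(obj.Key), prefix)
			if i := strings.Index(rest, "/"); i > 0 {
				names = append(names, rest[:i])
			}
		}
		return true // keep going until the last page
	})
	return names, err
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-1")}))
	names, err := listWorkspaces(s3.New(sess), "company-terraform-state", "env:/")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d workspace state keys\n", len(names))
}
```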
Was there ever a fix to this?
@nickdgriffin PR above :crossed_fingers: it gets in for the next 0.12 minor release :smile:
We're still getting bitten by this. =(
I'm with the same company as @rekahsoft.
It does not appear this made it into v0.11. Can it be backported?
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.