The `terraform env new` command crashed while creating a new environment with the S3 backend.
Terraform v0.9.6
Affected: `terraform env` with the S3 backend
`.terraform/terraform.tfstate` file:
```json
{
    "version": 3,
    "serial": 0,
    "lineage": "76108456-5991-4cfe-934b-e7ed998de376",
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "newton-tf-state",
            "key": "newton/sandbox-automated",
            "lock_table": "newton-tf-state-lock",
            "region": "us-east-1"
        },
        "hash": 16566994763343105771
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}
```
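For context, a backend block like the one above is written into the local state cache when the S3 backend is initialized. A minimal sketch of that initialization, assuming the configuration contains an empty `terraform { backend "s3" {} }` block (the values below are taken from the state file above):

```sh
# Sketch: initialize the S3 backend with DynamoDB locking (Terraform 0.9.x syntax).
# Assumes an empty `terraform { backend "s3" {} }` block exists in the configuration.
terraform init \
  -backend-config="bucket=newton-tf-state" \
  -backend-config="key=newton/sandbox-automated" \
  -backend-config="lock_table=newton-tf-state-lock" \
  -backend-config="region=us-east-1"
```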
https://gist.github.com/patrickconant/60de212c5d53388aa0d8734521f08db9
https://gist.github.com/patrickconant/9f46ebfec9c48bf96cd238a39bae7b52
Expected behavior: environment created.
Actual behavior: Terraform crashed, and the environment was left in an invalid state. Deleting the environment reported success:
```
+ terraform env delete MR_3791
Environment "MR_3791" doesn't exist!
You can create this environment with the "new" option.
```
But a subsequent attempt to create an environment with the same name failed.
The command being run was `terraform env new MR_3791`, but Terraform doesn't crash every time I run that command, just this once (so far).
The crash happens when you try to create an env with the same name after it has previously been created and deleted.
Notes:
S3-backed state tracking with DynamoDB locking. Once a crash happens, a lock entry is left behind in DynamoDB. The crash happens even if the stuck DynamoDB lock has been deleted.
I've isolated this problem down to an entry in DynamoDB that doesn't get removed when the env is deleted:
`newton-tf-state/env:/MR_3791/newton/sandbox-automated-md5`
If I manually remove this entry (see the sketch below), the `env new` command works again.
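A minimal sketch of that manual removal, assuming the AWS CLI is installed and that the lock table uses Terraform's usual `LockID` string key (the table name and item key are taken from the config and the entry above):

```sh
# Sketch: remove the stale "-md5" digest entry left behind after `terraform env delete`.
# Assumes the lock table's partition key is the standard "LockID" string attribute.
aws dynamodb delete-item \
  --table-name newton-tf-state-lock \
  --region us-east-1 \
  --key '{"LockID": {"S": "newton-tf-state/env:/MR_3791/newton/sandbox-automated-md5"}}'
```

The transcript below shows the same failure when an environment name is reused after a create/delete cycle: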
```
➜ terraform env new test-123
Created and switched to environment "test-123"!
➜ terraform env select default
Switched to environment "default"!
➜ terraform env delete test-123
Deleted environment "test-123"!
➜ tf env new test-123
state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value:
```
I have reproduced this bug, and it only seems to happen with the DynamoDB lock table.
The workaround right now seems to be to delete the corresponding LockID entry in DynamoDB; a sketch of locating it follows.
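If the exact LockID isn't known, one way to spot leftover lock or `-md5` digest items is to scan the lock table; this is a sketch assuming the AWS CLI and the standard `LockID` key attribute (fine for a small state-lock table, since `scan` reads every item):

```sh
# Sketch: list all LockID values in the lock table to find stale lock or "-md5" digest entries.
aws dynamodb scan \
  --table-name newton-tf-state-lock \
  --region us-east-1 \
  --query 'Items[].LockID.S' \
  --output text
```

Deleting the stale `-md5` item (as in the earlier sketch) lets `terraform env new` succeed again.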