0.9.1
documentation on s3 remote state locking with dynamodb
n/a
The documentation on s3 remote state and dynamodb lock tables is lacking. This leads to the scenario where a dynamodb table described thus:
resource "aws_dynamodb_table" "terraform_statelock" {
name = "foo"
read_capacity = 20
write_capacity = 20
hash_key = "LockId"
attribute {
name = "LockId"
type = "S"
}
}
is not enough to actually use the table for locking with the s3 remote state. It would be good if the documentation explicitly described the dynamodb definition to use with s3 remote state for locking.
Yes, please upgrade the docs and someone please let us know how to set up dynamodb locking. I am using S3 remote state fine but do not know how to set up locking based on this https://www.terraform.io/docs/backends/types/s3.html
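For anyone landing here, a minimal sketch of the other half of the setup, the backend block that actually references a lock table (bucket, key and region are placeholders, and lock_table is the option name used by the 0.9.x s3 backend discussed in this issue):

terraform {
  backend "s3" {
    bucket     = "my-tfstate-bucket"              # placeholder bucket name
    key        = "my-project/terraform.tfstate"   # placeholder state key
    region     = "us-east-1"
    lock_table = "foo"                            # name of the dynamodb table above
  }
}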
@sysadmiral you've got a typo, it's LockID, not LockId
@analytically thanks! Good spot! I'll check to make sure that wasn't what was causing my issue.
I still think it would be useful to describe the dynamodb setup in the docs though
In fact I think I changed this from the website docs LockID to LockId because of the error message I was getting:
Error locking destination state: Error acquiring the state lock: 2 error(s) occurred:
* ValidationException: One or more parameter values were invalid: Missing the key LockId in the item
status code: 400, request id: 5BA1JQUKAOPAF06BK7N68LHBNVVV4KQNSO5AEMVJF66Q9ASUAAJG
* ValidationException: The provided key element does not match the schema
status code: 400, request id: 0Q47JCLT0G5V4U68R6I1SSCUDNVV4KQNSO5AEMVJF66Q9ASUAAJG
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
That seems to indicate LockId is missing, hence why I think it would be useful to get this properly documented.
For anyone arriving here with dynamodb locktable issues, this definition worked for me in the end:
resource "aws_dynamodb_table" "terraform_statelock" {
name = "foo"
read_capacity = 20
write_capacity = 20
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
So LockID should indeed be LockID, not the LockId that the error message suggests.
Another way of creating the table, with the AWS CLI:
aws dynamodb create-table \
  --region us-east-1 \
  --table-name terraform_locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
You can do it in one go. This snippet creates the S3 bucket (with the account ID appended to make the name unique), the dynamodb table, and the backend config file:
PROJECT_NAME="${PWD##*/}"  # use current dir name
AWS_REGION="eu-west-1"
ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"

aws s3api create-bucket \
  --region "${AWS_REGION}" \
  --create-bucket-configuration LocationConstraint="${AWS_REGION}" \
  --bucket "terraform-tfstate-${ACCOUNT_ID}"

aws dynamodb create-table \
  --region "${AWS_REGION}" \
  --table-name terraform_locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

cat <<EOF > ./backend_config.tf
terraform {
  backend "s3" {
    bucket     = "terraform-tfstate-${ACCOUNT_ID}"
    key        = "${PROJECT_NAME}"
    region     = "${AWS_REGION}"
    lock_table = "terraform_locks"
  }
}
EOF
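One note for readers on newer Terraform versions: the s3 backend option shown above as lock_table was later renamed, and recent releases use dynamodb_table instead. With hypothetical values filled in for the account ID and project name, the generated backend block would then look roughly like this:

terraform {
  backend "s3" {
    bucket         = "terraform-tfstate-123456789012"   # hypothetical account ID
    key            = "my-project"                       # hypothetical project name
    region         = "eu-west-1"
    dynamodb_table = "terraform_locks"                  # replaces lock_table in newer releases
  }
}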
Is this one dynamodb table per S3 remote state then? As in, if we use multiple S3 keys (within the same bucket) for multiple projects, do we need as many dynamodb tables as we have keys?
@raphink - in your state config you can specify the same lock table in multiple different configs _but_ you could also specify a separate lock table per config. It's up to you and I guess it depends on if different parts of your infrastructure have dependencies or not.
> Is this one dynamodb table per S3 remote state then? As in, if we use multiple S3 keys (within the same bucket) for multiple projects, do we need as many dynamodb tables as we have keys?
@raphink @sysadmiral I was wondering the same. Right now I am using one DynamoDB table for multiple state files. Also I am using one S3 bucket to host multiple state files (under different keys). Is this the correct approach or should I create multiple tables for every state file respectively? Right now I am getting this on first run (there is already a lock entry in the table and a state file in S3 from a different Terraform setup which is not directly related to the one where I am getting the error):
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now reconfigure for this backend. If you didn't
intend to reconfigure your backend please undo any changes to the "backend"
section in your Terraform configuration.
Do you want to copy the state from "s3"?
Would you like to copy the state from your prior backend "s3" to the
newly configured "s3" backend? If you're reconfiguring the same backend,
answering "yes" or "no" shouldn't make a difference. Please answer exactly
"yes" or "no".
@bitbrain think about your usage patterns. If a set of infrastructure maps to one terraform state, then ask yourself whether that infrastructure is independent of any other infrastructure. If there is a dependency then you likely want to use the same locktable. If they are independent then it should be safe to work on them separately and have a separate locktable per state. But remember that dynamodb tables have a financial cost attached too. If a set of infrastructure isn't worked on that often and/or your team isn't very big, then do you need a locktable at all for some things?
> Also I am using one S3 bucket to host multiple state files (under different keys). Is this the correct approach or should I create multiple tables for every state file respectively?
Again this depends on your preference/requirements but I use this same pattern (one bucket, multiple statefiles). I haven't had any issues with that but we don't operate "at scale".
The message you see on first run is terraform asking if you would like to migrate the statefile from the old location to the new one defined in the backend config. n.b. currently terraform will ask you about migrating even if you are making a minor change to the existing config and not actually _moving_ the statefile.
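Pulling the pattern from the last few comments into one sketch (all names here are placeholders): one bucket and one lock table created once, then reused by multiple projects that differ only in their state key. The s3 backend derives each lock entry's LockID from the bucket and key, so locks for different states should not collide.

# Shared state infrastructure, created once:
resource "aws_s3_bucket" "tfstate" {
  bucket = "terraform-tfstate-123456789012"   # placeholder name
}

resource "aws_dynamodb_table" "terraform_statelock" {
  name           = "terraform_locks"
  read_capacity  = 1
  write_capacity = 1
  # billing_mode = "PAY_PER_REQUEST" is an option on newer AWS provider versions
  # if you'd rather not pay for provisioned capacity on a rarely used lock table
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# Each project then points at the same bucket and table; shown as comments
# because a single configuration can only declare one backend.
#
# Project A:
#   terraform {
#     backend "s3" {
#       bucket     = "terraform-tfstate-123456789012"
#       key        = "project-a/terraform.tfstate"
#       region     = "eu-west-1"
#       lock_table = "terraform_locks"
#     }
#   }
#
# Project B: identical, except key = "project-b/terraform.tfstate"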
@sysadmiral Thanks for the info!
@sysadmiral your example should be in the docs. thank you for providing it.
@cornfeedhobo, whereabouts in the docs is it? I'm looking at https://www.terraform.io/docs/backends/types/s3.html and can't find this info, would definitely help if it were there.
@endofcake it is not in the docs. my comment is that it should be in the docs.
@sysadmiral First of all, thank you for the example, but also please reopen this issue, since nothing has been added to the docs yet.
I extended @keymon's code and created an aws_terraform_bootstrap.sh script. I would love to get feedback or help with anything in this DeDevSecOps gitlab space!
https://gitlab.com/dedevsecops/aws/blob/master/aws_terraform_bootstrap.sh
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.