Hello, I'm struggling a bit with remote state in S3. I want the config to be completely encapsulated in a module:
resource "aws_s3_bucket" "aws_acct_tfstate" {
bucket = "aws-acct-tfstate"
acl = "private"
#logging {
#target_bucket = "${aws_s3_bucket.aws_acct_log.id}"
#target_prefix = "tf_state_log/"
#}
tags {
Name = "AWS Account TF State Bucket"
Role = "Operations"
Environment = "Dev"
Consumer = "Terraform"
}
}
resource "null_resource" "config_tf_cli_remote" {
provisioner "local-exec" {
command = <<EOT
terraform remote config \
-backend=s3 \
-backend-config="bucket=${aws_s3_bucket.aws_acct_tfstate.arn}" \
-backend-config="key=terraform.tfstate" \
-backend-config="region=us-west-2"
EOT
}
}
The bucket is created and Terraform is configured to use remote state, but I get an error saying the bucket does not exist, including when I try to destroy. Gist of the full output:
$:~/Projects/tf_aws_s3_state$ make destroy
terraform plan -destroy -out terraform.tfplan
Error reloading remote state: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: 87A08A7169E8698C
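(One likely cause, as an editorial note on the snippet above: the `-backend-config` passes the bucket's `arn` attribute, which looks like `arn:aws:s3:::aws-acct-tfstate`, but the S3 backend expects the bare bucket name, so it looks up a bucket literally named after the ARN and gets a 404. Referencing the `bucket` or `id` attribute instead passes the plain name:)

-backend-config="bucket=${aws_s3_bucket.aws_acct_tfstate.bucket}"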
Now, if I were to use init rather than remote config, could I set the remote state _only_ for the project directory, whereas remote config is system (tf CLI) wide?
$ terraform init \
-backend=s3 \
-backend-config="bucket=your-s3-bucket" \
-backend-config="key=tf/path/for/project.json" \
-backend-config="acl=bucket-owner-full-control" \
/path/to/source/module
Third question/issue: what are the security ramifications of the state file? I do not see secrets... is anything wrong with a state file being public?
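(As a caution on the question above: state files often do contain secrets — for example, database passwords from an `aws_db_instance` are stored in plain text in the state — so a public state bucket is risky. A minimal sketch of keeping state private and encrypted at rest, using the S3 backend's `encrypt` option and the bucket name from earlier in this thread:)

terraform remote config \
  -backend=s3 \
  -backend-config="bucket=aws-acct-tfstate" \
  -backend-config="key=terraform.tfstate" \
  -backend-config="encrypt=true"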
I did get things to work with the local-exec script (the script had a different env var than my provider), but it turns out there is an undocumented resource that does the job the right way. Thank you Mattias:
resource "terraform_remote_state" "remote_state" {
depends_on = ["aws_s3_bucket.aws_acct_ops"]
backend = "s3"
config {
bucket = "${aws_s3_bucket.aws_acct_ops.bucket}"
key = "acct.tfstate"
}
}
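(Worth noting about the resource above: `terraform_remote_state` reads another configuration's state as data; it does not configure where the current configuration writes its own state, which is likely why it only half worked below. A hypothetical sketch of its intended use, assuming the remote state at `acct.tfstate` exposes an output named `vpc_id` — in the Terraform versions of this era, remote outputs were read via the `output` attribute:)

resource "aws_subnet" "example" {
  vpc_id     = "${terraform_remote_state.remote_state.output.vpc_id}"
  cidr_block = "10.0.1.0/24"
}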
Only half worked. terraform remote config complains about not being able to pull from the empty bucket. Adding -pull=false doesn't work if the remote state has newer info. I finally gave up and am configuring remote state with a bash script:
region=$AWS_DEFAULT_REGION
bucket=$REMOTE_STATE_BUCKET
key=$REMOTE_STATE_NAME

# No inner double quotes here: quotes nested inside the outer pair
# toggle the string off and on rather than nesting, which is fragile.
# This form assumes the values contain no whitespace.
config_args=" \
  -backend=s3 \
  -backend-config=region=$region \
  -backend-config=bucket=$bucket \
  -backend-config=key=$key \
"

bucket_exists () {
  if aws s3 ls "s3://$bucket" 2>&1 | grep -q "NoSuchBucket"
  then
    return 1
  else
    return 0
  fi
}

create_bucket () {
  aws s3 mb "s3://$bucket" && echo "Created S3 bucket, $bucket." >&2
}

if bucket_exists
then
  terraform remote config $config_args
else
  create_bucket
  config_args="$config_args -pull=false"
  terraform remote config $config_args
  terraform remote push
fi
Clever. :) We plan on supporting remote state config in Terraform configurations in an upcoming version but don't at the moment.
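(For readers arriving later: this did ship — Terraform 0.9 added backends configured directly in the Terraform configuration and initialized with `terraform init`. A minimal sketch reusing the bucket and key from this thread:)

terraform {
  backend "s3" {
    bucket = "aws-acct-tfstate"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}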
Sweet. Mind pointing to a roadmap or some sort of ETA?
Sorry, no; please check the issues labeled core + enhancement — it should be the ones more generally related to configuring remote state using files.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.