v0.9.0
terraform backend config
variable "azure_subscription_id" {
  type    = "string"
  default = "74732435-e81f-4a43-bf68-ced435436edf"
}

variable "azure_tenant_id" {
  type    = "string"
  default = "74732435-e81f-4a43-bf68-ced435436edf"
}

terraform {
  required_version = ">= 0.9.0"

  backend "azure" {
    resource_group_name  = "stuff"
    storage_account_name = "morestuff"
    container_name       = "terraform"
    key                  = "yetmorestuff.terraform.tfstate"
    arm_subscription_id  = "${var.azure_subscription_id}"
    arm_tenant_id        = "${var.azure_tenant_id}"
  }
}
Variables are used to configure the backend
Error initializing new backend:
Error configuring the backend "azure": Failed to configure remote backend "azure": Couldn't read access key from storage account: Error retrieving keys for storage account "morestuff": autorest#WithErrorUnlessStatusCode: POST https://login.microsoftonline.com/$%7Bvar.azure_tenant_id%7D/oauth2/token?api-version=1.0 failed with 400 Bad Request: StatusCode=400.
terraform apply
I wanted to extract these to variables because I'm using the same values in a few places, including in the provider config where they work fine.
I am trying to do something like this and getting the same "configuration cannot contain interpolations" error. While it seems like this is being worked on, I wanted to also ask if this is the right way for me to use access and secret keys. Does it have to be placed here so that I don't have to check the access and secret keys into GitHub?
terraform {
  backend "s3" {
    bucket     = "ops"
    key        = "terraform/state/ops-com"
    region     = "us-east-1"
    encrypt    = "true"
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
  }
}
I have the same problem, i.e. I would love to see interpolations in the backend config. Now that we have "environments" in terraform, I was hoping to have a single config.tf with the backend configuration and use environments for my states. The problem is that I want to assume an AWS role based on the environment I'm deploying to. I can do this in "provider" blocks, since the provider block allows interpolations, so I can assume the relevant role for the environment I'm deploying to. However, if I also rely on the role being set for the backend state management (e.g. when running terraform env select), it doesn't work. Instead I have to use the role_arn in the backend config, which can't contain the interpolation I need.
I managed to get it working by using AWS profiles instead of the access keys directly. What I did was not optimal, though: in my build steps, I ran a bash script that called aws configure, which ultimately set the default access key and secret.
We want to achieve something similar to @antonosmond. At the moment we use multiple environments (prod/stage) and want to upload tfstate files to S3.
## State Backend
terraform {
  backend "s3" {
    bucket     = "mybucket"
    key        = "aws/${var.project}/${var.environment}"
    region     = "eu-central-1"
    profile    = "default"
    encrypt    = "true"
    lock_table = "terraform"
  }
}
In this case the above backend definition leads us to this error:
Now if we try to hardcode it like this:
## State Backend
terraform {
  backend "s3" {
    bucket     = "mybucket"
    key        = "aws/example/prod"
    region     = "eu-central-1"
    profile    = "default"
    encrypt    = "true"
    lock_table = "terraform"
  }
}
we get the following notification:
Do you want to copy only your current environment?
The existing backend "local" supports environments and you currently are
using more than one. The target backend "s3" doesn't support environments.
If you continue, Terraform will offer to copy your current environment
"prod" to the default environment in the target. Your existing environments
in the source backend won't be modified. If you want to switch environments,
back them up, or cancel altogether, answer "no" and Terraform will abort.
Is there a workaround for this problem at the moment? The documentation for backend configuration does not cover working with environments.
Solved
It seems my local test env was still running on Terraform 0.9.1; after updating to the latest version (0.9.2) it worked for me.
Do you want to migrate all environments to "s3"?
Both the existing backend "local" and the target backend "s3" support
environments. When migrating between backends, Terraform will copy all
environments (with the same names). THIS WILL OVERWRITE any conflicting
states in the destination.
Terraform initialization doesn't currently migrate only select environments.
If you want to migrate a select number of environments, you must manually
pull and push those states.
If you answer "yes", Terraform will migrate all states. If you answer
"no", Terraform will abort.
Hi,
I'm trying to do the same as @NickMetz; I'm running Terraform 0.9.3.
$terraform version
Terraform v0.9.3
This is my code
terraform {
  backend "s3" {
    bucket = "tstbckt27"
    key    = "/${var.env}/t1/terraform.tfstate"
    region = "us-east-1"
  }
}
This is the message when I try to run terraform init
$ terraform init
Initializing the backend...
Error loading backend config: 1 error(s) occurred:
* terraform.backend: configuration cannot contain interpolations
The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.
If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".
Is this expected behaviour on v0.9.3?
Are there any workarounds for this?
In case it's helpful to anyone, the way I get around this is as follows:
terraform {
  backend "s3" {}
}

data "terraform_remote_state" "state" {
  backend = "s3"
  config {
    bucket     = "${var.tf_state_bucket}"
    lock_table = "${var.tf_state_table}"
    region     = "${var.region}"
    key        = "${var.application}/${var.environment}"
  }
}
All of the relevant variables are exported at the deployment pipeline level for me, so it's easy to init with the correct information for each environment.
terraform init \
-backend-config "bucket=$TF_VAR_tf_state_bucket" \
-backend-config "lock_table=$TF_VAR_tf_state_table" \
-backend-config "region=$TF_VAR_region" \
-backend-config "key=$TF_VAR_application/$TF_VAR_environment"
I don't find this ideal, but at least I can easily switch between environments and create new environments without having to edit any terraform.
@gsirvas @umeat To achieve multiple environments with the same backend configuration it is not necessary to use variables/interpolation. It is expected that it is not possible to use variables/interpolation in backend configuration; see the comment from @christofferh.
Just write it like this:
terraform {
  backend "s3" {
    bucket = "tstbckt27"
    key    = "project/terraform/terraform.tfstate"
    region = "us-east-1"
  }
}
Terraform will split and store environment state files in a path like this:
env:/${var.env}/project/terraform/terraform.tfstate
@NickMetz I'm trying to do multiple environments with multiple backend buckets, not a single backend. You can't specify a different backend bucket per terraform environment. In my example you could still use terraform environments to prefix the state file object name, but you also get to specify different buckets for the backend.
Perhaps it's better to just give cross-account access to the user/role which is being used to deploy your terraform: deploying your terraform to a different account, but using the same backend bucket. Though it's fairly reasonable to want to store the state of an environment in the same account that it's deployed to.
@umeat in that case you are right, it is not possible at the moment to use different backends for each environment. It would be more convenient to have a backend mapping for all environments, which is not implemented yet.
Perhaps a middle ground would be to not error out on interpolation when the variable was declared in the environment as TF_VAR_foo? Though this might require making such variables immutable? (Which is fine for my use case; not sure about others.)
I would also like to be able to use interpolation in my backend config. Using v0.9.4, and confirming this frustrating point still exists. In my use case I need to reuse the same piece of code (without writing a new repo each time I want to consume it as a module) to maintain multiple separate statefiles.
Same thing for me. I am using Terraform v0.9.4.
provider "aws" {
  region = "${var.region}"
}

terraform {
  backend "${var.tf_state_backend}" {
    bucket = "${var.tf_state_backend_bucket}"
    key    = "${var.tf_state_backend_bucket}/terraform.tfstate"
    region = "${var.s3_location_region}"
  }
}
Here is the error output of terraform validate:
Error validating: 1 error(s) occurred:
* terraform.backend: configuration cannot contain interpolations
The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.
If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".
I needs dis! For many features being developed, we want our devs to spin up their own infrastructure that will persist only for the length of time their feature branch exists... to me, the best way to do that would be to use the name of the branch to create the key for the path used to store the tfstate (we're using amazon infrastructure, so in our case, the s3 bucket like the examples above).
I've knocked up a bash script which will update TF_VAR_git_branch every time a new command is run from an interactive bash session.
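A minimal sketch of such a hook for an interactive shell (assuming the working directory is a git checkout; not the script referenced above):

```bash
# re-export the current branch before every prompt so terraform sees it as a variable
export PROMPT_COMMAND='export TF_VAR_git_branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)'
```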
This chunk of code would be so beautiful if it worked:
terraform {
  backend "s3" {
    key = "project-name-${var.git_branch}.tfstate"
    ...
  }
}
Every branch gets its own infrastructure, and you have to switch to master to operate on production. Switching which infrastructure you're operating against could be as easy as checking out a different git branch. Ideally it'd be set up so everything named "project-name-master" would have different permissions that prevented any old dev from applying to it. It would be an infrastructure-as-code dream to get this working.
@NickMetz said...
Terraform will split and store environment state files in a path like this:
env:/${var.env}/project/terraform/terraform.tfstate
Your top-level structure looks nice and tidy for traditional dev/staging/prod ... sure:
env:/prod/project1/terraform/terraform.tfstate
env:/prod/project2/terraform/terraform.tfstate
env:/staging/project1/terraform/terraform.tfstate
env:/staging/project2/terraform/terraform.tfstate
env:/dev/project1/terraform/terraform.tfstate
env:/dev/project2/terraform/terraform.tfstate
But what if you want to stand up a whole environment for project-specific features being developed in parallel? You'll have a top-level key for each story branch, regardless of which project that story branch is in...
env:/prod/project1/terraform/terraform.tfstate
env:/prod/project2/terraform/terraform.tfstate
env:/staging/project1/terraform/terraform.tfstate
env:/staging/project2/terraform/terraform.tfstate
env:/story1/project1/terraform/terraform.tfstate
env:/story2/project2/terraform/terraform.tfstate
env:/story3/project2/terraform/terraform.tfstate
env:/story4/project1/terraform/terraform.tfstate
env:/story5/project1/terraform/terraform.tfstate
It makes for a mess at the top-level of the directory structure, and inconsistency in what you find inside each story-level dir structure. Full control over the paths is ideal, and we can only get that through interpolation.
Ideally I'd want my structure to look like "project/${var.git_branch}/terraform.tfstate", yielding:
project1/master/terraform.tfstate
project1/stage/terraform.tfstate
project1/story1/terraform.tfstate
project1/story4/terraform.tfstate
project1/story5/terraform.tfstate
project2/master/terraform.tfstate
project2/stage/terraform.tfstate
project2/story2/terraform.tfstate
project2/story3/terraform.tfstate
Now, everything you find for a given project is under its directory... so long as the env is hard-coded at the beginning of the remote tfstate path, you lose this flexibility.
Microservices are better versioned and managed discretely per component, rather than dumped into common prod/staging/dev categories, which might be less applicable on a per-microservice basis. Each one might have a different workflow with different numbers of staging phases leading to production release. In the example above, project1 might not even have staging... and project2 might have unit/regression/load-testing/staging phases leading to production release.
You'd think at the very least you'd be allowed to use ${terraform.env}...
In Terraform 0.10 there will be a new setting workspace_key_prefix on the S3 backend to customize the prefix used for separate environments (now called "workspaces"), overriding this env: convention.
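For reference, a sketch of how that setting looks on the S3 backend (bucket and prefix names hypothetical); workspaces then land under keys like projects/<workspace>/project1/terraform.tfstate:

```hcl
terraform {
  backend "s3" {
    bucket               = "mybucket"
    key                  = "project1/terraform.tfstate"
    region               = "eu-central-1"
    workspace_key_prefix = "projects" # replaces the default "env:" prefix
  }
}
```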
I know a +1 does not add much but yeah, need this too to have 2 different buckets, since we have 2 AWS accounts.
I was hoping to do the same thing as described in #13603 but the lack of interpolation in the terraform block prevents this.
+1
I think this would also be useful for https://github.com/hashicorp/terraform/issues/18632
Specifically, following the structure:
environments/
|-- dev/                 # dev configuration
|   |-- dev.tf
|   |-- secret.auto.tfvars
|-- prod/                # prod configuration
|   |-- prod.tf
|   |-- secret.auto.tfvars
resources/               # shared module for elements common to all environments
|-- main.tf
If I have a secret.auto.tfvars file in both dev and prod with different credentials, they don't actually get used for the init and my ~/.aws/credentials file appears to be used instead, which I think can catch you out easily (you can run the commands in an account you didn't intend to).
+1
Same issue with etcd:
```
Initializing the backend...

Error loading backend config: 1 error(s) occurred:

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".

ERROR: Job failed: exit code 1
```
Why must I hard code these values... WHY?!?!?!
Facing the same issue even for version 0.11.10
terraform.backend: configuration cannot contain interpolations
It doesn't seem like a good option to specify creds twice, once in variables and again in the config. Can we get any update on this, as it has been open for almost a year?
I used workspaces to create my dev and prod environments. Now I need to store state for them in different aws accounts. What kind of workaround do you recommend? I just need to pass one variable to my backend config... somehow...
The best workaround for this I've ended up with takes advantage of the fact that whatever you pass into init gets remembered by terraform. So instead of terraform init, I use a small wrapper script which grabs these variables from somewhere (like a .tf.json file used elsewhere, or an environment variable perhaps) and then does the call to init along with the correct -backend-config flags.
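A rough sketch of that kind of wrapper, assuming the values live in a JSON file (the file name, its layout, and the jq paths here are hypothetical):

```bash
#!/usr/bin/env bash
# read backend settings from a JSON file and forward them as -backend-config flags
set -euo pipefail
CONFIG=backend-settings.json   # hypothetical file, e.g. {"bucket": "...", "region": "...", "key": "..."}

terraform init \
  -backend-config="bucket=$(jq -r '.bucket' "$CONFIG")" \
  -backend-config="region=$(jq -r '.region' "$CONFIG")" \
  -backend-config="key=$(jq -r '.key' "$CONFIG")"
```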
No value within the terraform block can use interpolations.
So sad
As @glenjamin said, while interpolation from terraform variables isn't possible, this is supported "outside" of the terraform configuration proper by using partial configuration of a backend and arguments to terraform init.
As an example, you can have a configuration containing
terraform {
  backend "s3" {}
}
and providing your configuration as command-line flags to init:
terraform init \
-backend-config="bucket=MyBucket" \
-backend-config="region=us-east-1" \
-backend-config="key=some/key"
it's also possible to use a simplified HCL configuration file to provide this data, i.e. having a file "myconfig.hcl":
bucket = "MyBucket"
region = "us-east-1"
key = "some/key"
and then running terraform init -backend-config=myconfig.hcl.

If you're running terraform init from a shell or a shell-based wrapper, it's also possible to set variables via TF_VAR_ environment variables and then also reuse those for the backend configuration. I.e. if I had some external wrapper that exported TF_VAR_bucket=MyBucket and TF_VAR_environment=preproduction, I could run init via a wrapper like:
terraform init \
-backend-config="bucket=$TF_VAR_bucket" \
-backend-config="region=us-east-1" \
-backend-config="key=terraform/applications/hello-world/${TF_VAR_environment}"
@jantman I understand your comment and this is what we do on our side; we wrap execution of terraform with another layer of scripts.
But it would be nicer if this could work OOTB so that we provide only environment vars and then execute init, plan, etc. without thinking about the partial backend setup parameters from the CLI.
Thus, I still think that there is a place for a nicer framework solution, although a workaround exists.
Facing the same issue for version 0.11.11
terraform.backend: configuration cannot contain interpolations
Can't use variables in terraform backend config block.
I just found myself reading this issue thread here after trying the following config:
backend "local" {
  path = "${path.module}/terraform.tfstate.d/${terraform.workspace}/network.tfstate"
}
My goal is to use different state files to manage networking, security and app infrastructure.
The workaround with -backend-config passed to init would perhaps work locally, but is it safe and convenient to use in a team? I'm afraid of human error being an aftermath of using this workaround.
I would also find it super useful if the var data was loaded before the backend started initializing. I would like to leverage different keys for state storage... but this makes it a lot more difficult. My config looks like:
terraform {
  backend "s3" {
    bucket   = "xyz-${var.client}"
    key      = "tfstate"
    region   = "ap-southeast-2"
    role_arn = "${var.workspace_iams[terraform.workspace]}"
  }
}
Obviously it doesn't work - please load var definitions, it would be super helpful.
The best workaround for this I've ended up with takes advantage of the fact that whatever you pass into init gets remembered by terraform. So instead of terraform init, I use a small wrapper script which grabs these variables from somewhere (like a .tf.json file used elsewhere, or an environment variable perhaps) and then does the call to init along with the correct -backend-config flags.
We're in a similar situation; in our case, we want role_arn to be a variable, because different people call Terraform with different roles (because different people have different privs). In our wrapper scripts right now, we have to pass different arguments to terraform init than to e.g. terraform plan, because it's -backend-config for one and -var for the other.
Not a showstopper, but annoying: it makes the wrappers more complicated and less readable and maintainable.
In my CI/CD pipeline (GitLab), I use envsubst (from the gettext package):
envsubst < ./terraform_init_template.sh > ./terraform_init.sh
This injects the environment variables.
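The template itself can be as small as the init call; a hypothetical terraform_init_template.sh might look like this (CI_PROJECT_NAME is a built-in GitLab CI variable, the others are assumed pipeline variables):

```bash
# terraform_init_template.sh: envsubst replaces the ${...} references with pipeline variables
terraform init \
  -backend-config="bucket=${TF_STATE_BUCKET}" \
  -backend-config="key=${CI_PROJECT_NAME}/terraform.tfstate" \
  -backend-config="region=${AWS_DEFAULT_REGION}"
```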
Hi All,
I used a workaround for this and created a folder structure as below:
Terraform/
|-- VM/               (a module which is called by main.tf to create a VM)
|   |-- vm.tf
|   |-- variable.tf
|-- dev/
|   |-- backend.tfvars
|-- test/
|   |-- backend.tfvars
|-- uat/
|   |-- backend.tfvars
|-- backend.tf        (a normal backend file in the root folder; its values are overwritten by the environment-specific backend.tfvars file while running terraform init)
|-- main.tf
|-- variable.tf
I used the following command to run the terraform init
Init - terraform init -backend-config=./
This has helped me create environment-specific tfstate files remotely. Hope this is helpful.
Thanks
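To make that layout concrete, a dev/backend.tfvars for an azurerm backend might contain something like this (all values hypothetical), supplied via terraform init -backend-config=./dev/backend.tfvars:

```hcl
resource_group_name  = "rg-terraform-state-dev"
storage_account_name = "sttfstatedev"
container_name       = "tfstate"
key                  = "dev.terraform.tfstate"
```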
Using -backend-config didn't work for my terraform remote block, so I ended up using sed with terraform's _override.tf behaviour.
_server.tf_
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    token        = "##TF_VAR_terra_token##"
    organization = "myorg"

    workspaces {
      name = "myworkspace"
    }
  }
}
_... replace vars, and run terraform init._
sed "s/##TF_VAR_terra_token##/$TF_VAR_terra_token/g" server.tf > server_override.tf
terraform init
So I understand that the backend dictates the behavior of the core, and that's why the backend is loaded before the core. In theory the interpolation behavior could be different for stuff like data resources. But since vars are considered static, maybe some limited interpolation could be added just for vars in the backend? That would buy us some more breathing room than the current CLI workarounds.
Jeez this product is loaded with hundreds of workarounds.
Properly modularizing Terraform adoption and making it as easy as possible for my developers to use would be a lot easier if I could just read the .tfvars file for variables so that I'm not asking people to dive into the .tf files. I'm absolutely baffled by why it was decided to forego variable interpolation from .tfvars files in the init blocks.
Just ran into this today. We've got a fairly complex provider+backend config for all our AWS environments; having variables in these config sections would allow us to have one shared set of code to deal with it. As things stand, we've got to duplicate that code everywhere, and then update it everywhere any time we make minor changes.
This feels like a really significant design flaw which needs to be corrected. Doubly so if there's anything else that prevents us from putting all this config in a module and just loading that module (which doesn't seem possible now because of how provider module inheritance works).
This is an example why infrastructure-as-code should be in a real programming language.
@richardARPANET there is boto3 for AWS or azure-sdk-for-python, but anyway it would be nice to have parameterized backend configuration :)
Can't use variables in the backend config block, and this is a real pain. I have multiple components and environments with teams working within their workspaces. I can't use ${terraform.workspace} within the key. It's really unpleasant to hard-code config values; we lose the flexibility of having infrastructure as code.
If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".
I know there may be some sort of workaround, but it's a pain.
@harbindersingh have you looked at overrides?
Since I've started using environments, I'm having the same issue, and with v0.12.6 it says:
Error: Variables not allowed

on state.tf line 4, in terraform:
4: bucket = var.be_state_bucket

Variables may not be used here.
Can someone put together steps for a sensible workaround for the time being, please?
This feature is really required; we are losing flexibility here :(
Is there any answer to the above? I am also facing the same interpolation issue: I need to pass variables to the S3 backend. Any help...
I'm working around this issue with the terraform option -backend-config=backends/my-env.tf, where backends/my-env.tf is like:
bucket = "my-s3-remote-state-bucket"
key = "terraform"
region = "<region>"
dynamodb_table = "myproj-tfstate-lock"
and my backend declaration looks like:
terraform {
  backend "s3" {}
}
Why not support direct input variable interpolation in the backend configuration? Interpolation in variables is ALREADY disabled, but they have the greatest feature: they can be passed via environment variables! I guess the S3 backend cannot influence input variable interpolation, right?
BTW, can anybody explain to me how the backend.s3.access_key parameter is supposed to be used without interpolation? Store the entire access key in the .tf file? Security is for unconfident cowards? Or are we supposed to store plain .tf files in CI/CD environment variables and then save them to disk on the fly?
This issue is the biggest failure of the "infrastructure as code" principle... Two and a half years have passed since this issue was opened and still no solution, only workarounds for workarounds on workarounds.
BTW, can anybody explain to me how the backend.s3.access_key parameter is supposed to be used without interpolation? Store the entire access key in the .tf file? Security is for unconfident cowards? Or are we supposed to store plain .tf files in CI/CD environment variables and then save them to disk on the fly?
This issue is the biggest failure of the "infrastructure as code" principle... Two and a half years have passed since this issue was opened and still no solution, only workarounds for workarounds on workarounds.
Look into the AWS_PROFILE variable.
https://www.terraform.io/docs/providers/aws/index.html#environment-variables
I've never worked anywhere that we've put keys anywhere in the tf code and using local ENV vars has worked fine.
If you wish for a more secure approach, look at https://github.com/99designs/aws-vault
I strongly recommend aws-vault
--
Fernando 🐼
Look into the AWS_PROFILE variable.
What if the state should be stored in another AWS account? https://www.terraform.io/docs/backends/types/s3.html#administrative-account-setup
If you wish for a more secure approach, look at https://github.com/99designs/aws-vault
Is it good for CI/CD? Locally I can store anything in files, but on the build host during deployment, what is expected to be used for security? The only acceptable way here, AFAIK, is to use the workaround and pass backend credentials one-by-one to the init command. It's an acceptable way, but still, it's a workaround outside the "infrastructure as code" principle.
Can anyone tell me how to create multiple environments and save each account's state file in a single bucket?
Actually I need to pass variables for the S3 backend, but variables are not supported, so I need some alternative way to handle this. Any suggestions? Thanks in advance.
What I mean is: I have one Dev web app, and I want to supply variables for it and create the infra.
You can pass your backend config from the command line while running terraform init. Example:
terraform init -backend-config="bucket=${your_bash_variable_that_has_bknd_s3_bucket_name}" \
  -backend-config="key=some-key" \
  -backend-config="region=${your_bash_variable_that_has_aws_region}" \
  -backend-config="access_key=${your_access_key_in_variable}" \
  -backend-config="secret_key=${your_secret_key_in_variable}"
Though this does not really use terraform variables, at least you can use bash variables.
Hi, thanks for the response. Can I have some example of a bash script showing how we would maintain different variables in bash?
@swarupdonepudi Yup, that's the only way here. Thanks for mentioning it again here.
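To illustrate the kind of bash script asked about above, a minimal per-environment wrapper might look like this (bucket names and environments are hypothetical):

```bash
#!/usr/bin/env bash
# usage: ./tf-init.sh <dev|prod>
set -euo pipefail
ENV="${1:?usage: $0 <dev|prod>}"

case "$ENV" in
  dev)  BUCKET="mycompany-tfstate-dev"  ;;
  prod) BUCKET="mycompany-tfstate-prod" ;;
  *)    echo "unknown environment: $ENV" >&2; exit 1 ;;
esac

terraform init \
  -backend-config="bucket=$BUCKET" \
  -backend-config="key=$ENV/terraform.tfstate" \
  -backend-config="region=us-east-1"
```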
Hashicorp team, people are having to add scripts and extra files to make this work. Can you give it some attention? Maybe a --load-vars-before-backend argument? Think about the time each person loses until they figure out it can't be done :(
Please Hashicorp team, allow using variables when specifying the backend. Or at least give a reason on why it isn't possible at the moment.
This is indeed very frustrating and makes the configuration very messy. The --load-vars-before-backend would make our lives much easier.
@bflad @apparentlymart This has been a really popular request for 2 and a half years now, any way we can get some traction on this?
For me, I want individuals to be able to specify the profile name they want to use to access the state file in S3 (the profile option), because we have contractors with different AWS profiles, and I want them to be able to specify the correct profile in a variable instead of on the command line.
I want this.
For me, I want individuals to be able to specify the profile name they want to use to access the state file in S3 (the profile option) because we have contractors that have different AWS profiles and I want them to be able to specify the correct profile in a variable instead of on the command line.
I take the opposite approach in my devops paradigm, where each project provides tools/docs/code and template configs to get everything set up and running correctly, and we synchronize on using standard naming and organizing schemes within and across projects/clients. The names of the AWS profiles are included in this, like SSH and a bunch of others. It helps improve the remote and async debugging experience to boot!
What everyone misses is that even with automated wrapping scripts you can't set the backend type. There must be a file in the root with an empty backend config for the -backend-config arguments to work.
Supposedly HashiCorp won't allow variables in backend config blocks and recommends using the CLI options or a configuration file instead.
An example backend_config.hcl:
# backend_config.hcl
url = "https://myartifactory.corp.com/artifactory"
repo = "terraform-state"
subpath = "myorg/myteam/myapp"
username = "myusername"
password = "passwordgoeshere"
Command:
terraform init -backend-config=../path/to/backend_config.hcl
I've noticed you can't mix and match the -backend-config= statements, and that passing a password via -backend-config generally results in a 401 or 403 from my backend storage, so I have to define it in the config file. If I'm using something like Shovel, Rake, Thor, Gradle, or similar, I can template the file while grabbing the credentials from something like Vault or some other secrets store.
After performing the terraform init -backend-config.... portion I don't have to supply the backend configs again unless something dramatic changes. However, shovel is handling that for me.
I ran into this today as well when I was looking for an easy way to split development, staging, production in a sane way underneath the s3 bucket that my team uses for state. I'll be using workspaces for now, but I'll also be sad about the forced directory structure that is not project-based on the top-level. :disappointed:
Like many others, we ended up doing a wrapper script. It turned out useful for more than this reason.
Using terraform init along with -backend-config worked for azurerm, since it gave us a place where we can scope our backend config per environment.
Thankfully, this played nicely with our Azure CLI authentication into the storage backend as well.
Most backends also support environment variables. Doing the wrapper script helps a lot, and if it's run in a CI/CD environment or something that orchestrates the commands, supplying the values is trivial and you avoid hard-coding sensitive values.
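For example, the S3 backend honours the standard AWS environment variables, so credentials never have to appear in the .tf files at all (profile name hypothetical):

```bash
export AWS_PROFILE=staging   # or AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
terraform init
```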
We are currently storing these secrets in our Vault system and extracting them for the aws provider using the Terraform Vault Provider. It would be nice to be able to use those variables to configure our S3 backend as well.
2020 here and they still don't give a :middle_finger:
And yet another request for this.
As a rebuttal of the assertions in https://github.com/hashicorp/terraform/issues/22088#issuecomment-521056027 that this is not possible: it would seem to be technically practicable to have the backend track what state it does and does not know and then, before it loads the state, allow evaluation of any expression whose values it definitively knows. This would include any values that derive strictly from literal values in the .tf files, from explicitly set variables, maybe from variables set via TF_VAR_ environment variables, etc.
I can't say how practical or easy to implement that would be (from a coding perspective), but it is not infeasible (from a public interface perspective).
@apparentlymart - Fifth most upvoted issue. Ignored for years. These are the things that will get you a mass exodus for the next best thing.
Hasn't there really been any reply by the Terraform team? Or has this issue been addressed elsewhere?
The code change isn't likely all that complicated, and is probably similar in concept to when I overhauled Packer's VMware modules: lazy evaluation. There too the evaluation was single-pass for no good reason. It should be straightforward to mimic 'syntax check' mode to suck in all the necessary files, see if the blanks have been filled in, then proceed to actually eval the templates for real.
Rather than fixing upstream, TerraGrunt hacked their own solution which is unfortunate.
Yeah, I am facing this as well!
I'm starting to look at pulumi.
Please do not comment on this issue if you don't have anything constructive to provide. Many people are subscribed to this issue and don't want to get pinged on unnecessary notifications.
One thing that would make life easier that isn't using variables in the config block : being able to access backend config attributes from the rest of the config.
Use case : you are constructing layered configs as recommended. You are deploying resources like Transit Gateways which are applicable across accounts, and want to be able to fetch the ARNs of gateways and their RAM shares from other accounts.
- an aws_iam_policy_document datum
- an aws_s3_bucket_policy resource to apply this to your state bucket

If you could access the state bucket name from the backend in code, you could avoid this issue.
What I did was tokenize the values in the terraform script; then when we execute the automation, it copies the "template" file into the real location and uses sed to replace the tokens with the actual values.
What I did was tokenize the values in the terraform script; then when we execute the automation, it copies the "template" file into the real location and uses sed to replace the tokens with the actual values.
I guess that envsubst could also be used for that, and sometimes it's more convenient, so it would look something like this:
export CI_PROJECT_NAME=project01
s3-backend.tf:
terraform {
  backend "s3" {
    bucket         = "$CI_PROJECT_NAME"
    key            = "$CI_PROJECT_NAME/terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "$CI_PROJECT_NAME-locks"
    encrypt        = true
  }
}
envsubst < s3-backend.tf | terraform apply -target -
We are currently storing these secrets in our Vault system and extracting them for the aws provider using the Terraform Vault Provider. It would be nice to be able to use those variables to configure our S3 backend as well.
I am facing the same problem, but for Swift. The only way I've found is to use a .sh file, which logs in to Vault, retrieves OpenStack secrets, and stores them in environment variables... That's pretty bad.
HashiCorp, this would be a very, very useful piece of functionality!
Well years later and I still need to use Terragrunt, pretty much only for this reason now.
After some time using Terraform without saving my state files, I finally started thinking about saving them. Because I have to deploy the same infrastructure for multiple clients, I want to save each deployment's state in its own workspace. For that I was waiting to use variables in the backend config, but it looks like a long fight.
At this moment, this issue has been open for 3 long years, so...
I don't represent the hashi team, but having followed this thread and others for a while, I don't believe there's any disagreement about its benefit; the terraform team is slowly working its way towards it (HCL2 consumed a large part of those 3 years, and now they're working on better support for modules).
In the meantime, although not ideal, a light wrapper script using CLI vars works well. We even created a parser script that translated defined backend-config variables in the terraform into backend config CLI params (based on env variables), maintaining the declarative benefit and IDE integration.
In the end this feature would be hugely helpful; I only wanted to provide another perspective on the "long fight" verbiage.
The way I'm handling this is defining the backend without the "key" parameter. Since key is a required parameter, terraform prompts me for it.
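In other words, something along these lines, where terraform init asks for the missing key interactively (bucket and region hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket = "our-state-bucket"
    region = "eu-central-1"
    # "key" intentionally omitted; terraform init prompts for it
  }
}
```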
This is sorely needed
We issue dev environments to each dev, and so our backend config would look like
terraform {
  backend "s3" {
    bucket = "our-bucket"
    key    = "aws/${var.dev_code}/core/terraform.tfstate"
    region = "eu-central-1"
  }
}
Unfortunately we're stuck with using terragrunt for a single feature..
A flag for setting the backend would also be helpful. e.g. -backend-type=s3 , -backend-type=kubernetes , etc..
The end user's backend is not of concern to our terraform configuration. It would be helpful if it were possible to decouple it completely.
Still waiting to add variables 😆
We have started to see Terraform as being difficult to secure, and this issue is not helping. We have a project that is being developed by a 3rd party and deployed in Azure. We don't want the devs to see the storage access key, and the MSI approach is not going to work considering the cost of running a VM just to deploy with terraform.
We want collaboration between the 3rd party's devs and our guys to be easy, so securing the state file's storage account would have been a lot easier if the key were just allowed to be replaced by a variable. That way we could have supplied it via our key vault secrets as we do the others, but no... it has been 3 years and no answer.
Instead we now have a nasty workaround, tokenizing that access key at the expense of developer convenience when cloning the repo and having to manually change the token file.
So, we are looking at switching to Pulumi, as they seem to understand this concept.
@KatteKwaad
You could store the keys in Azure Key Vault, then fetch them using a data source and use that value for the storage access instead of a hardcoded value.
P.S. I hope I identified the Key Vault product right; we use AWS Secrets Manager, but the logic is the same.
Wrapper/Terragrunt seems to be the 2020 solution when you're deploying many modules to different environments.
I really like CloudPosse's solution to this problem. They push environment management complexity into separate docker images (e.g. dev.acme.com, staging.acme.com, prod.acme.com) and modify the backend variables in each environment's Dockerfile. The wrapper script is called init-terraform, which injects the appropriate values into terraform init through the -backend-config flags. This pattern lets you build additional ops tooling into a docker image (e.g. aws-vault, k8s etc.).
I just finished deploying a 3-stage app, and ended up using workspaces, which didn't feel right. I felt there should be a higher-level abstraction of each environment, such as a folder (terragrunt) or docker image (cloudposse).
Reference:
https://github.com/cloudposse/dev.cloudposse.co
https://github.com/cloudposse/staging.cloudposse.co
https://github.com/cloudposse/prod.cloudposse.co
Seems like you need CI instead of granting devs access to your state
--
Fernando 🐼
So we're not granting them access to state, as we're tokenizing the value out and securing it in Key Vault, but the functionality to handle the process as a first-class citizen is what is missing.
This is one of the best threads ever. Nobody here is wrong. I found that Terraform is like Perl (does anyone still use Perl?): there's no one correct way to do something. If it works for you, then it is the best solution.
If someone on Google Cloud is trying to overcome this, here is a very simple solution; in my case it's perfect.
I use:
terraform {
  backend "gcs" {}
}
And in my cloud-build file:
steps:
  - id: "Create Backend Bucket"
    name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: '/bin/bash'
    args: ['-c', 'gsutil mb -c standard -l europe-west3 gs://$_TF_BUCKET || true']
  - id: 'Terraform init'
    name: 'hashicorp/terraform:0.13.0'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        echo ""
        echo "*************** INIT STEP ***********************"
        echo "*************************************************"
        echo ""
        echo "We're always passing --reconfigure to kill the terraform cache"
        terraform init -backend-config "bucket=$_TF_BUCKET" --reconfigure || exit 1
And my variables are handled. I know it is not the same as var. in the backend config, but it's simple. And it works.
Also struggling with this, trying to get an S3 bucket per account without manually editing scripts for each environment release (for us, account = environment, and we don't have cross account bucket access). Disappointing to see that so many messy (IMO) workarounds are still being used because Terraform still can't handle this.
@Ettery use Jinja or Gomplate
Same issue, trying to create S3 and Dynamo resources for
terraform {
  backend "s3" {
    bucket
    key
    region
    dynamodb_table
  }
}
and deploy another project infrastructure in one flow
Is the reason for this limitation security? I'd like to understand why it is a thing.
Bump? Almost 4 years in the making and still no fix for this? I am on the most current version of Terraform. It would be nice if we were able to pass in variables to make the key interchangeable with, say, a tfvars variable.
Oh well, since this issue is still open after all these years, I think I will drop the issue I'm experiencing here.
I'm trying to run a terraform block with variables like so:
terraform {
  backend "azurerm" {
    resource_group_name  = var.statefile_storage_account_rg
    storage_account_name = var.statefile_storage_account
    container_name       = var.statefile_container
    key                  = var.statefile_name
  }
}
But I get this error for terraform init >>>
Initializing the backend...
Error: Variables not allowed
on provider.tf line 8, in terraform:
8: resource_group_name = var.statefile_storage_account_rg
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 9, in terraform:
9: storage_account_name = var.statefile_storage_account
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 10, in terraform:
10: container_name = var.statefile_container
Variables may not be used here.
Error: Variables not allowed
on provider.tf line 11, in terraform:
11: key = var.statefile_name
Variables may not be used here.
This works fine if I don't use variables; it seems variables are not allowed in that block.
I've resolved this by implementing a tool which performs a sort of preprocessing over a .tf file, resolving variables (and allowing other .tf snippets to be included). I.e.:
terraform {
  backend "s3" {
    bucket = @var(tfstatebucket)
    key    = "terraform.tfstate"
    region = @var(region)
  }
}
}
becomes:
terraform {
backend "s3" {
bucket = "mybucket"
key = "terraform.tfstate"
region = "eu-west-1"
}
}
We are also using this approach; I mean, we have a "template" file and we use envsubst to create the final backend.tf file "on the fly" inside the runner.
So, the 00-backend.tpl file looks like:
terraform {
  backend "azurerm" {
    resource_group_name  = "myResourceGroup"
    storage_account_name = "myStorageAccount"
    container_name       = "myContainerName"
    # Blob name
    key        = "myStatusFileName"
    access_key = "${Deployment_variable_access_key}"
  }
}
One of the first steps on the pipeline does:
- apk add --update gettext
- envsubst < 00_backend.tpl > 00-backend.tf
From this point, the runner understands that the 00-backend.tf file contains a valid Terraform backend configuration.
Tedious, but it works. 🤷🏻♂️
Apparently five hundred upvotes aren't enough for the Terraform team to implement this feature. So sad.
@apparentlymart, what's the Terraform team's position on this issue? Any planned changes? What's the problem with processing script variables before processing the backend config? I didn't find any dependency of variable processing on backends in the documentation. The word "backend" cannot be found on the page https://www.terraform.io/docs/configuration/variables.html. So the explanation "core depends on the backend" doesn't seem consistent with respect to variable processing. It would be nice if you at least documented how exactly different backends affect variable processing.
Another use case that should be considered is to use a data source for configuring a backend. This is particularly useful if HashiCorp Vault is being used for generating access and secret keys.
terraform {
  backend "s3" {
    key            = "application.tfstate"
    encrypt        = true
    bucket         = "terraform-state"
    dynamodb_table = "terraform-state-lock"
    access_key     = data.vault_aws_access_credentials.main.access_key
    secret_key     = data.vault_aws_access_credentials.main.secret_key
    token          = data.vault_aws_access_credentials.main.security_token
  }
}

provider "vault" {
  address = "https://vault.example.com"
}

data "vault_aws_access_credentials" "main" {
  backend = "aws"
  role    = "admin"
  type    = "sts"
}

provider "aws" {
  access_key = data.vault_aws_access_credentials.main.access_key
  secret_key = data.vault_aws_access_credentials.main.secret_key
  token      = data.vault_aws_access_credentials.main.security_token
}
Another use case that should be considered is to use a data source for configuring a backend. This is particularly useful if HashiCorp Vault is being used for generating access and secret keys.
I think this would be even harder to do since the state stores some information regarding what provider is used by which resource.
Another use case that should be considered is to use a data source for configuring a backend. This is particularly useful if HashiCorp Vault is being used for generating access and secret keys.
I don't know if you tested using a data source in the backend block and it worked. The docs state: "A backend block cannot refer to named values (like input variables, locals, or data source attributes)."
I believe we can close this given the solution provided at https://github.com/hashicorp/terraform/pull/20428#issuecomment-470674564. It's documented at TF_CLI_ARGS and TF_CLI_ARGS_name
This issue is duplicated by https://github.com/hashicorp/terraform/issues/17288, which is where the above reference comes from.
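For reference, that mechanism is just an environment variable whose contents Terraform appends to the named subcommand, so a sketch would be (values hypothetical):

```bash
export TF_CLI_ARGS_init='-backend-config=bucket=my-state-bucket -backend-config=region=us-east-1'
terraform init   # picks up the flags above automatically
```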
The suggested solution is good but still looks like a band-aid.
I believe we can close this given the solution provided at #20428 (comment). It's documented at TF_CLI_ARGS and TF_CLI_ARGS_name
_This issue is duplicated by #17288, which is where the above reference comes from._
I don't think that's an answer to this particular issue. It can be used as a workaround, perhaps, but it's far from what's being asked for here.