Terraform v0.9.1
# ./main.tf
module "module_us_west_1" {
  source = "./module"
  region = "us-west-1"
}

# ./module/main.tf
variable "region" {
  description = "AWS region for provider"
}

provider "aws" {
  region = "${var.region}"
}

resource "aws_cloudwatch_log_group" "rds_os" {
  name              = "RDSOSMetrics"
  retention_in_days = 30
}
https://gist.github.com/ff54870fee49636209ecfaa5de272175
Resource was imported
Error importing: 1 error(s) occurred:
* module.module_us_west_1.provider.aws: 1:3: unknown variable accessed: var.region in:
${var.region}
terraform import module.module_us_west_1.aws_cloudwatch_log_group.rds_os "arn:aws:logs:us-west-1:FILTERED:log-group:RDSOSMetrics:*"
Same issue here on v0.9.2. Seems related to #7774 (closed).
I'm seeing the same issue with the datadog provider, so this isn't just AWS.
I've found that adding a provider alias resolves this issue. Something static works, like alias "myregion".
It looks like you can sort of get around this by aliasing the provider.
provider "aws" {
alias = "${var.region}"
region = "${var.region}"
}
resource "aws_cloudwatch_log_group" "rds_os" {
provider = "aws.${var.region}"
name = "RDSOSMetrics"
retention_in_days = 30
}
This may also be worth a read: https://github.com/hashicorp/terraform/issues/1819
As is this: https://github.com/hashicorp/terraform/issues/3285
I'm having the same issue as well with Terraform 0.9.4
Seeing this behavior with Terraform 0.9.6 and a similar use case of a module with a variable in the provider:
# module/variables.tf
variable "aws_profile" {
  description = "AWS profile for the provider"
  type        = "string"
}

variable "aws_region" {
  description = "AWS region for the provider"
  type        = "string"
}

# module/provider.tf
provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}
* module.EXAMPLE.provider.aws: 1:3: unknown variable accessed: var.aws_profile in:
${var.aws_profile}
This behavior still exists in v0.10.3. Any update on this?
Hi all! Sorry for this limitation and for the long silence here.
Unfortunately the import system struggles a bit with more complex situations like this because it builds a different sort of graph to deal with the import case.
From the example config and output given here, it looks like the interpolation context isn't being constructed correctly when in import mode, and so the variables from the parent module aren't coming through correctly. This is definitely a bug, but is likely fiddly to fix. :confounded:
The fact that aliasing the provider makes this work is interesting. In that situation, is there an unaliased provider "aws" block (with a _literal_ region) in the parent/root module that could be being used instead? That's the best explanation I can come up with for why that didn't produce an error that the region needs to be set.
@apparentlymart - sounds fiddly indeed. Is there anyone assigned to this bug who can get fiddlin?
There isn't anyone available to look at this right now, but we want to get to it eventually. Any additional information we can gather in the meantime (including the question at the end of my previous comment) could help explain the issue here and thus make it easier to plan around.
To be honest, the current import functionality is primarily focused on simple cases where people are getting started and importing for the first time, so its design struggles with more complex scenarios. We will probably end up having to rethink it in a broader sense before too long in order to make it more usable.
Our usual expectation is that import is something people use only for a short period when they are getting started with Terraform, but from the comments here I get the sense that some or all of you are using it in a more ongoing/routine manner. Is that right? If so, it'd be great to hear what you're using it for since that will help inform future changes to make it more generally-usable.
@apparentlymart - I can't answer your question personally as the "workaround" didn't work for me...
I won't say I'm a terraform expert just yet - I've been using "import" to migrate legacy deploys of an app we develop into terraform control so that future updates can be automated with terraform.
Would you suggest a different/faster method for pulling existing resources into state so that terraform does not try to destroy/recreate them?
I've not personally run into this since I've only used import in pretty simple cases, but when faced with this problem I would probably try to work around it like this:

- Import the resource into a temporary placeholder resource in the root module, where the provider config is static.
- Use terraform state mv to move the state for the imported resource over into the target module.
- Run terraform plan to make sure things have settled.

I'm not under any illusion that the above is a great workaround (a rough sketch of it follows below), but I think it would work, and it's what I'd try when faced with this problem right now.

If the root module's config is explicitly using a different region, a variant of the above would be to temporarily create a child module that exists _only_ to receive imports, with a static provider config, and import into that before moving into the final location.

To be completely honest, I'm not sure why the workaround of temporarily adding an alias works, so I would not have thought to try that. If it _is_ working, I expect it's due to some edge case and thus not something I'd expect to work reliably.
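For concreteness, a rough sketch of the import-into-root-then-move sequence described above, using the example config from the top of this issue (the placeholder resource name and the import ID form are assumptions; adjust them to your provider and version):

# temporarily added to ./main.tf, alongside a static provider "aws" block
resource "aws_cloudwatch_log_group" "rds_os_import" {
  name              = "RDSOSMetrics"
  retention_in_days = 30
}

$ terraform import aws_cloudwatch_log_group.rds_os_import RDSOSMetrics
$ terraform state mv aws_cloudwatch_log_group.rds_os_import module.module_us_west_1.aws_cloudwatch_log_group.rds_os
$ terraform plan
# then remove the temporary resource block from ./main.tf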
@iancward's alias workaround worked for me.
the alias workaround is not working for me
We're using dynamic configuration of modules to be able to store our secrets (postgresql password, datadog API keys...) in Vault and pull them to be used by terraform when importing, planning and applying. Hinting that we should just give everyone passwords for everything and risk them being committed to source control doesn't look like sound advice.
Furthermore, this happens any time we're importing anything, even from another provider. So if I want to start using Terraform to provision a new provider, I have to go to my other providers and replace all the dynamic references to static ones again.
I did exactly what @apparentlymart suggested here. What happened is that the terraform plan completely plowed over my carefully constructed terraform state. I think that it's actually only terraform refresh that's causing the state to be changed.

In the root module, I have an aws provider that is unaliased. It seems that in my modules where I use different providers, associated with different aws accounts, the resources in there got replaced with resources associated with that aws provider in the root module.

EDIT: It was only terraform refresh that destroyed state. terraform plan works perfectly fine. Aliasing the unaliased provider after doing all of the imports and moves makes terraform refresh not destroy state.
I have a slightly different concern about this problem, which has gotten worse with the 0.10 changes to how imports work. In order for the workaround of importing to the root module, then doing terraform state mv, to work, you also need to create temporary resource stubs in the code. This has practical implications that make the import process truly burdensome.

My understanding is that the requirement to stub out the imported resources was imposed to prevent people from then immediately and accidentally destroying those imported resources by running terraform apply. But with the new default of requiring interactive approval before applying, this is surely a much smaller concern.
As far as the provider config, I'm also willing to specify all of that on the command line. So it'd be great if we had the option to ignore the resource and provider config in the tf code, and just specify the provider args on the command line, and give a path we wish to import to.
This is becoming a bigger problem for us as we try to get more people in our organization to use Terraform. The lack of ability to import into any but the most explicitly specified code is a real block to adoption.
this works well for me on v0.10.7 with the Google provider.
This is really becoming a major issue for adopting terraform for our current app. It is composed of multiple modules (vpc, dns, auth, api, cms, portal, etc) that have dependencies upon one another. This is very easily definable using a root module that passes output vars of one module to the input vars of another, but then we lose the ability to import existing infrastructure (which is a common scenario for some of our legacy apps). The alternative is to write and maintain a ton of scripts that do the same thing while keeping each module independent.
The response from the terraform dev team isn't giving us a lot of faith in the platform either; it's been 10 months since this issue was opened and we've seen very little actual focus on it. We're currently investigating alternatives to terraform as a result.
I'd love to keep using terraform, but as the weeks go by I'm seeing many such pain points - and it's getting harder to convince the rest of the team to keep moving forward.
@apparentlymart - it's been 3 months since we've had any update from the team on this issue, other than saying "it's fiddly to fix". Some bugs are fiddly, I get it. But these are two of the features that drew us to terraform...
1) Modular definitions
2) The ability to import existing infrastructure into TF management.
Let's get this fixed, or honestly, we're moving on.
The suggested solution with aliases does not solve the problem for me. It fixes one problem and adds another: terraform keeps asking for provider.aws.region each time I run import / plan / apply / destroy.
$ terraform version
Terraform v0.11.2
+ provider.aws v1.7.1
$ tree
.
├── main.tf
└── modules
└── environment
└── env.tf
$ cat main.tf
module "uswest" {
source = "modules/environment"
sg_name = "test_sg_uswest"
region = "us-west-2" # Oregon
}
module "useast" {
source = "modules/environment"
sg_name = "test_sg_useast"
region = "us-east-1" # N. Virginia
}
$ cat modules/environment/env.tf
variable "region" {}
variable "sg_name" {}
provider "aws" {
alias = "${var.region}"
region = "${var.region}"
}
resource "aws_security_group" "sg" {
name = "${var.sg_name}"
$ terraform plan
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.
Default: us-east-1
Enter a value:
Is there any sort of ETA on this at all?
My hacky workaround for this (that probably won't work in more complex situations) is just to fill in the ${var.region} references in my module with the actual string value of the region while I'm importing a resource. After the import I put the ${var.region} reference back in and can verify via terraform plan that my resource has successfully been imported.

This works for me because my main.tf files only refer to one region. If you had a main.tf with the same module creating resources in multiple regions, this workaround probably wouldn't work for you.
On migration to a new terraform version, our state got a bit corrupted (the whole dependency chain of kubernetes got removed from state). Now I am facing this issue, with an inability to import the world state back due to a dynamically configured backend.

I have a kubernetes provider and a cloudflare provider configured from a google_container_cluster and a google_kms_secret data source, respectively. I cannot import any Google Cloud resources due to that, but if I comment out those two providers' definitions, it'll prompt me for the Cloudflare token on the command line and import correctly.
A workaround that I'm doing: when doing imports, I temporarily comment out any modules that have this problem, then uncomment them after the import is done. That enables the import to take place without issues, it seems.
Importing works using the new providers attribute inside module blocks introduced in version 0.11 (tested with version 0.11.5):
# ./main.tf
provider "aws" {
  region = "eu-central-1"
  alias  = "central-1"
}

module "vpc" {
  source = "./vpc"

  # This is the new attribute:
  providers = {
    aws = "aws.central-1"
  }
}

# ./vpc/main.tf
provider "aws" {} # use an empty block until https://github.com/hashicorp/terraform/issues/16835 is fixed

resource "aws_vpc" "default" {
  cidr_block = "172.31.0.0/16"
}
The provider inheritance is now obvious and available in the Terraform state even if the config for a module is deleted.

Note the proposal for separating provider version constraints from provider configuration (#16835).
:100: @Dominik-K for pointing this out. Every time I've tried using import it has involved living nightmares of workarounds, and now it seems to work smoothly and correctly.
This is still an issue. I tried overriding like @Dominik-K mentioned and it randomly works for some resources but not others. I tested with 0.11.3 and 0.11.7.
We are also running into this problem with 0.11.7. Can the import command be modified to allow variables to be passed in for these situations? Moving forward, many people will not want things like client secrets exposed in source control.
@stefanthorpe Does it work then if you use an empty provider "aws" {} block inside the module? Using it is still recommended.

@herrkunstler @stefanthorpe Can you show a minimum working example (MWE) where you still get this import error?

@herrkunstler Which client secrets do you mean? You don't need to set the access_key & secret_key in the provider. There are other options to provide AWS secrets. You can also access them from encrypted keystores easily with aws-vault, for example.
@Dominik-K We are using Azure. We are setting the client_secret field in the azurerm block dynamically. I don't think a vault solution helps us here; this is the way Terraform is authenticating to Azure.
@apparentlymart @Dominik-K - not all providers are set up to allow secrets configuration via env vars or aws-vault etc.
I can verify that dynamic configuration is the issue: when I manually set providers at the lowest level and leave them empty in modules, it seems to allow importing without an issue so far. However, this means checking in very sensitive secrets for the services we utilize.
This issue has been a huge pain point for us, and as we're trialing TFE it's going to become an even bigger priority that it be resolved.
We're going on 15 months since opening this issue, and it's definitely playing a role in whether or not to move forward with Hashicorp as our provider. We're trialing other services with organizations that seem to have reasonable turnarounds on bug fixes as a result.
Can we get any kind of escalation on this asap? We've got about 20 days left in our trial. If we see some movement on this before then, it's going to be a pretty big plus in choosing TFE over other solutions. The inverse also applies.
Pretty much have decided to drop any hope in Terraform until it's a 1.0 product due to this. We're spending x10 more time trying to 'automate' with Terraform while getting stuck with incredibly weird bugs that aren't getting proper attention, versus just doing things by hand or other tools. Quite disappointed, but oh well.
Still having this issue with Terraform v0.11.7
The especially frustrating part is it even happens with completely unrelated dynamic providers. For example, a dynamic cloudflare provider blocking an otherwise successful import of an SQS queue.
Have had the same issue.

My workaround, since the resources that I had to import were not included in the module which caused the error, is to temporarily comment out that module just to finish the import.
Still broken for me in 0.11.7. I'm in the process of bringing our environment into terraform, so this feature is super useful.

The workaround is to comment out the whole configuration, import in the root, and then use terraform state mv.
Just encountered the same issue. I have cloudflare and datadog keys being pulled from google kms. As a result, I'm unable to import any existing GCP records unless I comment out every cloudflare and datadog configuration, import, and then uncomment everything again :-(
another (bad) workaround: hardcode the credentials while running terraform
One way to implement this hardcoding hack is to dynamically write an override file with the credentials and then delete it after terraform is done. That worked for me, though it's obviously not ideal.
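A minimal sketch of that override-file approach, assuming an AWS provider and a hard-coded region (the file name and value are placeholders; Terraform merges any *_override.tf file over the existing configuration in the same directory):

# provider_override.tf -- written just before the import, deleted right after
provider "aws" {
  region = "eu-west-1"
}

If the problematic provider block lives inside a module, the override file has to be dropped into that module's directory.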
Workaround: I did a find/replace on my .terraform directory to replace every instance of ${var.aws_profile} with my hard-coded AWS profile.

As a side note, terraform import seems to _always_ use the AWS default profile, even if the provider is configured otherwise, and even if AWS_PROFILE is set otherwise.
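A rough sketch of that find/replace, assuming GNU sed and a hard-coded profile named myprofile (both hypothetical):

# replace the interpolation with a literal value in the cached module copies
grep -rl 'var.aws_profile' .terraform/ | xargs sed -i 's/${var.aws_profile}/myprofile/g'
# afterwards, delete .terraform and re-run terraform init to get the unmodified sources back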
I think I've been encountering this problem with the Google provider as well. There are a few resource types that we were able to import, as they require specifying the project as part of the command, but many others are stuck with this problem.
statefile nonsense like this is the single reason I'm not picking up Terraform, we simply can't have a single file be the sole failure point for managing anything we care about.
This ticket is over a year old. What's the current way to pull in existing resources?
@rasputnik the current way is trivial: do not use variables in the provider configuration. Instead of

provider "aws" {
  alias  = "${var.region}"
  region = "${var.region}"
}

use

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}
This is module code, so I don't really have the option to hardcode it.
@sstarcher trivial is a way of putting it. What if those variables contain secrets you can't check into SCM? What if those variables are not knowable until runtime in some different environment? Or pretty much any legitimate usage of the concept of variables.
So yeah, it's trivial in the sense that you can comment out a bunch of stuff and inline the variable contents in order to be able to import state, if you can even do that; otherwise it's not trivial at all.
@vascop Ahh, I don't use any providers that don't allow me to store the credentials outside of Terraform. For our AWS provider, credentials are stored outside and should never be part of terraform; AWS natively pulls them from ~/.aws.

I can't speak to other providers and what they support. If they support storing the credentials in a file or in an environment variable, that would be the preferred method of usage.
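As an illustration of that approach, a minimal sketch (the profile name "example" is hypothetical): the provider block carries no secrets at all, and the AWS SDK resolves credentials from the environment or the shared credentials file:

provider "aws" {
  region = "us-west-2"
  # no access_key / secret_key here; credentials come from
  # AWS_PROFILE / AWS_ACCESS_KEY_ID environment variables or ~/.aws/credentials
}

$ AWS_PROFILE=example terraform plan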
@raphink typically you don't want to define providers in modules; that information should be defined in the root terraform configuration and not in the module.
@sstarcher See my initial comment from Oct 2017 regarding our specific scenario. We pull several secrets from the Vault that we use in our AWS terraform configs.
Easy examples are things like defining user-data for launch templates, when user data requires some secret. Or you need a secret to be injected into a lambda function as an environment variable. Or an LDAP administrator password that you want to set for AWS Directory service. These are just 3 example use cases of TF configuration requiring passwords, and our solution for not putting them in source control was using vault to inject them dynamically into sub-modules like:
We have a vault module which defines (from a secret named "terraform"):
output "terraform_secrets" {
value = "${data.vault_generic_secret.terraform.data}"
}
This then outputs variables that can be injected into other modules as such:
module "mymodule" {
mypassword= ${module.vault.terraform_secrets["MY_PASSWORD"]}
}
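For completeness, the data source behind that output is presumably something along these lines (the Vault path here is a hypothetical example):

data "vault_generic_secret" "terraform" {
  path = "secret/terraform"
}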
This approach works perfectly except when importing state. At which point we need to comment out all the dynamic secret definitions because of this bug. In-lining them is obviously not an option since it assumes that people have access to them in the first place (which is not the case) as well as it being super gross and risking committing secrets if you forget to revert your manual changes when importing. This is a problem also because it prevents us from having a build pipeline that supports importing state, where for terraform plan/apply we can and do run them in our CI instead of in dev's machines.
I see we treat secret data very differently; we do not store the real passwords in terraform, as they will end up in the statefile and we consider that very insecure. We create the resources with a temporary password, change the password out of band, and tell terraform to ignore any password changes.
State files will be as insecure as the terraform backend you configure for it as well as the permissions needed to access it, right?
For our setup, with CI doing all the work to provision TF, only the CI system needs to be able to access and modify state. I think this is consistent with most injection of secrets in build pipelines, but correct me if you think there might be some aspect of this that makes it more insecure for TF specifically; we might've missed something.
Yes, but you lose the access control and fine-grained handling that you get via something like Vault. If a user has access to just view the statefile, they get access to any passwords stored inside, with no way to break those apart.
Confirmed I'm still seeing this with Terraform v0.11.13. Provider setup e.g.:

provider "aws" {
  region = "${var.region}"
}
The comment with the providers attribute approach works for me as a workaround, and you can variable-ize it. The variables just have to be specified in the provider block in your local project, and then you link that provider with the providers attribute in the module block.
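A minimal sketch of that combination (all names hypothetical): the variable is interpolated only in the root-level provider block, and the module receives the already-configured provider through the providers map:

# root main.tf
variable "region" {}

provider "aws" {
  alias  = "primary"
  region = "${var.region}"
}

module "vpc" {
  source = "./vpc"

  providers = {
    aws = "aws.primary"
  }
}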
Just another +1 the day after this issue turned 28 months old. Still happening on 11.14.
+1
Just encountered it as well on v0.11.14; I was able to import by temporarily hardcoding the region.
--- a/providers.tf
+++ b/providers.tf
@@ -1,5 +1,5 @@
 provider "aws" {
-  region = "${var.region}"
+  region = "eu-west-1"
   version = "~> 1.60.0"
 }
Simply adding a .auto.tfvars file with a value for the region worked for me.
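For example (the file name and value are placeholders; any *.auto.tfvars file in the working directory is loaded automatically):

# region.auto.tfvars
region = "eu-west-1"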
Still present v0.12.12
I'd like to add to this: the fortios provider contains the API token/key as part of the provider configuration, so hardcoding the values isn't an option unless we want to leak our secrets:
https://www.terraform.io/docs/providers/fortios/index.html
I have a project with several different providers, and importing resources is not possible for me. I don't even know how to apply your workarounds to my case.
It's an issue with the github, cloudflare, and postgresql providers. All of these providers live in modules, so hard-coding the values is not possible. Thanks
@KursLabIgor Lately, for my modules with internal providers, I've been taking the extreme step of temporarily modifying the provider source in .terraform/modules, then doing the imports. Afterward I destroy all of .terraform and re-init for some semblance of sanity.
@ashafer01 could you elaborate a bit?
I opened .terraform/modules and here is, for example, a file with a provider, github.tf:

provider "github" {
  #token = "${var.github_token}"
  organization = "${var.repo_owner}"
}

What can I change here?
@KursLabIgor you'll have to figure out what those variables need to correctly expand to for each individual import, or otherwise configure the provider to do what's needed. You'll probably have to trace through a bunch of code; it's tedious.

In my case I'm using the Kubernetes provider internal to a module, so when I need to do imports I comment out all of the variable provider parameters and set load_config_file = true, then set up my kubeconfig to point to the cluster I'm working with by default.
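Roughly what that looks like during the import (a sketch; the commented-out attributes are hypothetical examples of the variable-driven parameters):

provider "kubernetes" {
  # host  = "${var.cluster_endpoint}"   # temporarily commented out for the import
  # token = "${var.cluster_token}"
  load_config_file = true               # fall back to the local kubeconfig instead
}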
Best of luck! Hopefully one day this issue will be resolved.
@ashafer01 Ah, i got your idea, thanks!
just wasted a lot of time on this, thought it was me...
+1
@apparentlymart Can we get a high-vis warning about this issue in the import docs? The language used there seems to contradict a lot of what's been said here, suggesting it's easy to import your entire infrastructure and that the import feature provides some kind of stability assurance.
I just spent a whole morning on this. Please fix this bug.
Encountered in v0.12.23. Please fix!
Encountered in v0.12.24
Seems like this is still a big issue for lots of people - come on GitHub team, get this prioritized!
3 years and counting.... note to self if you find this issue again in 1 year don't worry its not your fault, add +1 to yearly counter
So, https://www.terraform.io/docs/commands/import.html#provider-configuration says that provider configuration during import can depend on variables (but not data sources), yet this _does not work_ if the provider is in a module.
Just ran into the same problem. I am using a module that has outputs. It runs the plan and the apply perfectly, but it does not allow me to import or remove state.
This topic and the info in it helped me solve my import issue:
https://medium.com/@lhk.igor/this-is-how-i-solved-problem-of-importing-resources-in-terraform-with-dynamic-provider-a2f255a9a303
I faced this issue as well, and after reading this thread and all the workarounds, I decided to first just try commenting out basically all of my other terraform config before importing. The import worked perfectly, and after uncommenting everything again, I was back to a clean plan.
Is there a way to manually create the state for the resource if it's not even related to the providers?
I have the provider dynamically set in the module as well, but when trying to import an existing kubernetes namespace, it still failed on the aws provider... which doesn't make any sense, since it's not even related...
This is an issue even when it is not the provider that is dynamically built: if you have a module that contains dynamic values generated in an upstream module and passed through as a variable to a downstream module, it fails to import because the downstream module never receives the upstream data correctly.
This makes it very difficult to import the resource.
Similar to this comment: https://github.com/hashicorp/terraform/issues/13018#issuecomment-632206868
We ran into this issue as well. Any time we had a module with a provider block nested in it, and that provider block used any dynamic data (e.g., set region to a variable that was passed in), import would no longer work.
I just updated Terragrunt with a new aws-provider-patch command, which is an experimental, temporary workaround for this issue. You can run a command like this:

terragrunt aws-provider-patch --terragrunt-override-attr region=eu-west-1
And Terragrunt will:

- Run terraform init to download the code for all your modules into .terraform/modules.
- Scan .terraform/modules, find AWS provider blocks, and hard-code the region parameter to eu-west-1 for each one. You can of course set other params to override using the --terragrunt-override-attr option.

Once you do this, you'll _hopefully_ be able to run import on that module. After that, you can delete the modified .terraform/modules and go back to normal.
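Concretely, inside the downloaded module copies, a provider block along these lines (a sketch, not the exact Terragrunt output):

provider "aws" {
  region = "${var.region}"
}

would end up rewritten with the literal value:

provider "aws" {
  region = "eu-west-1"
}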
Note that you should be able to use this even if you're not a Terragrunt user. Just add an empty terragrunt.hcl to any Terraform module you're using (e.g., run touch terragrunt.hcl), and run the same terragrunt aws-provider-patch command as above.
This is obviously an ugly, hacky solution, but as this bug has been open for ~3 years now, I figured an ugly solution is better than none. The fix is available in Terragrunt v0.23.40 and above. Give it a try and let me know if it works! PRs to improve it further are also welcome.