Terraform v0.13.2
+ provider registry.terraform.io/hashicorp/azurerm v2.26.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.1
Terraform should import the existing resource into the state file.
The state file is not updated; instead, an error message is displayed.
> terraform import module.resource_group_1.azurerm_resource_group.resource_group /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resource-group-1
module.resource_group_1.azurerm_resource_group.resource_group: Importing from ID "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resource-group-1"...
module.resource_group_1.azurerm_resource_group.resource_group: Import prepared!
Prepared azurerm_resource_group for import
module.resource_group_1.azurerm_resource_group.resource_group: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resource-group-1]
Error: Invalid provider configuration
on resource_group_1/resource_group_1.tf line 17:
17: provider kubernetes {
The configuration for
module.resource_group_1.provider["registry.terraform.io/hashicorp/kubernetes"].kubernetes_cluster
depends on values that cannot be determined until apply.
terraform init
terraform import module.resource_group_1.azurerm_resource_group.resource_group /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resource-group-1
or
terraform import module.resource_group_1.null_resource.test_resource test
terraform plan, on the other hand, works.
This was also an issue in terraform v0.13.0.
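For context, the shape of the configuration that triggers this is a provider block inside the module whose arguments come from a resource that has not been created yet. A simplified sketch of that pattern (the resource and attribute names below are illustrative, not the exact contents of the reproduction repository):

# resource_group_1/resource_group_1.tf (illustrative sketch only)
resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {
  # ... cluster configuration ...
}

# The provider is configured from attributes of a resource in the same
# module; during `terraform import` these values are unknown, which is
# what produces the "Invalid provider configuration" error above.
provider "kubernetes" {
  alias                  = "kubernetes_cluster"
  host                   = azurerm_kubernetes_cluster.kubernetes_cluster.kube_config[0].host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.kubernetes_cluster.kube_config[0].cluster_ca_certificate)
}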
I think this is similar to what I'm experiencing in https://github.com/hashicorp/terraform/issues/25574
Thanks for reporting this, @demonemia, and thanks for the reproduction repository. I'm able to reproduce the issue with Terraform 0.13.3.
I'm not sure what the cause is, but there are several other similar import bugs due to the import process not fully evaluating the configuration. Possibly related: #26258.
Still happening in 0.13.4 with AWS when trying to import IAM roles and Route 53 zones.
@jdtommy Did it work?
Is there some workaround?
I had a similar issue, but with the kubernetes provider:
terraform import module.XXXXXXX.kubernetes_namespace.istio_system istio-system
Error: Invalid provider configuration
on ../../modules/XXXXXXX/main.tf line 22:
22: provider "kubernetes" {
The configuration for
module.XXXXXXXXXX.provider["registry.terraform.io/hashicorp/kubernetes"].YYYYYYY
depends on values that cannot be determined until apply.
And my workaround was changing the provider from
provider "kubernetes" {
load_config_file = false
version = " ~> 1.11.0"
host = "https://${module.XXXXXXXXXX.endpoint}"
cluster_ca_certificate = base64decode(module.XXXXXXXX.cluster_ca_certificate)
token = module.XXXXXXXXXXX.access_token
}
to
provider "kubernetes" {
load_config_file = true
version = " ~> 1.11.0"
}
The import worked, and I switched the code back afterwards.
@fernandoiury, the same workaround did not work for me.
Just had the same issue with kubernetes and helm providers. My workaround was to temporarily comment the entire "offending" provider blocks, do the import then uncomment them back.
Normal terraform operations were possible after that.
> Just had the same issue with kubernetes and helm providers. My workaround was to temporarily comment the entire "offending" provider blocks, do the import then uncomment them back. [...]

This seems very suspicious to me. Is it another bug?

> This seems very suspicious to me. Is it another bug?

No idea, but I did get the errors:
The configuration for provider["registry.terraform.io/hashicorp/kubernetes"]
depends on values that cannot be determined until apply.
...
The configuration for provider["registry.terraform.io/hashicorp/helm"]
depends on values that cannot be determined until apply.
Both providers use interpolation with the base64decode function for the certificates.
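The pattern in question looks roughly like this (the module name and its outputs are placeholders, not my actual code): both providers are configured from values that only exist after the cluster is created, so they are unknown at import time.

provider "kubernetes" {
  host                   = "https://${module.cluster.endpoint}"
  cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
  token                  = module.cluster.access_token
}

provider "helm" {
  kubernetes {
    host                   = "https://${module.cluster.endpoint}"
    cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
    token                  = module.cluster.access_token
  }
}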
Same issue here. We have a module foo with a nested provider block, and that provider block depends on a resource (note: the code snippet below is simplified):
provider "aws" {
alias = "logs"
assume_role {
role_arn = "arn:aws:iam::${aws_organizations_account.child_accounts["logs"].id}:role/foo"
}
}
We use module foo in some code, along with several other modules, such as bar:
module "foo" {
source = "../foo"
# ... params ...
}
module "bar" {
source = "../bar"
# ... params ...
}
plan and apply work fine. However, when I try to run import on a resource in bar (i.e., something totally unrelated to foo) with Terraform 0.13.4, I get this error:
$ terraform import 'module.bar.aws_iam_user.xxx' 'xxx'
module.bar.aws_iam_user.xxx: Importing from ID "xxx"...
module.bar.aws_iam_user.xxx: Import prepared!
Prepared aws_iam_user for import
module.bar.aws_iam_user.xxx: Refreshing state... [id=xxx]
Error: Invalid provider configuration
on ../../modules/foo/main.tf line 59:
59: provider "aws" {
The configuration for
module.foo.provider["registry.terraform.io/hashicorp/aws"].logs
depends on values that cannot be determined until apply.
Note that import worked fine with Terraform 0.12.29. Also, commenting out the assume_role block makes import work.
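Concretely, the temporary edit looks roughly like this (restore the block once the import is done):

provider "aws" {
  alias = "logs"

  # Temporarily commented out so the provider configuration no longer
  # depends on apply-time values while running `terraform import`:
  # assume_role {
  #   role_arn = "arn:aws:iam::${aws_organizations_account.child_accounts["logs"].id}:role/foo"
  # }
}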
BTW, this looks fairly similar to https://github.com/hashicorp/terraform/issues/13018, although it seems 0.13 has made that issue even worse.
Note that, as an ugly/hacky workaround for #13018, we added an aws-provider-patch command to Terragrunt, as described here. I'm now updating that command to support nested blocks in this PR as a workaround for the new issue described here.
OK, the updated terragrunt aws-provider-patch workaround is now available in v0.25.4.
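For reference, usage looks roughly like the following; double-check terragrunt aws-provider-patch --help, and treat the attribute path and override value here as examples only:

# Override the apply-time attribute so the nested provider block can be
# evaluated, then run the import through Terragrunt as usual.
terragrunt aws-provider-patch --terragrunt-override-attr assume_role.role_arn=""
terragrunt import 'module.bar.aws_iam_user.xxx' 'xxx'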
Terraform 0.13.5 seems to have fixed this for plan & apply, but not for import.
> I had a similar issue, but with kubernetes provider [...] the import worked and I switched the code back after.
This workaround worked for me; I was able to import my kubernetes clusters.
Terraform v0.13.5
Does anyone have a workaround for AWS?
In the end, the solution I found best was to separate the Terraform state across several steps (admittedly harder, but it solves many problems). I will try to write a Medium article with the full steps I've come up with for a long-term solution.
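As a rough illustration of the idea (a sketch of the general technique only; the module layout, backend, and output names are placeholders): keep the cluster in one root configuration, and configure the kubernetes provider in a second root configuration from the first one's outputs, e.g. via terraform_remote_state, so its values are always known at plan/import time:

# Stage 2 root configuration: reads the cluster's outputs from stage 1's state.
data "terraform_remote_state" "cluster" {
  backend = "local" # or s3 / azurerm / gcs, etc.

  config = {
    path = "../cluster/terraform.tfstate" # illustrative path
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.cluster_ca_certificate)
  token                  = data.terraform_remote_state.cluster.outputs.access_token
}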
I was able to find a workaround for my scenario in AWS; hopefully it will help someone else until this is resolved.
I have a module aws-account that provisions new accounts with aws_organizations_account and then uses an aws provider with string interpolation for the assume role, taking the account id from aws_organizations_account. At a high level, the module/aws-account main.tf + providers.tf contents look something like this:
resource "aws_organizations_account" "this" {
# resource content...
}
provider "aws" {
alias = "this"
region = "us-west-2"
assume_role {
role_arn = "arn:aws:iam::${aws_organizations_account.this.id}:role/OrganizationAccountAccessRole"
}
}
# bootstrap some account resources with `aws.this` provider.
I can't really comment out this provider or the assume role without affecting other consumers of the module, so I did the following.
# Need to import into 'module.aws_account.aws_organizations_account.this'
module "aws_account" {
  source = "module/aws-account"
  # ...
}

Temporarily add a bare aws_organizations_account resource at the root and comment out the module:

resource "aws_organizations_account" "temp" {}

# module "aws_account" {
#   source = "module/aws-account"
#   # ...
# }

Then import into the temporary resource and move it to the module's address in state:

terraform import aws_organizations_account.temp xxxxxxxxxxxx
terraform state mv aws_organizations_account.temp module.aws_account.aws_organizations_account.this
Hope it helps.