Terraform: "Error: Provider configuration not present" when aliased provider is used

Created on 23 May 2019 · 26 comments · Source: hashicorp/terraform

Terraform Version

Terraform v0.12.0

With terraform version 0.11.10, the files below work as expected.

Terraform Configuration Files

Initially the providers section in the config below was absent, but it was added to try and troubleshoot the issue. The results are the same whether it is absent or present.

provider "aws" {
  region = "us-west-2"
  alias  = "env"
}

provider "aws" {
  region = "us-west-2"
}

provider "google" {
  credentials = "${file("gcp.json")}"
  region      = "us-central1"
  project     = "orv5-236500"
}

module "network" {
  source = "../../../../terraform-modules/gcp/network"

  cluster = "testpad8"
  region  = "us-central1"
  domain  = "scratchpad.objectrocket.cloud"

  providers = {
    aws.env = "aws.env"
    google  = "google"
    aws     = "aws"
  }
}

The network module is the first part we're porting and testing with TF12. The locals/inputs are in different files in the module.

# DNS Setup

# Parent Zone in Route53 to create subdomains in
data "aws_route53_zone" "main" {
  provider = "aws.env"
  name     = "${var.domain}"
}

# DNS Zone in GCP for the cluster
resource "google_dns_managed_zone" "cluster" {
  name        = "${local.cluster_domain}"
  dns_name    = "${local.cluster_domain}"
  description = "Domain for CLUSTER: ${var.cluster}"
  visibility  = "public"

  labels = "${merge(var.extra_labels,
    map("Name", "${local.cluster_domain}"),
  map("launchpad_cluster", "${var.cluster}"))}"
}

# DNS delegation records in the main environment domain
resource "aws_route53_record" "environment-ns" {
  provider = "aws.env"
  zone_id  = "${data.aws_route53_zone.main.zone_id}"
  name     = "${local.cluster_domain}"
  type     = "NS"
  ttl      = "30"

  records = [
    "${google_dns_managed_zone.cluster.name_servers.*}",
  ]
}

# Create a zone for mongo product to publish records to
resource "google_dns_managed_zone" "mongo" {
  name        = "m.${local.cluster_domain}"
  dns_name    = "m.${local.cluster_domain}"
  description = "Domain for mongo records in CLUSTER: ${var.cluster}"
  visibility  = "public"

  labels = "${merge(var.extra_labels,
    map("Name", "${local.cluster_domain}"),
  map("launchpad_cluster", "${var.cluster}"))}"
}

# Create a delegation for the m.$var.cluster.$environment domain
resource "google_dns_record_set" "environment-mongo-ns" {
  managed_zone = "${google_dns_managed_zone.cluster.name}"
  name         = "m.${google_dns_managed_zone.cluster.dns_name}"
  type         = "NS"
  ttl          = "30"

  rrdatas = [
    "${google_dns_managed_zone.mongo.name_servers.*}",
  ]
}

# Create the wildcard in the mongo domain
resource "google_dns_record_set" "mongo-wildcard" {
  managed_zone = "${google_dns_managed_zone.mongo.name}"
  name         = "*"
  type         = "CNAME"
  ttl          = "30"

  rrdatas = ["${format("ingress.%s", local.cluster_domain)}"]
}

Debug Output

https://gist.github.com/ephur/e8ce912655adcb1909c9b565efe2b84a

Crash Output

N/A. The error message is:

Error: Provider configuration not present

To work with module.network.aws_route53_record.environment-ns its original
provider configuration at module.network.provider.aws.env is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy module.network.aws_route53_record.environment-ns,
after which you can remove the provider configuration again.

Expected Behavior

Plan should have executed, adding a zone in GCP, and NS records in AWS with delegations to the subdomain created in GCP.

Actual Behavior

The error message refers to state that does not exist; no prior state existed.

Steps to Reproduce

This happens when running

terraform init 
terraform plan

Additional Context

We normally run terraform via a wrapper, but in this case the wrapper is not used; this is just a manual test calling the module to ensure it works as expected.

References

I found similar issues, but in each of those cases it looked like the provider configuration was being done inside the module. In our case the provider configuration is done in the root configuration and passed into the module.

Labels: bug, config, v0.12, waiting for reproduction


All 26 comments

The docs mention that for provider configurations to be passed explicitly, the module must explicitly declare that it requires them:

In the providers map, the keys are provider names as expected by the child module, while the values are the names of corresponding configurations in the current module. The subdirectory ./tunnel must then contain proxy configuration blocks like the following, to declare that it requires configurations to be passed with these names from the providers block in the parent's module block.

Basically, declare a bare provider in your module, and you should be fine. This fixes the issue for us.

It would be good if the upgrade guide mentioned that module providers now need to be explicitly declared, and if this were added to the checklist, as the paragraph above is fairly easy to miss.
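In concrete terms, here is a sketch (based on the configuration in the original report, using its aws.env alias; the file name is illustrative) of the "proxy configuration blocks" the child module would need so the caller's providers map can fill them in:

```hcl
# Inside the child module, e.g. a providers.tf file.
# Empty "proxy" provider blocks declare which configurations this
# module expects to receive from its caller's providers map;
# they carry no settings of their own.

provider "aws" {
  alias = "env"
}

provider "aws" {}

provider "google" {}
```

With these in place, `providers = { aws.env = "aws.env", aws = "aws", google = "google" }` in the calling module block has declared counterparts to bind to.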

Basically, declare a bare provider in your module, and you should be fine.

How do you mean?

There are further explanations in this Issue, however this comment in particular will be useful for you @Jakexx360
https://github.com/hashicorp/terraform/issues/21472#issuecomment-497508239

Basically, declare a bare provider in your module, and you should be fine. This fixes the issue for us.

@ScriptMyJob For us, cutting the extra aliased provider block and pasting it inside the module that uses it, solved the problem.

@cybojenix I believe that @dimisjim meant you. ^^

@dimisjim can you share an example?

We're passing the provider down the module already:

# main.tf
provider "aws" {
  alias  = "dev-us-west-2"
  region = "us-west-2"

  version             = "~> 2.26"
  profile             = "dev"
  allowed_account_ids = ["xxxxx"]
}

module "iam" {
  source = "./iam"

  providers = {
    aws = aws.dev-us-west-2
  }
}

@scalp42

main.tf

variable "region" {
  default = "eu-west-1"
}

provider "aws" {
  region  = var.region
  version = "~> 2.19.0"
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

inside a module that needs to use a provider in a different region:

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

In 0.11, there was no need for specifying the alias provider twice. Apparently, the above was the only way I could get it to work in 0.12.

I have the same issue when I use the vSphere provider.

I came across a similar issue, and adding a proxy provider block in my module solved it for me.

Parent module config:

provider "aws" {
  alias = "test"
}

resource ... ... {
  .....
  ....
  provider = aws.test
}

code calling the parent module:

provider "aws" {
  alias  = "test"
  region = var.region

  ......
  .....
}

module ....... {
  source =
  providers = {
    aws.test = aws.test
  }
}

@dimisjim, I used this same example, but the default provider is overriding the provider declared in the module. Can you please be of help in any way? Thank you in advance.

@scalp42, have you gotten this working?

I am using the github provider.

@fairhaven Yes, it works when redeclaring the provider downstream.

@fairhaven It will override it by default. You have to explicitly specify the aliased provider in your resource block* and/or use the --provider flag when you import a resource into that module that needs to be in that aliased region.

  • You should have an argument like this in the resource:
provider = aws.us-east-1
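As a sketch of that resource-level pinning (the resource type and names here are illustrative, not from this thread), the provider meta-argument selects the aliased configuration instead of the default:

```hcl
# Hypothetical resource pinned to the aliased us-east-1 provider
# (e.g. an ACM certificate that must live in us-east-1 for CloudFront).
resource "aws_acm_certificate" "cert" {
  provider          = aws.us-east-1 # aliased configuration, not the default
  domain_name       = "example.com"
  validation_method = "DNS"
}
```

Without the provider argument, the resource silently falls back to the default (unaliased) aws configuration.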

provider.tf

```
provider "github" {
  organization = "xyz"
}

provider "github" {
  alias        = "test"
  organization = "abc"
}

module "my_name" {
  source = "./test"
  providers = {
    github = github.test
  }
}
```

So I want the module to use the second provider, the one with the alias, and not the first one, which serves as the default provider, but the default provider always overrides the aliased one. Thank you for your prompt response.

@dimisjim

@fairhaven Yes, it works when redeclaring the provider downstream.

Can this be re-worked with the github provider? Thank you for your response.

@fairhaven

The provider block should be outside of the module definition, but inside the module source itself.
So in your case, the provider alias should be inside the test folder.

PS: Try to reduce the amount of comments you post in issues. It can be annoying as people receive emails all the time.

I fixed it like this:

main.tf

provider "aws" {
  region = "eu-central-1"
}
provider "aws" {
  alias = "useast"
  region = "us-east-1"
}
module "submodule" {
  source = "./submodule"
  providers = {
    aws = aws
    aws.useast = aws.useast
  }
}

and in the module main.tf

provider "aws" {}
provider "aws" {
  alias = "useast"
}

After that I could use a different provider in my aws resources like outside the module.

Same issue here with terraform 0.12.24. I have a main.tf with a provider and the same provider with an alias.

provider "oci" {
  version          = ">= 3.27.0"
  tenancy_ocid     = var.tenancy_ocid
  region           = var.region
}

provider "oci" {
  alias            = "home-region"
  version          = ">= 3.27.0"
  tenancy_ocid     = var.tenancy_ocid
  region           = var.region
}
I'm using modules to split the terraform script, and I'm passing a providers element in the module definition.
module "CreateCompartment" {
  source                  = "./module-compartment"
  providers               = {
    oci = oci.home-region
  }
  tenancy_ocid            = var.tenancy_ocid
  compartment_name        = var.compartment_name
  compartment_description = var.compartment_description
}
I've tested creating a provider in the source of the module:
variable "tenancy_ocid" {}
variable "compartment_description" {}
variable "compartment_name" {}

provider "oci" {
  alias = "home-region"
}

//Create a new compartment and not destroy as default with
//enable_delete = false
resource "oci_identity_compartment" "CreateCompartment" {
  compartment_id = var.tenancy_ocid
  description    = var.compartment_description
  name           = var.compartment_name
  enable_delete  = false
}
I've tested without a provider and the result is the same:
variable "tenancy_ocid" {}
variable "compartment_description" {}
variable "compartment_name" {}

//Create a new compartment and not destroy as default with
//enable_delete = false
resource "oci_identity_compartment" "CreateCompartment" {
  compartment_id = var.tenancy_ocid
  description    = var.compartment_description
  name           = var.compartment_name
  enable_delete  = false
}
Error: Provider configuration not present

To work with
module.CreateCompartment.oci_identity_compartment.CreateCompartment its
original provider configuration at provider.oci.home-region is required, but
it has been removed. This occurs when a provider configuration is removed
while objects created by that provider still exist in the state. Re-add the
provider configuration to destroy
module.CreateCompartment.oci_identity_compartment.CreateCompartment, after
which you can remove the provider configuration again.
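For what it's worth, here is a sketch based on the documented semantics of the providers map, not a confirmed fix for this report: since the module block above passes the parent's oci.home-region configuration as the child's default oci provider (`oci = oci.home-region`), the child module would only need a bare default proxy block, and its resources would use the default provider without any alias:

```hcl
# Child module (module-compartment): a bare proxy block for the default
# oci provider, which the caller fills in via providers = { oci = oci.home-region }.
provider "oci" {}

resource "oci_identity_compartment" "CreateCompartment" {
  compartment_id = var.tenancy_ocid
  description    = var.compartment_description
  name           = var.compartment_name
  enable_delete  = false
}
```

Declaring an aliased proxy block in the child (as attempted above) asks the caller for an `oci.home-region` entry in the providers map, which the caller does not supply, hence the mismatch.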

@oraclespainpresales Try this:

variable "tenancy_ocid" {}
variable "compartment_description" {}
variable "compartment_name" {}

//Create a new compartment and not destroy as default with
//enable_delete = false
resource "oci_identity_compartment" "CreateCompartment" {
  provider       = oci. # <------------- ADD PROVIDER REFERENCE HERE. !!!!!!!
  compartment_id = var.tenancy_ocid
  description    = var.compartment_description
  name           = var.compartment_name
  enable_delete  = false
}

I have not tested this method with the oci provider or this resource type, but this pattern works for Google provider resources.

I reproduced this on 0.13.0 beta 1 using the reproduction case scripted at https://github.com/danieldreier/terraform-issue-reproductions/tree/master/21416

Hi,
I am having a similar issue with terraform 0.12.26. I have tried all the solutions I can find, but none of them solve it. What is the status of this?

@danieldreier yes, I don't seem to be able to use an explicit provider on a module in 0.13 beta 2 either, but I'm unsure if it's me or Terraform.

Looking at your repo, you need to add a file into modules/examples/providers.tf

provider "aws" {
}

provider "aws" {
  alias = "us-west-2"
}

and then that example works fine (Terraform v0.13.0-beta2, provider registry.terraform.io/hashicorp/aws v2.67.0), as outlined in the docs: https://www.terraform.io/docs/configuration/modules.html#passing-providers-explicitly

Indeed, it does seem like @danieldreier's reproduction is missing a proxy configuration block as documented in Passing Providers Explicitly.

While improving the configuration language's modelling of required non-default provider configurations in child modules is something we'd like to do eventually, with Terraform's current design that is _supposed_ to fail, though the specific error message it returns leaves a lot to be desired because Terraform seems to be confusing it with the case where a provider configuration was _formerly_ present and has now been removed.

I'd be curious to see if there's a reproduction of this problem in situations where the proxy configuration block _is_ present. If not, perhaps we can resolve this bug for now by improving that error message to talk specifically about proxy configuration blocks, with the larger change to rework how the configuration language represents that situation left for a separate enhancement issue.

Upgrading from 0.11.3 to 0.11.14, I did not receive any errors.
Upgrading from 0.11.14 to 0.12.29 throws the error below; I tried different workarounds, following all the GitHub-suggested issues, but nothing worked.
AWS provider version = 2.50.0

_variables.tf_

variable "dns_route53_hosted_zone_id" {
  default = "ZASDF123JJKL123"           # used in resource
  type    = string
}
variable "client_id" {
  default     = "dev1"                  # used in resource
  type        = string
}

dual-node-vpc-module

_aws.route53.tf_

resource "aws_route53_record" "services" {
  alias = "alfahostedzone"                                         # resource is using provider alfahostedzone
  zone_id = "${var.dns_route53_hosted_zone_id}"
  name = "${lower(var.client_id)}-services.${lower(var.dns_domain)}"
  type = "A"
  ttl = "300"
  records = ["${aws_instance.linux.private_ip}"]
}

resource "aws_route53_record" "db" {
  alias = "alfahostedzone"                                       # resource is using provider alfahostedzone
  zone_id = "${var.dns_route53_hosted_zone_id}"
  name = "${lower(var.client_id)}-db.${lower(var.dns_domain)}"
  type = "CNAME"
  ttl = "300"
  records = ["${aws_db_instance.primary.address}"]
}
 ...............
 ...............

dual-node-vpc-template

_aws.vpc.tf_

provider "aws" {
  allowed_account_ids = ["01234567890"]             # dev1 account under main account        
  assume_role {
    external_id = "${dev1}"
    role_arn     = "${arn:aws:iam::01234567890:role/Myapp-Terraform-Deployment}"
    session_name = "Terraform_Deployment"
  }
  region      = eu-west-1
  version = 2.50.0
}

provider "aws" {
  alias = "alfahostedzone"                             # this is called in module and in resource
  allowed_account_ids = ["312793456789"]                 # main account
  assume_role {
    external_id = "${dev1}"
    role_arn = "${arn:aws:iam::312793456789:role/Access_To_Myapp_Dot_Net_Hosted_Zone}"
    session_name = "TerraformAccessToRemoteHostedZone"
  }
  region      = "eu-west-1"
  version = "2.50.0"
}

module "dual-node-vpc-template" {
  source = "git::ssh://[email protected]....."

  providers = {
    aws = aws
    aws.alfahostedzone = aws.alfahostedzone
  }
  az_1 = "${module.environment.az_1["${lower(module.region.region)}"]}"
  ...............
  ...............
}

Even after specifying the providers under the module, the error still persists.
tfplan output

Error: Provider configuration not present

To work with module.dual-node-vpc-template.aws_route53_record.services
its original provider configuration at
module.dual-node-vpc-template.provider.aws.alfahostedzone is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.dual-node-vpc-template.aws_route53_record.services, after which
you can remove the provider configuration again.

Error: Provider configuration not present

To work with module.dual-node-vpc-template.aws_route53_record.example-db-services
its original provider configuration at
module.dual-node-vpc-template.provider.aws.alfahostedzone is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.dual-node-vpc-template.aws_route53_record.example-db-services, after which
you can remove the provider configuration again.

  ...............
  ...............

I've removed the "confirmed" label on here because as folks pointed out, my reproduction case was incorrect. In order to move forward on this, we need a reproduction of this problem in situations where the proxy configuration block is present, like Martin described, on 0.13.x or the 0.14 alpha. If someone can make a PR to update the example I'd tried to make (https://github.com/danieldreier/terraform-issue-reproductions/tree/master/21416) that would be a great place to put it.

I've had an odd encounter with this error in a weird scenario:

  • I use 0.11.xx terraform
  • I use AWS provider.
  • ansible.tf has this -
resource "aws_security_group" "ssh" {
... some good security group here without a word about provider
... it knows default provider
}
  • I upgrade to 0.13.xx terraform
  • someone deletes/moves ansible.tf to ansible.tfback, but the security group still exists in the tfstate file.
    Now it keeps throwing the error paragraph about the missing provider configuration.

I add the security group config back to one of the tf files in that directory, and it's fine without even adding a provider block; it uses the default provider block! Shouldn't Terraform recognize that this scenario has nothing to do with providers, and is instead a config mismatch between the state and the tf files?
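For the orphaned-state scenario described above, one common way out (a sketch; the resource address is taken from the comment above, adjust to your own state) is to drop the stale object from state rather than re-adding its configuration:

```shell
# See which resources Terraform still tracks in state.
terraform state list

# Remove the orphaned resource from state. Note: this only forgets the
# object; it does NOT destroy the real security group in AWS.
terraform state rm aws_security_group.ssh
```

After the stale entry is gone, the missing provider configuration no longer matters and plan/apply proceed normally.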

@danieldreier I just ran your issue reproducer fresh from cloning it:

./run.sh 
+ terraform init
Initializing modules...
- example in modules/example

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.22.0...
- Installed hashicorp/aws v3.22.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
+ terraform validate

Error: Provider configuration not present

To work with module.example.aws_vpc.tf-21416-us-west-2 its original provider
configuration at
module.example.provider["registry.terraform.io/hashicorp/aws"].us-west-2 is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.example.aws_vpc.tf-21416-us-west-2, after which you can remove the
provider configuration again.

EDIT: the above was with 0.14.2, I also retried after upgrading to 0.14.3
EDIT 2: I retried with 0.13.5 and it works if you add a provider into the module (that doesn't work with 0.14.x)
EDIT 3: Neither 0.13 nor 0.14 works with module for_each: https://github.com/danieldreier/terraform-issue-reproductions/pull/9
