I've been having trouble working with modules since upgrading to 0.11.x. Currently I have nested modules (i.e. a root configuration referencing a child module, which references another child module), and when I try to flatten this structure I get this error:
configuration for module.x<module.y>.provider.aws is not present; a provider configuration block is required for all operations
I've checked https://github.com/hashicorp/terraform/issues/16824 and https://github.com/hashicorp/terraform/issues/16826, as well as https://www.terraform.io/upgrade-guides/0-11.html#interactions-between-providers-and-modules and https://www.terraform.io/docs/modules/usage.html#providers-within-modules, but I'm still not sure how to get out of this situation. I tried defining an empty provider block in the modules, a block with version constraints, and no block at all. I also tried passing a provider from the top-level configuration both explicitly and implicitly. Nothing works; I'm still getting provider.aws is not present; a provider configuration block is required for all operations.
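For reference, the explicit-passing variant I tried looked roughly like this (module and source names here are placeholders, not my real configuration; note that the 0.11 providers map syntax uses quoted strings):

```hcl
provider "aws" {
  version = "~> 1.9"
  region  = "${var.region}"
}

module "x" {
  source = "./modules/x"

  # Explicitly hand the default (unaliased) aws provider
  # down to the child module.
  providers = {
    "aws" = "aws"
  }
}
```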
Terraform version: 0.11.3
terraform plan should give a summary of planned changes (or, ideally, no changes at all).
I get
configuration for module.x<module.y>.provider.aws is not present; a provider configuration block is required for all operations
Hi @endofcake! Sorry for this frustrating stalemate.
Some background information about what Terraform is doing here may help:
During each Terraform operation, Terraform begins by matching each resource and data block to exactly one provider block (or to an implied empty block). After an apply is complete, Terraform remembers (in the state) which provider configuration most recently matched each resource. This is then used in situations where the resource or data block has been removed from configuration, and thus there is no current specification of which provider to use.
The error you've seen here is saying that you've removed a resource from the configuration _and_ you've removed the provider configuration it was associated with, so now Terraform is stuck and cannot perform any operations (refresh, destroy) on that resource.
You should be able to avoid this error by taking the following steps:
You should be able to avoid this error by taking the following steps:

1. Move the provider blocks into the root module, de-duplicating or aliasing them as appropriate.
2. Add providers maps to any module blocks that now need a customized set of providers.
3. If necessary, update any provider arguments in resource or data blocks.
4. Do not remove any resource or data blocks yet.
5. Run terraform apply.

After the terraform apply, Terraform's mapping from resources to providers in the state should be updated to reflect the new provider configuration locations. If you then wish to make any changes to the names or locations of resource and data blocks then this should work as long as their corresponding provider blocks are retained.
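As a sketch of what the configuration might look like after those steps (this is an illustration, not from the original issue; the region and module name are placeholders), assuming a single default aws provider:

```hcl
# Root module: the only provider block in the whole configuration.
provider "aws" {
  version = "~> 1.9"
  region  = "us-west-2" # placeholder region
}

module "x" {
  source = "./modules/x"

  # No providers map needed here: the child module inherits the
  # default (unaliased) aws provider implicitly.
}
```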
Thanks for such a quick response, @apparentlymart .
I've successfully removed provider blocks from both modules and ran apply. So far so good.
I checked tfstate file, and it doesn't have any specific constraints on providers. My provider block looks like this
provider "aws" {
  version = "~> 1.9"
  region  = "${var.region}"
}
I'm not passing aws provider to modules explicitly, so I'm assuming it must be passed implicitly, i.e. be the same version and in the same region.
_However_, when I try to delete the second level module (by replacing the first level module with one which doesn't reference another module), I'm still getting
configuration for module.x<module.y>.provider.aws is not present; a provider configuration block is required for all operations
So I'm essentially back where I started.
Is there a way to get out of this impasse?
Thanks!
Hi @endofcake. Thanks for the extra information.
After you've run terraform apply at least once with the provider blocks moved, every resource in the state file should have a "provider" property which specifies the provider location. For example, I just ran terraform apply in a test configuration I have on my system and here is one of the resource objects in my state:
"tls_self_signed_cert.ca": {
  "type": "tls_self_signed_cert",
  "depends_on": [
    "tls_private_key.ca"
  ],
  "primary": {
    "id": "163165104287444635368714508980552190703",
    "attributes": {
      (omitted for brevity)
    },
    "meta": {},
    "tainted": false
  },
  "deposed": [],
  "provider": "provider.tls"
}
In this case the "provider" property is set to provider.tls, which means this resource is handled by the provider "tls" block in my root module. (In this particular case I don't have an explicit block, so Terraform creates an implicit one with no configuration.)
Can you look in your state file and see if your resources have "provider" set, and if so share what they are set to? (Feel free to redact specific details if you want; the key is to see what level of nesting is specified.)
There is some fallback behavior that allows Terraform 0.11 to consume a state from before 0.11, where that field was not used in the same way. So if the property doesn't appear _at all_ in your state then Terraform may be applying that fallback behavior, but the state should have been upgraded after the first terraform apply with 0.11.
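If it helps when checking, here's a rough way to dump the provider attribution for every resource. The inline JSON is just a stand-in for a real terraform.tfstate in the 0.11 ("version": 3) format; a real run would load the file from disk instead:

```python
import json

def provider_map(state):
    """Return {(module_path, resource_name): provider} from a version-3 state."""
    out = {}
    for module in state.get("modules", []):
        path = ".".join(module["path"])
        for name, res in module.get("resources", {}).items():
            # Resources last written by pre-0.11 Terraform may lack
            # a "provider" property entirely.
            out[(path, name)] = res.get("provider", "(not set)")
    return out

# Minimal stand-in for terraform.tfstate, for illustration only.
state = json.loads("""
{
  "version": 3,
  "modules": [
    {"path": ["root"],
     "resources": {
       "tls_self_signed_cert.ca": {"type": "tls_self_signed_cert",
                                   "provider": "provider.tls"}
     }}
  ]
}
""")

for key, provider in provider_map(state).items():
    print(key, provider)
```

Any entry printed as "(not set)" would be a candidate for the fallback behavior described above.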
Thanks for your help @apparentlymart ! I think I finally got past this stumbling block. I'm not sure what worked exactly; I suspect the state was still at the 0.11.0 version. After I upgraded to 0.11.3 and ran apply, all seems to work fine now.
I had to move a provider block. While doubtless not approved, this worked:

1. Substitute every mention of the old provider location with the new one in a local copy of tfstate. In my case:
   sed -i 's/"module.blueshift.provider.aws"/"provider.aws"/g' terraform.tfstate
2. Nuke the local .terraform directory (may not be needed).
3. Run terraform init; terraform plan. These worked now; they had not before.

I have run a terraform destroy for all of my infrastructure. Running it again indicates that I have no resources in my current state (0 to destroy).
However, I am still getting this error.
I even tried manually deleting the state from the backend (S3 and DynamoDB) and re-running a local terraform init. Same error. This is Terraform 0.12.1 with the latest AWS provider plugin.
Update: our fix was to move the provider into the module that is doing the certificate creation/validation. I know you are not supposed to do this and it makes it hard to delete the module later; however, this was the only way we could find to get it to work at all.
So our module now contains
// CloudFront certificates must be created in us-east-1
provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}
and that is referenced in the certificate and validation like
resource "aws_acm_certificate" "cert" {
  domain_name       = "${var.domain_name}"
  validation_method = "DNS"
  provider          = "aws.us-east-1"
  ...
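For comparison, the approach the 0.12 docs recommend instead (keeping the aliased provider in the root module and passing it down) would look roughly like this; treat it as a sketch with placeholder names, not something we verified against our setup:

```hcl
# Root module
provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

module "certs" {
  source = "./modules/certs"

  # Pass the aliased root provider into the module under the
  # name the module expects.
  providers = {
    aws.us-east-1 = aws.us-east-1
  }
}

# Inside modules/certs: an empty "proxy" provider block declaring
# that this module expects an aliased aws provider from its caller.
provider "aws" {
  alias = "us-east-1"
}
```

This keeps the module deletable later, since the real provider configuration never leaves the root module.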
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.