Terraform: Provider configuration not passed to grandchild modules

Created on 23 Jul 2015 · 9 comments · Source: hashicorp/terraform

When calling a module from another module, provider configuration does not seem to be propagated all the way down to the grandchild. This works as expected with a single level of modules, but when 2 or more layers are involved, the issue rears its head.

Terraform v0.6.2-dev (5a15c02cbbea27d3f8345b5fe0f348a08a24fdb9)

main.tf

provider "aws" {
  access_key = "zip"
  secret_key = "zap"
  region     = "us-east-1"
}

module "foo" {
  source = "./foo"
}

foo/main.tf

module "bar" {
  source = "./bar"
}

foo/bar/main.tf

resource "aws_instance" "baz" {
  ami           = "ami-4c7a3924"
  count         = 1
  instance_type = "t2.micro"
}

Result:

$ terraform plan
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * module.foo.module.bar.provider.aws: "region": required field is not set
  * module.foo.module.bar.provider.aws: "access_key": required field is not set
  * module.foo.module.bar.provider.aws: "secret_key": required field is not set

The other weird thing is that the error is not always the same. I've seen 3 (!) different outputs for the same input. The others include an interactive prompt for the credentials:

$ terraform plan
provider.aws.access_key
  The access key for API operations. You can retrieve this
  from the 'Security & Credentials' section of the AWS console.

  Enter a value:

Or an invalid token error, which I would expect for this example:

$ terraform plan
Refreshing Terraform state prior to plan...

Error refreshing state: 1 error(s) occurred:

* 1 error(s) occurred:

* InvalidClientTokenId: The security token included in the request is invalid.
    status code: 403, request id: [11cdaeed-315d-11e5-9c83-7d4a35a9d7b0]
Labels: bug, core

All 9 comments

@ryanuber, just tried to reproduce this against 5194eb4e2, with the following configuration:

main.tf

provider "aws" {
  region     = "us-east-1"
}

module "foo" {
  source = "./foo"
}

foo/main.tf

module "bar" {
  source = "./bar"
}

foo/bar/main.tf

resource "aws_vpc" "baz" {
    cidr_block = "10.0.0.0/16"
}

I'm getting mixed results, similar to yours. Destroy in particular seems to want credentials each time. Definitely seems like there's an issue somewhere here; I'll investigate further.

There's something else at play here as well. I removed my "middle" module to work around this bug, and this is happening:

With Terraform 0.6.6:

$ TF_VAR_foo=bar ~/Downloads/terraform_0.6.6_darwin_amd64/terraform plan
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * aws_route.master: Provider doesn't support resource: aws_route

The above succeeds if I remove the aws_route resource, but since I need it, I switched to GitHub master, the latest Terraform v0.6.7-dev (24ee56326d1b7eccda046524fe7129302d6556fd):

$ TF_VAR_foo=bar terraform plan
Error configuring: 6 error(s) occurred:

* provider.aws: missing dependency: var.secret_key
* provider.aws: missing dependency: var.region
* provider.aws: missing dependency: var.access_key
* aws_security_group.ssh: missing dependency: var.foo
* module.vpc: missing dependency: var.foo
* module.kb_master: missing dependency: var.foo

It looks like no top-level variables are being passed down: the output shows failures in a provider, a module, and a plain resource.

For the provider-level variables I have used a workaround to get this going for the time being.

I set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION environment variables and then run terraform, and the provider-specific vars get passed through.

Not a fix, but a good workaround to date.
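
A rough sketch of that workaround (the credential values here are just the placeholders from the example above):

$ export AWS_ACCESS_KEY_ID="zip"
$ export AWS_SECRET_ACCESS_KEY="zap"
$ export AWS_DEFAULT_REGION="us-east-1"
$ terraform plan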

For the record, resource "aws_key_pair" "noop" { count = 0 } in the middle module foo/main.tf is my working, but ugly, hack around this problem.
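
In context, the hack looks roughly like this (a sketch assembled from the repro above; not a verified fix):

foo/main.tf

# No-op resource whose only purpose is to force the aws provider to be
# configured in this intermediate module, so the grandchild module below
# can inherit it.
resource "aws_key_pair" "noop" {
  count = 0
}

module "bar" {
  source = "./bar"
}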

@arohter's solution works well when creating new resources; however, if you later remove a module from your configuration and run terraform apply, the provider is lost and those resources can't be destroyed.

In my case the AWS environment variable fix doesn't work. I'm using S3 remote state with buckets in a different AWS account than the one I'm applying against (the remote state uses the default AWS creds). When the provider is "lost" it falls back to the default (environment variables or the [default] block in ~/.aws/credentials) and tries to destroy resources in that account, giving me the following error:

 InvalidParameterException: Identifier is for account-id-B. Your accountId is account-id-A

Also, this is intermittent... the provider is lost only some of the time. I'm not really familiar with how the Terraform state tree works, but this feels like a race condition. It's as if the provider is a global reference that is set and lost as the apply routine traverses the tree.
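
For illustration, the kind of setup being described is roughly the following (a sketch only; it uses the s3 backend block syntax from later Terraform versions, and the bucket and profile names are made up):

# Remote state lives in account A and is accessed with the default credentials
# (environment variables or the [default] profile in ~/.aws/credentials).
terraform {
  backend "s3" {
    bucket = "account-a-state-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# Resources are managed in account B via a named profile. When that provider
# configuration is "lost", Terraform falls back to the default (account A)
# credentials and the API rejects operations on account B's resources.
provider "aws" {
  profile = "account-b"
  region  = "us-east-1"
}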

I'm still experiencing this issue on terraform 0.8.6. Is anyone else having the same problem?

I seem to still be having this issue on 0.11.8

For people who seem to be facing this: I was using both named profiles and environment variables.

The latter take precedence over the profiles, which caused the confusion.
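
A quick way to check for that situation (shell sketch; assumes the intended profile is configured in ~/.aws/credentials):

$ env | grep ^AWS_
# If AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY show up here, they will be used
# instead of the named profile. Unset them so the profile takes effect:
$ unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
$ terraform plan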

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
