Terraform version: Terraform v0.8.8
Affected resources: everything
Terraform configuration files: I'll provide these if necessary.
Debug output: https://gist.github.com/FlorinAndrei/aa4cb6a677da1f8b0ffa92b10e5d06c8
Happens intermittently. The number of errors that occur varies from one run to the next, typically about half a dozen. Once in a blue moon there are no errors and the plan command actually works.
The only change I could think of was:
I have a module called static_cluster where I've used aws_instance without defining any block devices - just relying on defaults. Now I've added root_block_device but for now the settings in there are also the defaults, so no change should happen. I've tested it on another environment, and indeed no changes were predicted by terraform plan.
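As a sketch, the change amounts to something like this (the resource name and values here are illustrative, not the actual module contents; the true defaults depend on the AMI):

```hcl
resource "aws_instance" "static" {
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"

  # Newly added block; these settings mirror the usual AWS defaults,
  # so terraform plan should predict no changes.
  root_block_device {
    volume_type           = "gp2"
    volume_size           = 8
    delete_on_termination = true
  }
}
```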
But now going back to the env I'm working on, I get this fluctuating error all the time.
Not sure if this is indeed the cause. Perhaps it's unrelated.
terraform plan -destroy works just fine on the same environment.
The error disappears if I add the -parallelism=4 option to terraform plan. Looks like there's some kind of rate limiting or parallelism limits now in place at AWS.
I was able to do a successful complete run (build from scratch) with -parallelism=4 added to both terraform plan (to kill the bug) and terraform apply (no idea if necessary here, but seemed wise to use it).
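For reference, the workaround was simply lowering the default parallelism (10) on both commands:

```shell
terraform plan -parallelism=4
terraform apply -parallelism=4
```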
Potentially being rate limited on the auth itself? I'm not really sure. If you could provide [minimal] configs to reproduce this, it'd be helpful so we can run it ourselves and see.
Here is the TF env with the modules:
no_valid_credential_sources.tar.gz
The sensitive info has been redacted.
Things I should probably mention:
- The credentials live in ~/.aws/* and are the same ones used by the AWS CLI, by Boto, etc.
- The region is us-west-2 (Oregon), which is where the test infrastructure is created; all my AWS API connections are directed at us-west-2.
- My user has the AdministratorAccess policy, so as to remove any suspicion from IAM policies.

I am running into the same intermittent error with 0.9.1. It also disappears if I use the -parallelism=4 option.
👍 we've been fighting this for a while. https://github.com/hashicorp/terraform/issues/6222 is related
Actually, may be different if static credentials are being used here. We rely on the EC2 metadata endpoint to supply the credentials.
I haven't seen this issue much recently, but it was intermittent to begin with.
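A quick way to check what the EC2 metadata endpoint is handing out, run from the instance itself (the role name below is a placeholder):

```shell
# List the IAM role(s) attached to this instance
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch the temporary credentials for a role
# (replace "my-role" with the name returned above)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role
```

If the second call intermittently fails or times out under load, that would be consistent with the parallelism workaround helping.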
I see it every time I try to add an "aws_route53_record" resource. It's maddening.
If I remove the "aws_route53_record" resource, the plan runs, if I add it back in, I get the "No valid credential sources found for AWS Provider" error.
My profile has AdministratorAccess as well, so nothing hinky there.
Had this issue today when there were spelling mistakes in the AWS credentials file; fixing those made the error go away. The error message confused me, so it might need to be more specific.
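For reference, the provider expects the standard key names in ~/.aws/credentials; a typo in the profile header or either key name produces the same "No valid credential sources found" error. The values below are placeholders:

```ini
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKeyValue
```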
This issue has been migrated to terraform-providers/terraform-provider-aws#590 as part of separating the providers into their own repositories. Please post any further comments over there! Thanks.