Hi there,

Terraform version:

```
Terraform v0.11.2
+ provider.aws v1.10.0
```

Affected resource(s): everything (the examples below use aws_dynamodb_table).
provider "aws" {
region = "ap-southeast-2"
endpoints {
dynamodb = "http://localhost:4569/"
}
}
resource "aws_dynamodb_table" "poc_table" {
name = "${terraform.workspace}.ProofOfConcept"
read_capacity = 5
write_capacity = 5
hash_key = "UserId"
attribute {
name = "UserId"
type = "S"
}
}
or, with bogus static credentials:
provider "aws" {
access_key = "123"
secret_key = "123"
region = "ap-southeast-2"
endpoints {
dynamodb = "http://localhost:4569/"
}
}
resource "aws_dynamodb_table" "poc_table" {
name = "${terraform.workspace}.ProofOfConcept"
read_capacity = 5
write_capacity = 5
hash_key = "UserId"
attribute {
name = "UserId"
type = "S"
}
}
Debug output:
https://gist.github.com/j-groeneveld/b0af35f4f7ab518bbb8cd413b3281e3e
or
https://gist.github.com/j-groeneveld/43aa1df229e50ef8e06e64028351f2a9
Expected behavior: Terraform ignores the missing credentials and uses the local DynamoDB endpoint http://localhost:4569.
Actual behavior: Terraform complains about missing or invalid credentials.
Steps to reproduce:

terraform plan

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
DescribeTimeToLive and ListTagsOfResource, which should be fixed with the recent localstack PR #599.

I have seen multiple examples of missing or bogus credentials being used with the aws provider. Perhaps credentials are now required in a newer Terraform release, but I haven't seen anything explicitly documenting such a change in requirements.
Using this with localstack as well, I don't seem to hit this error:

```
Terraform v0.11.3
+ provider.aws v1.10.0
```

Not to say that it doesn't exist, but I'm very cautious now after reading this issue.
This issue has nothing to do with localstack per se. The issue arises for me whether I am using localstack or not -- but for the sake of this issue we should disregard localstack altogether.
Do you mind sharing your Terraform config (config.tf) and the output from terraform plan?
provider "aws" {
region = "us-west-2"
endpoints {
dynamodb = "http://localhost:4569"
}
}
resource "aws_dynamodb_table" "mytable" {
name = "mytable"
read_capacity = 5
write_capacity = 5
hash_key = "Id"
range_key = "Timestamp"
attribute {
name = "Id"
type = "S"
}
attribute {
name = "Timestamp"
type = "S"
}
}
```
13:05 $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_dynamodb_table.mytable
      id:                        <computed>
      arn:                       <computed>
      attribute.#:               "2"
      attribute.3292831115.name: "Id"
      attribute.3292831115.type: "S"
      attribute.423918437.name:  "Timestamp"
      attribute.423918437.type:  "S"
      hash_key:                  "Id"
      name:                      "mytable"
      range_key:                 "Timestamp"
      read_capacity:             "5"
      stream_arn:                <computed>
      stream_label:              <computed>
      stream_view_type:          <computed>
      write_capacity:            "5"

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
```
Hmm, odd. After replicating your exact configuration in a new directory and running a fresh terraform init:

```
13:25:46 ▶ terraform --version
Terraform v0.11.3
+ provider.aws v1.10.0
```
I still get the following error:

```
13:25:41 ▶ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

Error: Error running plan: 1 error(s) occurred:

* provider.aws: No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider
```
I'm not sure what is going on here.
@aventurella would you mind sharing the OS you are working on? Here is mine:

```
13:35:11 ▶ echo $OSTYPE
darwin16
```

The next step is probably to find where in the source code this error is being thrown -- I will try to do this when I have a spare moment in the next week.
Appreciate the help so far :-)
I don't know if this will help, but I do have a ~/.aws directory from running aws configure with the awscli installed.
Great. Now can you confirm that the credentials under the [default] section in ~/.aws/credentials are in fact valid?

cat ~/.aws/credentials

should yield something that looks like the following:

```
...
[default]
aws_access_key_id = redacted
aws_secret_access_key = redacted
...
```

My main question is whether dummy credentials work here, or whether you in fact need valid AWS IAM credentials for this to work.
Bingo...
I just changed [default] -> [default-1] so it wouldn't find it... and got your error:

```
15:15 $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

Error: Error refreshing state: 1 error(s) occurred:

* provider.aws: No valid credential sources found for AWS Provider.
```
Additionally, if I suffix the credentials under [default] with 111 or anything else that makes them invalid, it also fails.
So it would seem that Terraform, when given a custom endpoint for an AWS service, still does a preliminary check with the credentials -- presumably validating them against the real AWS endpoints before using the custom one. That makes sense: you may have overridden the endpoint for only one service while still using real AWS for the others. I wondered why anyone would override just one, but then remembered a client of mine that had their own S3 implementation yet still used AWS for everything else.
Thanks @aventurella
For everyone else interested in debugging this, I have created a test repo with the above setup: https://github.com/j-groeneveld/terraform-providers-3608.
@j-groeneveld I was just able to get this working by including skip_credentials_validation = true:

```
variable "local_dynamo" {
  description = "Endpoint for local dynamodb"
  default     = "http://localhost:4569"
}

variable "local_s3" {
  description = "Endpoint for local s3"
  default     = "http://localhost:4572"
}

provider "aws" {
  region                      = "us-east-1"
  skip_credentials_validation = true

  endpoints {
    dynamodb = "${var.local_dynamo}"
    s3       = "${var.local_s3}"
  }
}
```
@iadknet Terraform doesn't appear to be honoring the skip_credentials_validation flag. I've opened an issue with the details: https://github.com/hashicorp/terraform/issues/18696
@Ghazgkull Ah, I realize now why it was working for me.
I thought it was because I introduced the skip_credentials_validation flag, but I also happened to switch to using the S3 backend at about the same time, which requires valid credentials. D'oh.
You're right: if I remove the S3 backend and do not inject valid credentials, then skip_credentials_validation is not honored.
Yeah, we went through the same mystery phase of "Why is it working for some people but not others???"
I have been able to work around this by also setting:

skip_requesting_account_id = true
I am not sure if dynamodb-local is supposed to work without sending credentials. I can't get it to work with the aws cli either.
```
$ aws dynamodb list-tables --no-sign-request --endpoint-url http://localhost:8000

An error occurred (MissingAuthenticationToken) when calling the ListTables operation: Request must contain either a valid (registered) AWS access key ID or X.509 certificate.
```
And I think that error is coming from dynamodb-local, not the cli.
Also, removing the credentials from ~/.aws/credentials gives me this error (obviously from the cli):
Unable to locate credentials. You can configure credentials by running "aws configure".
Further, accessing dynamodb-local with different credentials (different AWS account ids I am guessing) gives me different "namespaces" of sorts. I can only list my dynamodb-local table using the aws cli if I use the same --profile that I configured terraform to use.
This issue should probably be resolved as a duplicate of https://github.com/terraform-providers/terraform-provider-aws/issues/5584
That issue explains a functional workaround in the latest version of the terraform aws provider.
Resolved as a duplicate of #5584
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!