Terraform 0.12.26
AWS Provider v3.5.0
```
terraform {
  required_version = "0.12.26"
}

provider "aws" {
  version                 = "3.5.0"
  profile                 = "canva-network-cn"
  region                  = "cn-north-1"
  skip_metadata_api_check = true
}

resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket-fdsajfkdslfjlk"
  acl    = "private"
}
```
https://gist.github.com/michaelfoley1/06d649284ac90fe467cce2defe839ed9
Terraform should create the bucket and exit gracefully.
Terraform creates the bucket and errors out with the error message:
Error: error getting S3 Bucket location: Unauthorized: Unauthorized
`terraform apply`
This only affects AWS accounts in the Chinese partition that do not have ICP licenses associated with them.
If a Chinese account does have an ICP license, unauthenticated HEAD requests receive a 403 response, which is handled gracefully.
If a Chinese account does not have an ICP license, unauthenticated HEAD requests receive a 401 response, which is not handled gracefully and errors out.
I'm assuming this is new behaviour introduced recently? I'm getting this when updating some already-existing buckets as well.
The AWS change was made ~September 30.
My temporary fix was to remove this line of code so HEAD requests are authenticated while I figure out the "correct" fix:
https://github.com/aws/aws-sdk-go/blob/master/service/s3/s3manager/bucket_region.go#L110
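For context, the logic around that line looks roughly like this (a paraphrased sketch, not the verbatim SDK source):

```
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bucketRegionSketch paraphrases what the SDK's GetBucketRegionWithClient does
// around the linked line: it issues a HEAD Bucket request with anonymous
// credentials, so the request goes out unauthenticated. In the China partition
// that unauthenticated HEAD gets a 403 with an ICP license (handled) but a 401
// without one (not handled). Deleting the credentials assignment below is the
// temporary fix: the HEAD request then uses the session's real credentials.
func bucketRegionSketch(sess *session.Session, bucket string) (string, error) {
	svc := s3.New(sess)
	req, _ := svc.HeadBucketRequest(&s3.HeadBucketInput{
		Bucket: aws.String(bucket),
	})
	req.Config.Credentials = credentials.AnonymousCredentials // the line removed as a workaround
	if err := req.Send(); err != nil {
		return "", err
	}
	// S3 reports the bucket's region in the X-Amz-Bucket-Region response header.
	return req.HTTPResponse.Header.Get("X-Amz-Bucket-Region"), nil
}
```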
This also happens in the `terraform plan` phase. As soon as Terraform tries to check the actual state of the S3 logging bucket, it raises the same error as above.
This is a code snippet from our Terraform module:
```
resource "aws_s3_bucket" "apr_s3_bucket" {
  bucket   = var.identifier
  acl      = "private"
  provider = aws.aws_s3_provider

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  dynamic "logging" {
    for_each = var.aws_s3_logging_enabled ? [aws_s3_bucket.app_log_bucket.id] : []
    content {
      target_bucket = aws_s3_bucket.app_log_bucket.id
      target_prefix = var.identifier
    }
  }
  # ...
}

resource "aws_s3_bucket" "app_log_bucket" {
  bucket   = "access-log-${var.identifier}"
  acl      = "log-delivery-write"
  provider = aws.aws_s3_provider
}
```
I'm using this change to work around the issue: https://github.com/autonomic-ai/terraform-provider-aws-1/commit/64e2a4f3de2cda3503650e51959bc864d3789f70
If that seems like an appropriate way to fix this, I can clean it up and send a PR.
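For anyone skimming: as I read the linked commit, the approach is to pass the provider's own credentials into the SDK's bucket-region lookup instead of letting it fall back to an anonymous request. A rough sketch, where the function name and wiring are illustrative rather than the exact commit:

```
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// getBucketRegion sketches the workaround: s3manager.GetBucketRegionWithClient
// normally swaps in anonymous credentials for its HEAD Bucket call, which
// yields an unhandled 401 in the China partition without an ICP license.
// Supplying a request.Option that restores the client's own credentials makes
// the HEAD request authenticated, so the region lookup succeeds.
func getBucketRegion(conn *s3.S3, bucket string) (string, error) {
	return s3manager.GetBucketRegionWithClient(aws.BackgroundContext(), conn, bucket,
		func(r *request.Request) {
			r.Config.Credentials = conn.Config.Credentials
		})
}
```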
Hi @ebabani!
How can we test your fix, please?
Is it possible to override the AWS provider in the Terraform declaration to point to your GitHub source...?
You have to build the provider and override the AWS provider terraform uses locally. For more details see https://www.terraform.io/docs/extend/how-terraform-works.html#discovery
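Concretely, for Terraform 0.12 that means building the provider binary and placing it in the local plugin directory, which takes precedence over the registry download. A sketch of the steps (repo path, version string, and OS/arch are illustrative):

```
# Build the patched provider (repo and version naming are illustrative).
git clone https://github.com/autonomic-ai/terraform-provider-aws-1.git
cd terraform-provider-aws-1
go build -o terraform-provider-aws_v3.15.0_x5

# Terraform 0.12 discovers third-party plugins in
# ~/.terraform.d/plugins/<OS>_<ARCH>/; a binary placed there overrides
# the provider that would otherwise be fetched from the registry.
mkdir -p ~/.terraform.d/plugins/linux_amd64
mv terraform-provider-aws_v3.15.0_x5 ~/.terraform.d/plugins/linux_amd64/

# Re-run init so Terraform picks up the locally installed plugin.
terraform init
```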
@ebabani Thanks, I did that and your fix worked well!
Hello! Do you have an idea of a potential date for merging this fix, please?
Might be related to https://github.com/terraform-providers/terraform-provider-aws/issues/15659
I built the code from this PR, deployed it locally overriding the official AWS provider, and it worked fine.
The fix for this has been merged and will release with version 3.16.0 of the Terraform AWS Provider, later this week. Thank you to @ebabani for the implementation. 👍
This has been released in version 3.16.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!