_This issue was originally opened by @sandyfox as hashicorp/terraform#18188. It was migrated here as a result of the provider split. The original body of the issue is below._
Running Terraform 0.11.7.

...

CloudFront:

```hcl
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.s3bucket.website_endpoint}"
    # ...
```

S3: standard S3 config with an S3 redirect request:

```hcl
  website {
    redirect_all_requests_to = "https://abcd.com"
  }
  # ...
```
Terraform should accept the S3 `website_endpoint` in `domain_name`, as it works fine when I update the CloudFront origin config manually from the AWS console. Instead, `terraform apply` fails with:

```
aws_cloudfront_distribution.s3_distribution: InvalidArgument: The parameter Origin DomainName does not refer to a valid S3 bucket.
```
1. Create an S3 bucket configured for static website hosting with "redirect requests" checked, providing the target bucket or domain and the protocol.
2. Configure CloudFront with the bucket's website endpoint as the origin.
3. Run `terraform apply`.
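The first step above can be sketched as follows (a minimal sketch; the bucket name is a placeholder, and the target domain matches the snippet in the issue body):

```hcl
# Assumed bucket resource; the bucket name is a placeholder.
resource "aws_s3_bucket" "s3bucket" {
  bucket = "example-redirect-bucket"

  # Equivalent to enabling "Redirect requests" in the console's
  # static website hosting settings, with a target domain and protocol.
  website {
    redirect_all_requests_to = "https://abcd.com"
  }
}
```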
We need this feature for setting up S3 redirects.
When you configure your CloudFront distribution, for the origin, enter the Amazon S3 static website hosting endpoint for your bucket. This value appears in the Amazon S3 console, on the Properties page under Static Website Hosting. For example:
http://bucket-name.s3-website-us-west-2.amazonaws.com
When you specify the bucket name in this format as your origin, you can use Amazon S3 redirects and Amazon S3 custom error documents.
Any update on this issue?
The same issue is holding us back at the moment. Right now I'm manually overriding my domain names after every `terraform apply`, which makes me very unhappy.
For us, defining a `custom_origin_config` block in the `origin` of the `aws_cloudfront_distribution` resource block helped.
To further clarify @sivuosa's comment about the custom origin config... this doesn't work:

```hcl
origin {
  domain_name = "${var.bucketname_s3_website_endpoint}"
  origin_id   = "${local.origin_id_bucketname}"
}
```
This does:

```hcl
origin {
  domain_name = "${var.bucketname_s3_domain_name}"
  origin_id   = "${local.origin_id_bucketname}"

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "https-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}
```
Which was, to me, not intuitive. The docs state there are `custom_origin_config` and `s3_origin_config`, i.e. if you are using an S3 resource vs. something else (like an ALB), you use the related config block - but in the `s3_origin_config` block there is nothing beyond an origin access identity to set.

Anyways, glad I have a workaround!
Just to clarify a bit further, if anyone else hits this and cannot solve it with the above workaround: `"${var.bucketname_s3_domain_name}"` should be the bucket's `website_endpoint` value and not its `bucket_domain_name`, as I had inferred from the example code. Thanks for the workaround and clarification @sivuosa and @christrotter.
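To illustrate the difference between the two attributes, a hedged sketch (the bucket name and region in the comments are made up):

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"

  website {
    index_document = "index.html"
  }
}

# aws_s3_bucket.example.bucket_domain_name
#   -> "example-bucket.s3.amazonaws.com" (REST API endpoint; works as an
#      S3 origin but does not serve website redirects or index documents)
# aws_s3_bucket.example.website_endpoint
#   -> "example-bucket.s3-website-us-east-1.amazonaws.com" (static website
#      endpoint; use this as the custom origin's domain_name)
```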
Waiting for this issue to be fixed
Hey guys. I took a look at the sources and tried everything Terraform does, as well as what we are trying to achieve, directly against the AWS API.
This is not a limitation or bug in Terraform. The error message also does not come from Terraform; it is passed through directly from the AWS API. Run Terraform with the environment variable `TF_LOG=DEBUG` set to view the API communication. The fact that the error only occurs when applying, but not when planning, also indicates that the error comes from AWS.
I read a lot about CloudFront's capabilities today, and AWS states very clearly that the CloudFront S3 origin is only available for access to the bucket itself, NOT for the static website endpoint of a bucket. To use the S3 website endpoint with CloudFront, you absolutely must define it as a custom origin.

Custom origins don't pass S3 origin access identities, because they leave the S3-CloudFront context and work like any other website origin on the web.

If you want to use the S3 website endpoint and put it behind a CloudFront distribution, simply set it as a custom origin while using plain HTTP to access the origin. Add a custom header that the S3 policy conditionals understand (`User-Agent`, for example). Set the value of this header to a secret only your CloudFront and S3 bucket resources know. Then allow public access to the bucket and add a condition that this header must be set to this specific value.

Why does it work when you change it manually in the console? Because CloudFront transforms your origin into a custom origin, which also means you lose the features of the S3 origin settings.
```hcl
variable "cloudfront-authentication-user-agent" {
  default = "V3ryS3cretString"
}

resource "aws_cloudfront_distribution" "distribution" {
  origin {
    domain_name = "${aws_s3_bucket.website-bucket.website_endpoint}"
    origin_id   = "${aws_s3_bucket.website-bucket.website_endpoint}"

    custom_origin_config {
      http_port              = "80"
      https_port             = "443"
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }

    custom_header {
      name  = "User-Agent"
      value = "${var.cloudfront-authentication-user-agent}"
    }
  }

  # [...]
}
```
```hcl
resource "aws_s3_bucket_policy" "website-bucket-policy" {
  bucket = "${aws_s3_bucket.website-bucket.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.website-bucket-name}.example.com/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "${var.cloudfront-authentication-user-agent}"
        }
      }
    }
  ]
}
POLICY
}
```
Security note: This allows everyone with the correctly crafted `User-Agent` header to still directly access files in your bucket without using your CDN.
As this is not a Terraform issue, I would rather close it, if there are no more questions.
Cheers!
Thanks for that detailed breakdown. I feel an example covering this use case should be added to the docs.
Absolutely. I'll craft minimal examples and add these to the docs in the next few weeks.
Thanks for publishing the example. I've tested it, and it works for both a static website and a redirect S3 bucket. From a security point of view I'd recommend removing

```hcl
acl = "public-read"
```

This is not necessary, as the bucket policy allows access from CloudFront, and it's good practice to keep S3 buckets private where possible. I've tested successfully using private S3 buckets by removing that line.
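A sketch of the bucket resource with that line omitted (names follow the earlier example; the `website` block is an assumption based on the static-website use case):

```hcl
resource "aws_s3_bucket" "website-bucket" {
  bucket = "${var.website-bucket-name}.example.com"

  # No `acl = "public-read"` here: the bucket policy's User-Agent
  # condition already grants the CloudFront distribution read access,
  # so the bucket itself stays private.

  website {
    index_document = "index.html"
  }
}
```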
> This is not necessary

You are absolutely right. The pull request I opened also omits the `public-read` part. I should have backported this into the example above. Thanks for making it clear, and of course thanks for testing the example! Let's hope that example makes it into the official docs. :+1:
Looks like the issue here is that Terraform assumes the S3 website is an S3 bucket, when it should be handled as a custom origin. Another workaround is to add an explicit `custom_origin_config` section in the origin to avoid Terraform sending it to AWS as an S3 origin.
By the way, since `index_document` handling from the S3 website configuration is only available on the S3 website endpoints, there are use cases where it makes sense to use the S3 website endpoint behind a CloudFront distribution.
I also had to configure `default_root_object = "index.html"` in order to make it all work without typing `index.html` in the browser (otherwise I was getting an `AccessDenied` error when trying to access it by the bare domain name).
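For reference, a sketch of where that setting lives (the resource name follows the earlier example; the rest of the distribution is elided):

```hcl
resource "aws_cloudfront_distribution" "distribution" {
  # Serve index.html when the bare domain (distribution root) is
  # requested, instead of returning AccessDenied from the origin.
  default_root_object = "index.html"

  # [...]
}
```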