Terraform v0.12.7
* provider.aws: version = "~> 2.27"
resource "aws_cloudfront_distribution" "mysite_project_cloudfront" {
  origin_group {
    origin_id = "mysite_project"

    failover_criteria {
      status_codes = [403, 500, 502, 503, 504]
    }

    member {
      origin_id = "mysite_project_alb"
    }

    member {
      origin_id = "mysite_project_failover"
    }
  }

  origin {
    domain_name = data.terraform_remote_state.common.outputs.alb_mysite_prod
    origin_id   = "mysite_project_alb"

    custom_origin_config {
      http_port                = 80
      https_port               = 443
      origin_protocol_policy   = "https-only"
      origin_ssl_protocols     = ["TLSv1.1", "TLSv1.2"]
      origin_keepalive_timeout = 30
      origin_read_timeout      = 5
    }
  }

  origin {
    domain_name = aws_s3_bucket.mysite_project_staging.bucket_domain_name
    origin_id   = "mysite_project_failover"
    origin_path = "/static_html"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.mysite_project_origin_access_identity.cloudfront_access_identity_path
    }
  }

  aliases = ["www.mysite.com", "beta.mysite.com"]

  restrictions {
    geo_restriction {
      locations        = []
      restriction_type = "none"
    }
  }

  price_class     = "PriceClass_100"
  comment         = "mysite Project Cloud Front Distribution"
  http_version    = "http2"
  is_ipv6_enabled = true
  enabled         = true

  logging_config {
    include_cookies = false
    bucket          = aws_s3_bucket.mysite_project_cloudfront_logs.bucket_domain_name
    prefix          = "mysite-project-cloudfront"
  }

  viewer_certificate {
    acm_certificate_arn            = "arn:aws:acm:us-east-1:XXXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXX"
    cloudfront_default_certificate = false
    ssl_support_method             = "sni-only"
    minimum_protocol_version       = "TLSv1.1_2016"
  }

  ordered_cache_behavior {
    path_pattern     = "*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "mysite_project"
    smooth_streaming = false
    trusted_signers  = []

    forwarded_values {
      query_string            = true
      query_string_cache_keys = []
      headers                 = ["Host", "Origin", "Authorization", "Access-Control-Request-Headers", "Access-Control-Request-Method", "Referer"]

      cookies {
        forward           = "none"
        whitelisted_names = null
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "mysite_project"
    smooth_streaming = false
    trusted_signers  = []

    forwarded_values {
      query_string            = true
      query_string_cache_keys = []
      headers                 = ["Host", "Origin", "Authorization", "Access-Control-Request-Headers", "Access-Control-Request-Method", "Referer"]

      cookies {
        forward           = "none"
        whitelisted_names = []
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  tags = {
    Environment = "staging"
  }
}
Once the aws_cloudfront_distribution created by terraform apply reaches Deployed status, terraform apply should return with a message stating that the apply has completed.
Instead, terraform apply of the aws_cloudfront_distribution resource keeps running even after the actual resource has been created, which can be confirmed on the AWS console or with the AWS CLI. An example is below:
aws_cloudfront_distribution.ctacorp_project_cloudfront: Still creating... [54m21s elapsed]
^ The output of terraform apply kept stating that the resource was being created. However, the resource had already been created successfully within the first 20 minutes and was in Deployed status.
A bug seems to be preventing terraform apply of the aws_cloudfront_distribution resource from returning success even when the resource is created successfully.
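For reference, the status check with the AWS CLI might look like the following. The distribution ID here is a placeholder; substitute the ID shown in the CloudFront console or in `aws cloudfront list-distributions` output.

```shell
# Query the deployment status of a CloudFront distribution directly,
# independently of what terraform apply is reporting.
# "Deployed" means propagation has finished; "InProgress" means it has not.
aws cloudfront get-distribution \
  --id ABCD1EFGH2IJK3 \
  --query 'Distribution.Status' \
  --output text
```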
As a workaround, to get the resource into the remote state so that further modifications to the CloudFront distribution are possible, I used terraform import:
terraform import aws_cloudfront_distribution.mysite_project_cloudfront ABCD1EFGH2IJK3
While I was able to get the resource imported and to work with it going forward, the underlying problem, where terraform apply of aws_cloudfront_distribution keeps running indefinitely even after the resource has deployed successfully, still needs to be remediated for any future CloudFront distribution creations.
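If avoiding the long wait is acceptable, the AWS provider exposes a `wait_for_deployment` argument on aws_cloudfront_distribution; setting it to false makes apply return once CloudFront accepts the change, without waiting for the distribution to reach Deployed. A minimal sketch, assuming the provider version in use supports this argument:

```hcl
resource "aws_cloudfront_distribution" "mysite_project_cloudfront" {
  # ... existing configuration from above ...

  # Do not block the apply until the distribution finishes propagating;
  # it continues to deploy in the background and its status can be
  # checked separately via the console or the AWS CLI.
  wait_for_deployment = false
}
```

This does not fix the hang itself, but it sidesteps the provider's wait loop entirely.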
I'm facing the same issue. Is there any kind of fix for it besides the workaround posted by @darkwizard242?