I had an existing aws_cloudfront_distribution resource without a logging_config block.
Then, when adding a new aws_s3_bucket and a logging_config block referencing it, the bucket was created but was not yet ready when Terraform tried to configure it as the log target for the CloudFront distribution:
```
aws_s3_bucket.logs: Creating...
  acl:              "" => "private"
  arn:              "" => "<computed>"
  bucket:           "" => "foo-logs"
  force_destroy:    "" => "0"
  hosted_zone_id:   "" => "<computed>"
  region:           "" => "<computed>"
  website_domain:   "" => "<computed>"
  website_endpoint: "" => "<computed>"
aws_s3_bucket.logs: Creation complete
aws_cloudfront_distribution.app: Modifying...
  logging_config.#:                          "0" => "1"
  logging_config.1955940039.bucket:          "" => "foo-logs"
  logging_config.1955940039.include_cookies: "" => "0"
  logging_config.1955940039.prefix:          "" => "cloudfront"
Error applying plan:

1 error(s) occurred:

* aws_cloudfront_distribution.app: InvalidArgument: The parameter Logging Bucket does not refer to a valid S3 bucket.
	status code: 400, request id: <xxx>

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
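For context, the relevant configuration was roughly along these lines (a minimal sketch reconstructed from the output above; the names and the omitted distribution settings are placeholders):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "foo-logs"
  acl    = "private"
}

resource "aws_cloudfront_distribution" "app" {
  # ... pre-existing distribution settings, unchanged ...

  logging_config {
    # Passing the bare bucket name here is what produces the error above.
    bucket          = "${aws_s3_bucket.logs.bucket}"
    include_cookies = false
    prefix          = "cloudfront"
  }
}
```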
This is a familiar case with AWS and its eventual consistency, but the issue is that Terraform nevertheless stored the new (failed) configuration in the state file and sees no changes in subsequent plan or apply calls. In other words, it does not refresh the state from AWS.
Pinging @vancluever and @jrnt30, as you might still have all the context in your working memory after a recent fix on another issue. ;)
Oh, the bucket must be specified in "_name_.s3.amazonaws.com" format (see the sketch below). Should have read the docs. =)
But the issue still remains if, e.g., the configuration is changed via the web console.
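For completeness, a sketch of what the corrected block looks like on the Terraform side, using the bucket's exported bucket_domain_name attribute to get the "_name_.s3.amazonaws.com" form:

```hcl
logging_config {
  # CloudFront expects "<name>.s3.amazonaws.com", not the bare bucket name.
  bucket          = "${aws_s3_bucket.logs.bucket_domain_name}"
  include_cookies = false
  prefix          = "cloudfront"
}
```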
Wanted to run this by @vancluever to make sure I'm not off base here, but the branch I put out there addresses the behavior you described. I just wanted to validate that it keeps the spirit of what he had originally done with the check around Enabled and that there isn't an edge case I'm not aware of (I don't use CloudFront much myself).
Hey @jrnt30, just commented on the code. I think this is okay: the only time logging is enabled is when logging_config is defined, so if effectively unsetting it to force a diff works, then great!
The one caveat: I'm not too sure if using nil is the best idea here, or if you would want to use schema.NewSet(loggingConfigHash, []interface{}{}) (an empty set) instead. Maybe test it with that to see if the behaviour is the same, as it looks like that's what's used in other examples (see https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/cloudfront_distribution_configuration_structure.go#L818, for example).
Yeah, it's interesting in that you can't directly range over nil, but in this case the actual list of items in schema.NewSet is an empty array. I'll change it because it does look cleaner and a bit less confusing (and also dig into the spec around pass-by-value with nil).
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.