terraform -v
Terraform v0.12.23
+ provider.aws v2.51.0
+ provider.kubernetes v1.11.1
+ provider.template v2.1.2
to
terraform -v
Terraform v0.12.23
+ provider.aws v2.52.0
+ provider.kubernetes v1.11.1
+ provider.template v2.1.2
Relevant bit: provider.aws v2.51.0 → v2.52.0.
data "aws_iam_policy_document" "XXX" {
  statement {
    actions   = ...
    resources = [
      ...
    ]
  }
}

resource "aws_iam_policy" "XXX" {
  name   = "XXX"
  path   = "/"
  policy = data.aws_iam_policy_document.XXX.json
}

resource "aws_s3_bucket" "XXX" {
  bucket = ...
  acl    = "private"
  region = "us-east-1"

  tags = {
    Name = ...
  }
}
No changes. Infrastructure is up-to-date.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
 <= read (data resources)

Terraform will perform the following actions:

  # module.XXX.data.aws_iam_policy_document.XXX will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "deployment" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions   = [
              + ...
            ]
          + resources = [
              + ...
            ]
        }
    }

  # module.services.aws_iam_policy.deployment will be updated in-place
  ~ resource "aws_iam_policy" "XXX" {
        ...
      ~ policy = jsonencode(
            {
                ...
            }
        ) -> (known after apply)
    }

  # module.XXX.aws_s3_bucket.XXX will be updated in-place
  ~ resource "aws_s3_bucket" "XXX" {
        ...
      - grant {
          ...
        }
        ...
    }

Plan: 0 to add, 7 to change, 0 to destroy.

------------------------------------------------------------------------
I'm seeing this too. Even after adding the new configuration, it still shows a change when there shouldn't be one. For example:
      - grant {
          - permissions = [
              - "READ_ACP",
              - "WRITE",
            ] -> null
          - type        = "Group" -> null
          - uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery" -> null
        }
      - grant {
          - id          = "REDACTED1" -> null
          - permissions = [
              - "FULL_CONTROL",
            ] -> null
          - type        = "CanonicalUser" -> null
        }
      + grant {
          + id          = "REDACTED1"
          + permissions = [
              + "FULL_CONTROL",
            ]
          + type        = "CanonicalUser"
        }
      + grant {
          + permissions = [
              + "READ_ACP",
              + "WRITE",
            ]
          + type        = "Group"
          + uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
        }
Actually, it works once you have all the grants declared in Terraform. I hadn't added the last one yet.
Maybe due to the merge of https://github.com/terraform-providers/terraform-provider-aws/pull/3728 - S3 bucket ACL grants are now managed by Terraform.
@ewbankkit I think I've seen in the past where Terraform _can_ manage something but hasn't actually been managing it, and the provider will silently ignore the diff, but maybe that's a false memory.
Still, this kind of change feels odd for a minor version bump, since I would expect those to be non-breaking.
Does this need a state migration added?
It's the fact that Terraform wants to remove a grant that is created by default whenever a bucket is created that I find confusing. IMO, if the grant block isn't specified, Terraform should leave the default grant alone.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource.
https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
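As a workaround, the default owner grant can be declared explicitly so the plan no longer tries to remove it. A minimal sketch, assuming the bucket was created by the account running Terraform (the bucket name is a placeholder):

```hcl
# Look up the canonical user ID of the account running Terraform,
# i.e. the bucket owner that S3 grants FULL_CONTROL to by default.
data "aws_canonical_user_id" "current" {}

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"

  # Mirror the default ACL grant that S3 creates automatically,
  # so Terraform stops planning to remove it.
  grant {
    id          = data.aws_canonical_user_id.current.id
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}
```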
Generated plan
      - grant {
          - id          = "<creatorid>" -> null
          - permissions = [
              - "FULL_CONTROL",
            ] -> null
          - type        = "CanonicalUser" -> null
        }
I've also seen this today, and while adding the grant to our code to mitigate the issue I noticed that the documentation for the grant {} block added in #3728 is incorrect:
https://github.com/terraform-providers/terraform-provider-aws/pull/3728/files#diff-7f5ed2626ccd023dd9d0f679c2526b6fR323
https://github.com/terraform-providers/terraform-provider-aws/pull/3728/files#diff-7f5ed2626ccd023dd9d0f679c2526b6fR328
These lines in the provided example code use permission instead of permissions, and FULL_ACCESS where the actual correct value should be FULL_CONTROL (the former fails to plan as an invalid value).
https://github.com/terraform-providers/terraform-provider-aws/pull/3728/files#diff-7f5ed2626ccd023dd9d0f679c2526b6fR488
This line also lists FULL_ACCESS as a valid option instead of FULL_CONTROL.
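For reference, a corrected version of that documentation example would look like the following. This is a sketch (the resource and bucket names are placeholders); the attribute is permissions and the valid permission value is FULL_CONTROL:

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "mybucket"

  # "permissions" (plural), not "permission";
  # "FULL_CONTROL", not "FULL_ACCESS".
  grant {
    id          = data.aws_canonical_user_id.current_user.id
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }

  grant {
    type        = "Group"
    permissions = ["READ_ACP", "WRITE"]
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }
}
```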
I reproduced this on version 2.52 alone. It is not a migration bug; it is a bug in the grant state calculation.
To reproduce:
resource "aws_s3_bucket" "XXX" {
  bucket = ...
}

resource "aws_s3_bucket" "XXX" {
  bucket = ...

  grant {
    id          = "${data.aws_canonical_user_id.current.id}"
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }

  grant {
    .....<your changes>
  }
}
If your configuration matches the remote ACL 100%, including ordering, there will be no diff.
If not, you'll get a full recreation of the grants; after recreation it will be OK.
The grant-creation logic looks correct, so this appears to be a sorting issue inside the Terraform state. The ordering issue is strange because the grant block is stored as a set of hashes, not a list, but that still looks like the most plausible explanation.
Hi,
I'm also seeing an issue with how grant is calculated for the plan. Tested on 2.63.
Create a bucket with grant:
resource "aws_s3_bucket" "my-example-terraform-grant" {
  bucket = "my-example-terraform-grant"

  grant {
    id          = data.aws_canonical_user_id.current_user.id
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}
Run terraform apply. A subsequent terraform plan then yields an in-place update to add the grant.
Output of terraform plan:
  # aws_s3_bucket.my-example-terraform-grant will be updated in-place
  ~ resource "aws_s3_bucket" "my-example-terraform-grant" {
        acl                         = "private"
        arn                         = "arn:aws:s3:::my-example-terraform-grant"
        bucket                      = "my-example-terraform-grant"
        bucket_domain_name          = "my-example-terraform-grant.s3.amazonaws.com"
        bucket_regional_domain_name = "my-example-terraform-grant.s3.eu-west-1.amazonaws.com"
        force_destroy               = false
        hosted_zone_id              = "XXXXXXXXXXXXXX"
        id                          = "my-example-terraform-grant"
        region                      = "eu-west-1"
        request_payer               = "BucketOwner"
        tags                        = {}

      + grant {
          + id          = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
          + permissions = [
              + "FULL_CONTROL",
            ]
          + type        = "CanonicalUser"
        }

        versioning {
            enabled    = false
            mfa_delete = false
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
At this point I could run terraform apply forever; it will always see a change.
This is the bucket in the backend state:
{
  "mode": "managed",
  "type": "aws_s3_bucket",
  "name": "my-example-terraform-grant",
  "provider": "provider.aws",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "acceleration_status": "",
        "acl": "private",
        "arn": "arn:aws:s3:::my-example-terraform-grant",
        "bucket": "my-example-terraform-grant",
        "bucket_domain_name": "my-example-terraform-grant.s3.amazonaws.com",
        "bucket_prefix": null,
        "bucket_regional_domain_name": "my-example-terraform-grant.s3.eu-west-1.amazonaws.com",
        "cors_rule": [],
        "force_destroy": false,
        "grant": [
          {
            "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            "permissions": [
              "FULL_CONTROL"
            ],
            "type": "CanonicalUser",
            "uri": ""
          }
        ],
        "hosted_zone_id": "XXXXXXXXXXXXXX",
        "id": "my-example-terraform-grant",
        "lifecycle_rule": [],
        "logging": [],
        "object_lock_configuration": [],
        "policy": null,
        "region": "eu-west-1",
        "replication_configuration": [],
        "request_payer": "BucketOwner",
        "server_side_encryption_configuration": [],
        "tags": {},
        "versioning": [
          {
            "enabled": false,
            "mfa_delete": false
          }
        ],
        "website": [],
        "website_domain": null,
        "website_endpoint": null
      },
      "private": "bnVsbA=="
    }
  ]
}
Still having the same issue with provider.aws ~> 2.69 and Terraform 0.12.25.
[terragrunt] 2020/07/07 15:27:01 Running command: terraform providers
.
├── provider.aws ~> 2.69
├── module.bucket_access_policies
│   └── provider.aws (inherited)
├── module.bucket_policies
│   └── provider.aws (inherited)
├── module.s3_bucket
│   └── provider.aws (inherited)
└── module.system_users
    └── provider.aws (inherited)
I've tried to work around this issue by adding an explicit grant declaration like @mrliptontea did above in https://github.com/terraform-providers/terraform-provider-aws/issues/12332#issuecomment-634754615, but even if I pin my terraform-aws-provider version to 2.53 (the lowest I can go in my project) I still get prompted to create a new policy every time I run terraform plan.
provider "aws" {
  region  = var.aws_region
  version = "<= 2.53"
}
So it sounds like perhaps an upstream change is causing the issue.
# versions for reference
$ terraform -v
Terraform v0.12.28
+ provider.archive v1.3.0
+ provider.aws v2.53.0
+ provider.template v2.1.2