$ terraform -v
Terraform v0.11.11
+ provider.aws v1.59.0
provider "aws" {
version = "= 1.59.0"
region = "us-east-1"
}
resource "aws_s3_bucket" "b" {
bucket = "minamijoyo-public-access-block-test"
}
resource "aws_s3_bucket_policy" "b" {
bucket = "${aws_s3_bucket.b.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::minamijoyo-public-access-block-test/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "127.0.0.1/32"}
}
}
]
}
POLICY
}
resource "aws_s3_bucket_public_access_block" "example" {
bucket = "${aws_s3_bucket.b.id}"
block_public_acls = true
block_public_policy = true
}
2019-02-21T11:49:50.073+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: -----------------------------------------------------
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: 2019/02/21 11:49:50 [DEBUG] [aws-sdk-go] DEBUG: Response s3/PutBucketPolicy Details:
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: ---[ RESPONSE ]--------------------------------------
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: HTTP/1.1 409 Conflict
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: Connection: close
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: Transfer-Encoding: chunked
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: Content-Type: application/xml
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: Date: Thu, 21 Feb 2019 02:49:49 GMT
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: Server: AmazonS3
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: X-Amz-Id-2: I2Fd71SFEnfx9m7SOjcCaF6G+ZdyDYMMk/3qzSk7ZhXZ9ERhAyVGzlKhtFYd3TRxwg5yHVVm+i0=
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: X-Amz-Request-Id: B663B2CF1942B6E2
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4:-
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4:-
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: -----------------------------------------------------
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: 2019/02/21 11:49:50 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: <Error><Code>OperationAborted</Code><Message>A conflicting conditional operation is currently in progress against this resource. Please try again.</Message><RequestId>B663B2CF1942B6E2</RequestId><HostId>I2Fd71SFEnfx9m7SOjcCaF6G+ZdyDYMMk/3qzSk7ZhXZ9ERhAyVGzlKhtFYd3TRxwg5yHVVm+i0=</HostId></Error>
2019-02-21T11:49:50.920+0900 [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4: 2019/02/21 11:49:50 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/PutBucketPolicy failed, not retrying, error OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
Panic Output: none
Expected Behavior: no error
Actual Behavior: got an error:
$ terraform apply
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket_policy.b: 1 error(s) occurred:
* aws_s3_bucket_policy.b: Error putting S3 policy: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
status code: 409, request id: B663B2CF1942B6E2, host id: I2Fd71SFEnfx9m7SOjcCaF6G+ZdyDYMMk/3qzSk7ZhXZ9ERhAyVGzlKhtFYd3TRxwg5yHVVm+i0=
Steps to Reproduce: run terraform apply.
Success and failure depend on timing. I tried it a couple of times, and in my environment the error occurs more often than not. Although the resource types are different, calling the S3 API in parallel against the same bucket appears to cause this error.
I am also having issues with this bug:
resource "aws_s3_bucket" "this" {
bucket_prefix = "xxxx"
acl = "private"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_s3_bucket_public_access_block" "this" {
bucket = "${aws_s3_bucket.this.bucket}"
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
data "aws_iam_role" "policy_identifiers" {
name = "xxxx"
}
data "aws_iam_policy_document" "s3_bucket_policy_policy" {
version = "2012-10-17"
statement {
effect = "Allow"
actions = ["s3:*"]
resources = [
"${aws_s3_bucket.this.arn}/*",
"${aws_s3_bucket.this.arn}",
]
principals {
type = "AWS"
identifiers = ["${data.aws_iam_role.policy_identifiers.arn}"]
}
}
}
resource "aws_s3_bucket_policy" "this" {
bucket = "${aws_s3_bucket.this.bucket}"
policy = "${data.aws_iam_policy_document.s3_bucket_policy_policy.json}"
}
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket_policy.this: 1 error(s) occurred:
* aws_s3_bucket_policy.this: Error putting S3 policy: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
status code: 409, request id: xxxxxxxxxxxxxxxx, host id: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Using depends_on is a good way to force the public access block and policy to be applied one-by-one instead of concurrently:

resource "aws_s3_bucket_policy" "this" {
  depends_on = ["aws_s3_bucket_public_access_block.this"]
  # ... bucket and policy arguments as above ...
}
This worked for me.
I think that depends_on would work as a workaround, but the policy does not actually depend on the public access block, so we should implement appropriate error handling instead. Since similar problems can occur with many S3-related resources, I'm not sure what the best way to handle it is.
Either that, or sequence the planned API calls to S3 so that they happen one after another.
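A blunter workaround (a sketch, not verified against this particular race) is to drop Terraform's global concurrency to one, which serializes all resource operations, not just the S3 calls, at the cost of a slower apply:

$ terraform apply -parallelism=1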
Some sort of ordering seems needed. A related issue: if you have AWS GuardDuty enabled and you terraform destroy an S3 bucket, you will get a security alert that the "Block public access" policies have been removed, because they are removed before the bucket itself rather than the bucket simply being destroyed first.
~A problem with the workaround is that it leads to the S3 bucket policy wanting to 'change' every time even though it hasn't changed. As soon as I remove the depends_on, it reports that there are no changes to apply.~
I was doing it wrong: I had added the depends_on to the policy document rather than the policy resource.
Still experiencing this as of v0.12.28.
IMO documenting the use of depends_on to serialize creation would be sufficient.
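For reference, the same workaround in Terraform 0.12+ syntax, using a first-class reference instead of a quoted string (a sketch reusing the resource names from the configuration above):

resource "aws_s3_bucket_policy" "this" {
  # Forces this resource to be created after the public access block,
  # so the two Put calls against the bucket cannot run concurrently.
  depends_on = [aws_s3_bucket_public_access_block.this]

  bucket = aws_s3_bucket.this.bucket
  policy = data.aws_iam_policy_document.s3_bucket_policy_policy.json
}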
It seems this is also an issue when managing aws_s3_bucket_notification resources.
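The same depends_on trick should apply there as well; for example (a sketch, with a made-up SNS topic):

resource "aws_s3_bucket_notification" "this" {
  # Serialized behind the public access block so the two S3 API calls
  # cannot race against each other.
  depends_on = [aws_s3_bucket_public_access_block.this]

  bucket = aws_s3_bucket.this.id

  topic {
    topic_arn = aws_sns_topic.example.arn
    events    = ["s3:ObjectCreated:*"]
  }
}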
There seems to be an eventual consistency component along with the serialization aspect of these requests. In one case, the request to PutBucketVersioning went out immediately after the response from PutBucketTagging came back and the bucket hadn't resolved its state yet.
aws_v3.4.0_x5: 2020/10/20 20:07:16 [DEBUG] [aws-sdk-go] DEBUG: Response s3/PutBucketTagging Details:
aws_v3.4.0_x5: 2020/10/20 20:07:16 [DEBUG] [aws-sdk-go] DEBUG: Request s3/PutBucketVersioning Details:
aws_v3.4.0_x5: 2020/10/20 20:07:16 [DEBUG] [aws-sdk-go] DEBUG: Response s3/PutBucketVersioning Details:
2020-10-20T20:07:16.622Z [DEBUG] plugin.terraform-provider-aws_v3.4.0_x5: <Error><Code>OperationAborted</Code><Message>A conflicting conditional operation is currently in progress against this resource. Please try again.</Message>
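If eventual consistency really is a factor, one blunt mitigation (a sketch, assuming the hashicorp/time provider; the 30-second figure is a guess) is to put an explicit delay between bucket creation and its dependent resources:

resource "time_sleep" "wait_for_bucket" {
  depends_on = [aws_s3_bucket.this]

  # Give S3 time to settle the bucket's state before further Put calls.
  create_duration = "30s"
}

resource "aws_s3_bucket_policy" "this" {
  depends_on = [time_sleep.wait_for_bucket]

  bucket = aws_s3_bucket.this.bucket
  policy = data.aws_iam_policy_document.s3_bucket_policy_policy.json
}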
We've been intermittently experiencing a variety of OperationAborted: A conflicting conditional operation... errors when creating new S3 buckets. In our case, we typically aren't creating aws_s3_bucket_policy resources, and we have not seen from triaging Terraform debug logs that parallel calls to the S3 service are being made for the same bucket.
In each of the cases we've triaged, the S3 bucket can eventually be created and set with the desired configuration if enough retries are attempted. Generally, a single retry within a minute of the first failed try has been sufficient. In some cases, though, we've had to wait for up to 45 minutes before a retry succeeds.
From descriptions I've seen of this error, common conditions that trigger it are recreating an S3 bucket which has just been deleted, or hitting a soft limit on the number of S3 buckets in an account. I don't believe either has been true in our case, though, since the number of buckets has been well under our account limit around the time of some failures, and the buckets we're creating are most often unique.
In the implementation of the aws_s3_bucket resource in the Terraform AWS provider code, I'm not seeing anything that would cause parallel S3 API calls to be made for the same bucket. We have seen, at least occasionally, that when Terraform makes an apparently successful Put call to an S3 API from the resourceAwsS3BucketUpdate function, the corresponding Get call made later from the resourceAwsS3BucketRead function might fail. For example, we saw a PutBucketLifecycleConfiguration call return an HTTP 200 OK response, but the subsequent GetBucketLifecycleConfiguration call for the same bucket returned an HTTP 404 Not Found response. Perhaps this is due to eventual consistency issues with the AWS S3 service.
The calls made from within resourceAwsS3BucketUpdate generally don't follow a Put with a Get to confirm that the S3 service has committed the desired configuration. Where multiple Put calls can be strung together (e.g., when encryption, lifecycle, and/or other features are all enabled together for the same bucket), I wonder if adding a Get after each of the Puts would help serialize the configuration changes in AWS (and, therefore, help avoid "conflicting conditional operations"). It's hard to tell whether this would help, though, since it's not really clear from the AWS error messaging what the "conflicting conditional operations" actually are.
For the S3 server-side encryption configuration case, we had put up a PR which would add retries for 409 errors, although that PR has been up for over 10 months now without any attention. Perhaps it would make sense to add similar retries for each of the S3 API calls that the AWS provider makes. Unfortunately, though, the retry timeout might need to be very high (45 minutes or longer?) in order to reliably overcome the errors. It would be nice to figure out the sequences of events in interactions with the AWS S3 API that produce these errors, so that we could avoid them (either in Terraform code or, at least where possible, within the Terraform AWS provider).