Terraform-provider-aws: aws_s3_bucket_object etag doesn't trigger new resource

Created on 22 Nov 2017  ·  15 Comments  ·  Source: hashicorp/terraform-provider-aws

Hi there,

aws_s3_bucket_object etag doesn't trigger new resource

Terraform Version

Terraform v0.11.0
+ provider.archive v1.0.0
+ provider.aws v1.2.0
+ provider.null v1.0.0
+ provider.template v1.0.0

Affected Resource(s)

  • aws_s3_bucket_object

Terraform Configuration Files

resource "aws_s3_bucket_object" "ansible-roles" {
  bucket = "${aws_s3_bucket.config.id}"
  key    = "config/ansible/roles.zip"
  source = "config/ansible/roles.zip"
  etag   = "${data.archive_file.zip.output_md5}"
}

Expected Behavior

etag should force new resource so updated source file is uploaded to s3

Actual Behavior

just changes the value of the etag and doesn't update the s3 object

aws_s3_bucket_object.ansible-groups: Refreshing state... (ID: config/ansible/groups.yaml)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_s3_bucket_object.ansible-roles
      etag: "adc8d9a264cb5105cf63e611b9c79f9c" => "5bc7c6f6d565dcb0a785cf998fe2b15d"


Plan: 0 to add, 1 to change, 0 to destroy.

Steps to Reproduce

  1. terraform apply
  2. make changes to file
  3. terraform apply
Labels: bug, service/s3

All 15 comments

Hmm, this seems quite similar to https://github.com/hashicorp/terraform/issues/3068. Hey @radeksimko (or others) is there a good reason why ForceNew isn't set on this attribute? Especially considering the documentation explicitly states Used to trigger updates.

This used to work correctly, but recently etag doesn't trigger an update, and therefore only the etag changes in state.

Just ran into this as well.

$ terraform version
Terraform v0.11.1
+ provider.aws v1.5.0
+ provider.random v1.1.0
+ provider.template v1.0.0

Just found this issue and was sure I was having this problem. It was a user error.

Terraform v0.11.1
...
- Downloading plugin for provider "aws" (1.6.0)...
- Downloading plugin for provider "archive" (1.0.0)...

@PhilStevenson -- any chance you can update the config you have to be:

resource "aws_s3_bucket_object" "ansible-roles" {
  bucket = "${aws_s3_bucket.config.id}"
  key    = "config/ansible/roles.zip"
  source = "${data.archive_file.zip.output_path}"
  etag   = "${data.archive_file.zip.output_md5}"
}

@moofish32 I tried the changes you proposed and am still getting the same error.

:tumbleweed:

I still have this issue as well. Trying to use this to upload lambda packages.

Does anyone have a workaround for this? I can't find any way to force-taint a resource via a lifecycle rule...
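(Editor's note: one workaround commonly used at the time, not taken from this thread, was to embed the content hash in the object key, so any change to the file produces a new key and forces a new object. A sketch, reusing the resource names from the original config:)

```hcl
# Hypothetical workaround (names assumed from the original post): a changed
# archive produces a new output_md5, hence a new key, hence a new S3 object.
resource "aws_s3_bucket_object" "ansible-roles" {
  bucket = "${aws_s3_bucket.config.id}"
  key    = "config/ansible/roles-${data.archive_file.zip.output_md5}.zip"
  source = "${data.archive_file.zip.output_path}"
}
```

The trade-off is that old objects are left behind under their previous keys, so a lifecycle rule or manual cleanup may be needed.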

@lorengordon we noticed this behaviour "seems fixed" in 0.11.13?

@farmerbean Hmm, but that's a terraform core version, and this issue affects the aws provider. I'm seeing the problem in the current version of the aws provider, v2.20.0.

sorry @lorengordon you're quite right; early morning. Let me see what version of the provider we're using. We did genuinely see that behaviour go away..

I created an acceptance test to verify that this problem exists with the latest versions, and a PR (#9579) to fix the issue. Please 👍 the PR if it will work for you.

$ terraform -v
Terraform v0.12.6
+ provider.aws v2.21.1

@bflad

Bottom Line

I cannot reproduce this issue today. Within the last 21 days (between 7/31/19 and 8/20/19), something has changed to fix the issue. Even using the versions in @PhilStevenson's original 11/22/17 post, the problem doesn't exist.

Fixed by HashiCorp??

As much as I'd like to give you guys credit, the fix could not have been made by HashiCorp (and contributors). I've gone back in time to versions of Terraform v0.11.0-0.12.6 and the AWS provider v1.2.0-2.21.1. We knew the problem existed using those versions but now does not. The earliest I tried is:

$ terraform -v
Terraform v0.11.0
+ provider.aws v1.2.0

Fixed in AWS SDK Go??

No, the fix could not be a recent fix to the AWS SDK Go. AWS provider v1.2.0 uses AWS SDK Go v1.12.19, which is 2 years old. The fix happened in the last 21 days.

Fixed in the underlying AWS service API??

It must have been. It seems most likely that, since at least v1.2.0, the AWS provider has been attempting to do the right thing, but AWS itself had an issue. At some point between 7/31/19 and 8/20/19, the issue was silently fixed on the AWS side.

Test to verify underlying AWS service API was fixed

Step 1 - Install Terraform v0.11.0

Step 2 - Create a local file called rando.txt

Add some memorable text to the file so you can verify changes later. Don't use Terraform to supply the content in order to recreate the situation leading to the issue.

Step 3 - Config: terraform init/terraform apply

This config makes a bucket and two objects: one using archive_file and the other directly uploading a local file. _(filemd5() wasn't available in Terraform v0.11.0.)_

provider "aws" {
  version = "<= 1.2"
}

provider "archive" { 
  version = "<= 1.0"
}

resource "aws_s3_bucket" "config" {
  bucket = "tf-objects-test-bucket-d38245f48421"
}

resource "aws_s3_bucket_object" "roles" {
  bucket = "${aws_s3_bucket.config.id}"
  key    = "roles.zip"
  source = "${data.archive_file.zip.output_path}"
  etag   = "${data.archive_file.zip.output_md5}"
}

data "archive_file" "zip" {
  type        = "zip"
  source_file = "rando.txt"
  output_path = "doodah.zip"
}

resource "aws_s3_bucket_object" "another" {
  bucket = "${aws_s3_bucket.config.id}"
  key    = "rando.txt"
  source = "rando.txt"
  etag   = "${md5(file("rando.txt"))}"
}

Step 4 - Check object contents on S3

You should see that the objects have the content you created.

Step 5 - Change the contents of rando.txt

Make a memorable change to the file outside of Terraform.

Step 6 - terraform apply again

Previously, changing only the etag and not the source would result in a second apply that updated the etag but did not upload the changed file to S3. Now, this step finishes without error and the changed file is uploaded.

Step 7 - Check object contents on S3

You should see that the objects both have the updated content you changed out-of-band. It works where it didn't before.

Thanks for all the investigative work, @YakDriver. Since this appears to be an upstream API issue that was resolved and we added a covering acceptance test to check for API regressions, closing this. 👍

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
