Hi there,
aws_s3_bucket_object etag doesn't trigger new resource
```
Terraform v0.11.0
+ provider.archive v1.0.0
+ provider.aws v1.2.0
+ provider.null v1.0.0
+ provider.template v1.0.0
```
resource "aws_s3_bucket_object" "ansible-roles" {
bucket = "${aws_s3_bucket.config.id}"
key = "config/ansible/roles.zip"
source = "config/ansible/roles.zip"
etag = "${data.archive_file.zip.output_md5}"
}
etag should force a new resource so the updated source file is uploaded to S3. Instead, the apply just changes the value of etag in state and doesn't update the S3 object.
```
aws_s3_bucket_object.ansible-groups: Refreshing state... (ID: config/ansible/groups.yaml)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_s3_bucket_object.ansible-roles
      etag: "adc8d9a264cb5105cf63e611b9c79f9c" => "5bc7c6f6d565dcb0a785cf998fe2b15d"

Plan: 0 to add, 1 to change, 0 to destroy.
```
Hmm, this seems quite similar to https://github.com/hashicorp/terraform/issues/3068. Hey @radeksimko (or others), is there a good reason why ForceNew isn't set on this attribute? Especially considering the documentation explicitly states "Used to trigger updates."
This used to work correctly, but recently etag doesn't trigger an update and therefore the apply just changes the etag in state.
Just ran into this as well.
```
$ terraform version
Terraform v0.11.1
+ provider.aws v1.5.0
+ provider.random v1.1.0
+ provider.template v1.0.0
```
Just found this issue and was sure I was having this problem. It was a user error.
```
Terraform v0.11.1
...
- Downloading plugin for provider "aws" (1.6.0)...
- Downloading plugin for provider "archive" (1.0.0)...
```
@PhilStevenson -- any chance you can update the config you have to be:
resource "aws_s3_bucket_object" "ansible-roles" {
bucket = "${aws_s3_bucket.config.id}"
key = "config/ansible/roles.zip"
source = "${data.archive_file.zip.output_path}"
etag = "${data.archive_file.zip.output_md5}"
}
@moofish32 I tried the changes you proposed and am still getting the same error.
:tumbleweed:
I still have this issue as well. Trying to use this to upload lambda packages.
Does anyone have a workaround for this? I can't find any way to force-taint a resource via a lifecycle rule...
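One possible workaround (a hedged, untested sketch, reusing the `data.archive_file.zip` and `aws_s3_bucket.config` names from the configs earlier in this thread) is to sidestep the etag diff entirely with a `null_resource` whose trigger is the archive hash, uploading via the AWS CLI in a `local-exec` provisioner:

```hcl
# Hypothetical workaround, not confirmed by this thread: re-upload whenever
# the archive's MD5 changes, instead of relying on etag to force an update.
resource "null_resource" "reupload" {
  triggers = {
    archive_md5 = "${data.archive_file.zip.output_md5}"
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${data.archive_file.zip.output_path} s3://${aws_s3_bucket.config.id}/config/ansible/roles.zip"
  }
}
```

This trades Terraform's native object management for a shell-out, so drift detection on the object itself is lost; it only helps if you just need the upload to happen on change.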
@farmerbean we noticed this behaviour "seems fixed" in Terraform 0.11.13?
@farmerbean Hmm, but that's a terraform core version, and this issue affects the aws provider. I'm seeing the problem in the current version of the aws provider, v2.20.0.
sorry @lorengordon you're quite right; early morning. Let me see what version of the provider we're using. We did genuinely see that behaviour go away..
I created an acceptance test to verify that this problem exists with the latest versions, and a PR (#9579) to fix the issue. Please 👍 the PR if it will work for you.
```
$ terraform -v
Terraform v0.12.6
+ provider.aws v2.21.1
```
@bflad
I cannot reproduce this issue today. Within the last 21 days (between 7/31/19 and 8/20/19), something has changed to fix the issue. Even using the versions in @PhilStevenson's original 11/22/17 post, the problem doesn't exist.
As much as I'd like to give you guys credit, the fix could not have been made by HashiCorp (and contributors). I've gone back in time to versions of Terraform v0.11.0-0.12.6 and the AWS provider v1.2.0-2.21.1. We knew the problem existed using those versions but now does not. The earliest I tried is:
```
$ terraform -v
Terraform v0.11.0
+ provider.aws v1.2.0
```
No, the fix could not be a recent fix to the AWS SDK Go. AWS provider v1.2.0 uses AWS SDK Go v1.12.19, which is 2 years old. The fix happened in the last 21 days.
It must have been AWS itself. It seems most likely that, since at least v1.2, the AWS provider was attempting to do the right thing but AWS itself had an issue. At some point between 7/31/19 and 8/20/19, the issue was silently fixed on the AWS side.
rando.txt: Add some memorable text to the file so you can verify changes later. Don't use Terraform to supply the content, in order to recreate the situation leading to the issue.
terraform init / terraform apply: This config makes a bucket and two objects: one using archive_file and the other directly uploading a local file. _(filemd5() wasn't available in Terraform v0.11.0.)_
provider "aws" {
version = "<= 1.2"
}
provider "archive" {
version = "<= 1.0"
}
resource "aws_s3_bucket" "config" {
bucket = "tf-objects-test-bucket-d38245f48421"
}
resource "aws_s3_bucket_object" "roles" {
bucket = "${aws_s3_bucket.config.id}"
key = "roles.zip"
source = "${data.archive_file.zip.output_path}"
etag = "${data.archive_file.zip.output_md5}"
}
data "archive_file" "zip" {
type = "zip"
source_file = "rando.txt"
output_path = "doodah.zip"
}
resource "aws_s3_bucket_object" "another" {
bucket = "${aws_s3_bucket.config.id}"
key = "rando.txt"
source = "rando.txt"
etag = "${md5(file("rando.txt"))}"
}
You should see that the objects have the content you created.
rando.txt: Make a memorable change to the file outside of Terraform.
terraform apply again: Previously, changing only the etag and not the source would cause an error on the second apply, and the changed file would not be uploaded to S3. Now, however, this step finishes without error.
You should see that both objects have the updated content you changed out-of-band. It works where it didn't before.
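For anyone reproducing this on Terraform 0.12+, the second object in the repro can compute its etag with `filemd5()` instead of `md5(file(...))` (a minimal sketch of the same hypothetical resource, not part of the original 0.11 repro):

```hcl
resource "aws_s3_bucket_object" "another" {
  bucket = aws_s3_bucket.config.id
  key    = "rando.txt"
  source = "rando.txt"

  # filemd5() is available in Terraform 0.12+ and handles binary files,
  # unlike md5(file(...)), which requires valid UTF-8 content.
  etag = filemd5("rando.txt")
}
```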
Thanks for all the investigative work, @YakDriver. Since this appears to be an upstream API issue that was resolved and we added a covering acceptance test to check for API regressions, closing this. 👍
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!