This is an interesting one. Given this setup:
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "${var.aws_region}"
}
resource "aws_s3_bucket" "my_bucket" {
bucket = "this-is-the-greatest-terraform-test-evahr-bucket"
}
resource "template_file" "my_template" {
template = "some template text"
provisioner "local-exec" {
command = "echo '${template_file.my_template.rendered}' | zip /tmp/${md5(template_file.my_template.rendered)}.zip -"
}
}
resource "aws_s3_bucket_object" "my_bucket_object" {
depends_on = [
"template_file.my_template",
]
bucket = "${aws_s3_bucket.my_bucket.id}"
key = "my_bucket_object.zip"
content = "${file("/tmp/${md5(template_file.my_template.rendered)}.zip")}"
}
aws_s3_bucket_object.my_bucket_object fails because the file created by local-exec in template_file.my_template does not exist.
This seems to suggest that the dependency doesn't wait for the provisioner to finish before executing dependent resources.
+1
I have the same issue. As a result, when I try to create the aws_s3_bucket_object resource I get this intermittent failure:
aws_s3_bucket.private: Error putting S3 ACL: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: DCB82C8687CAB572
And it goes away after another apply.
+1
I haven't been able to reproduce this and I even wrote a unit test to verify it. Currently, resources+provisioners are treated as an atomic unit so any dependencies MUST wait for provisioners to complete before continuing. The part where core says "this is now done, move on to things that depend on me" is _after_ provisioner runs.
So I feel like something else may be happening here, though I'm not sure what... perhaps a file flushing issue?
If someone has more info please let me know and I'll reopen. But at the moment I spent the past 5 to 10 minutes coming up with test cases and wasn't able to make this behavior happen.
Now that there is an archive data source I don't have a need for this particular use case. :)
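For anyone who lands here with the same use case, here is a rough sketch of what the archive_file approach can look like (the resource names and the /tmp path are illustrative, not from a real config):

data "archive_file" "my_template" {
  type                    = "zip"
  source_content          = "some template text"
  source_content_filename = "template.txt"
  output_path             = "/tmp/my_template.zip"
}

resource "aws_s3_bucket_object" "my_bucket_object" {
  bucket = "${aws_s3_bucket.my_bucket.id}"
  key    = "my_bucket_object.zip"

  # The data source writes the zip during the same run, so no local-exec or
  # depends_on ordering trick is needed for the upload.
  source = "${data.archive_file.my_template.output_path}"
}

Since the zip is produced by the data source itself, the provisioner-ordering problem never comes up for this case.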
@mitchellh Take a look at the log I attached.
console-log.txt
I'm not totally sure it is related, but it is definitely an issue. I actually had to add a retry to my deployment script, but it still fails sometimes anyway.
Some parts of my deployment script:
PLAN="/tmp/deployment.tfplan"
ARGS="-var build_number=1 -var-file=mode/demo.tfvars -out=$PLAN"
echo "Recreate S3 buckets..."
if /opt/terraform/terraform show | grep -q "aws_s3_bucket.private"; then
/opt/terraform/terraform taint aws_s3_bucket.private
fi
if /opt/terraform/terraform show | grep -q "aws_s3_bucket.public"; then
/opt/terraform/terraform taint aws_s3_bucket.public
fi
/opt/terraform/terraform plan $ARGS
/opt/terraform/terraform apply $PLAN || (sleep 5 && /opt/terraform/terraform apply $PLAN)
I see this failure several times a week. Let me know what I can dump or debug and I will help reproduce this issue.
The relevant part of my configuration is here: main.tf
@abguy This looks unrelated to this bug since there are no provisioners, but this does look like an issue with the AWS resource.
Does it make sense to report a new bug?
Yes I believe so.
Posted it to #10121
Reopening this bug because I am seeing this happen in my Terraform code. Here's how I have my code setup:
resource "aws_s3_bucket" "bucket" {
bucket = "${var.prefix}-test"
acl = "public-read"
website {
index_document = "index.html"
error_document = "error.html"
}
provisioner "local-exec" {
command = "git clone https://github.com/org_name/repo_name /tmp/frontend"
}
}
resource "aws_s3_bucket_object" "html" {
depends_on = ["aws_s3_bucket.bucket"]
bucket = "${aws_s3_bucket.bucket.id}"
key = "index.html"
acl = "public-read"
source = "/tmp/frontend/index.html"
}
Given what I understand about Terraform and how the depends_on attribute works, I would _expect_ my aws_s3_bucket_object resource to wait until my aws_s3_bucket is complete, including the provisioner. Instead, Terraform attempts to run both resources at once, which causes the whole thing to fail.
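In case it helps anyone hitting the same wall, one workaround sketch (untested here, and the null_resource name is made up) is to move the clone into a null_resource so it becomes its own node in the graph between the bucket and the object:

resource "null_resource" "clone_frontend" {
  depends_on = ["aws_s3_bucket.bucket"]

  provisioner "local-exec" {
    command = "git clone https://github.com/org_name/repo_name /tmp/frontend"
  }
}

resource "aws_s3_bucket_object" "html" {
  # Depend on the null_resource rather than the bucket, so the upload can
  # only start after the clone provisioner has run.
  depends_on = ["null_resource.clone_frontend"]

  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "index.html"
  acl    = "public-read"
  source = "/tmp/frontend/index.html"
}

This doesn't explain why the original depends_on doesn't wait, but it at least makes the ordering explicit.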
@mitchellh Uh, not sure how hard you tried to reproduce, but this still happens in v0.11.13.
I have a Heroku build which uses a local source because the Heroku build provider doesn't accept authentication (hack #1). This works exactly like the example above, with a local-exec and the resource depending on it.
But (hack #2) the source path to the git repo has to exist for plan to pass, and local-exec is not run during plan (obviously). So then you have a chicken-and-egg scenario: you can't use /tmp at all for this stuff; you need some ./.tmp stubbed in your repo that you then nuke and re-clone, and then a gun in your desk drawer to shoot yourself with when you realise how ugly this becomes just to solve a really, really trivial issue, and you long for the days when you had just written a bash script with ssh and expect. Then you remember how horrific it was writing expect scripts, so you quit IT and become a farmer instead. Modern society quickly collapses, the internet runs dry. Aliens attack; helpless, we become their slaves.
All because you wouldn't let our local-exec work properly in the dependency graph.
Okay I'm exaggerating but please do consider re-opening this issue.
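For what it's worth, the stub-directory hack described above looks roughly like this (paths and resource names are illustrative only):

resource "null_resource" "fetch_source" {
  provisioner "local-exec" {
    # ./.tmp/frontend is kept in the repo (e.g. via a .gitkeep) so the path
    # already exists at plan time; the real contents only appear at apply,
    # when the old checkout is nuked and re-cloned.
    command = "rm -rf ./.tmp/frontend && git clone https://github.com/org_name/repo_name ./.tmp/frontend"
  }
}

Ugly, but it keeps plan happy until local-exec ordering works properly in the dependency graph.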
Hi all,
Once a closed bug gets as old as this one, new issues that seem similar are often not the same root cause, because the code in question has changed significantly since the issue was originally filed. That is particularly true in this case, where the handling of provisioners as part of the resource lifecycle has been changed quite a bit since March 2016, and the members of the Terraform team have changed since this issue was last discussed.
If you are seeing a problem that seems similar to this old issue, please open a new bug report, completing the issue template to gather all of the necessary context for debugging, and then the team can attempt to reproduce using the information you've provided.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.