Terraform-provider-aws: Provisioning CodeBuild - "Error: cache location is required when cache type is "S3""

Created on 22 Sep 2019 · 6 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.12.7

Affected Resource(s)

  • aws_codebuild_project

Terraform Configuration Files

Downloadable here: https://drive.google.com/file/d/1QM47TzsCFfpNB0KtspWwJoiOZ6uWYbqV/view?usp=sharing

Expected Behavior

CodeBuild is provisioned

Actual Behavior

Error: cache location is required when cache type is "S3"

  on codebuild/codebuild-tests.tf line 1, in resource "aws_codebuild_project" "codebuild_tests":
   1: resource "aws_codebuild_project" "codebuild_tests" {

References

  • #4639
needs-triage service/codebuild

Most helpful comment

However, using aws_s3_bucket.codebuild.bucket instead of aws_s3_bucket.codebuild.id works.

All 6 comments

Exact line in the Terraform code where this is triggered: https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_codebuild_project.go#L579

I have no idea why this happens, though. Maybe client-side validation should be skipped when dynamic resource tokens are present?
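For reference, a minimal configuration sketch that produces an unknown cache location at plan time (resource names are illustrative, and the required source/environment/artifacts blocks are omitted):

```hcl
resource "aws_s3_bucket" "cache" {
  # bucket_prefix makes the final bucket name computed,
  # so it is unknown until apply
  bucket_prefix = "ci-cache-"
}

resource "aws_codebuild_project" "example" {
  name = "example"

  cache {
    type = "S3"
    # unknown at plan time -> "cache location is required
    # when cache type is "S3""
    location = aws_s3_bucket.cache.id
  }
  # ...source, environment, artifacts omitted for brevity
}
```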

I figured out a workaround:

The S3 bucket resource that I was referencing had the following structure:

resource "aws_s3_bucket" "ci_bucket" {
  bucket_prefix = "${lower(var.tag)}-${var.branch_name}-cb"
}

The solution was to replace it with a static bucket name, presumably because bucket_prefix leaves the final name unknown until apply:

resource "aws_s3_bucket" "ci_bucket" {
  bucket = "${lower(var.tag)}-${var.branch_name}-cb"
}

Want to add my experience: the issue seems to be intermittent, appearing both on plan/apply and on destroy. A template applied successfully and showed no changes, but a destroy cannot get past refreshing state due to the identical error:

Error: cache location is required when cache type is "S3"

  on pipelines.tf line 7, in resource "aws_codebuild_project" "build":
   7: resource "aws_codebuild_project" "build" {

An example of the cache settings on the resource:

resource "aws_codebuild_project" "build" {
  # ...
  cache {
    location = "${module.s3_bucket.bucket_id}/build_cache"
    modes    = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
    type     = "S3"
  }
  # ...
}

I ran into the same thing.

The following does not work; it also fails when the aws_s3_bucket resource lives in a different module that exposes the name as an output.

resource "aws_s3_bucket" "codebuild" {
  bucket = var.codebuild_bucket_name
  acl    = "private"
}

resource "aws_codebuild_project" "project" {
  name = var.name

  cache {
    type     = "S3"
    location = "${aws_s3_bucket.codebuild.id}/${var.name}"
  }
# ...more stuff
}

My workaround was to pass the name directly to the aws_codebuild_project, with the downside that it no longer creates an implicit dependency between the two resources.

resource "aws_s3_bucket" "codebuild" {
  bucket = var.codebuild_bucket_name
  acl    = "private"
}

resource "aws_codebuild_project" "project" {
  name = var.name

  cache {
    type     = "S3"
    location = "${var.codebuild_bucket_name}/${var.name}"
  }
# ...more stuff
}
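If that ordering matters, an explicit depends_on can restore it while still passing the raw variable (a sketch, not from the original comment):

```hcl
resource "aws_codebuild_project" "project" {
  name = var.name

  cache {
    type     = "S3"
    location = "${var.codebuild_bucket_name}/${var.name}"
  }

  # restores the ordering lost by not referencing the bucket resource
  depends_on = [aws_s3_bucket.codebuild]
}
```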

However, using aws_s3_bucket.codebuild.bucket instead of aws_s3_bucket.codebuild.id works.
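A sketch of that working variant, assuming the same resource names as in the previous snippets:

```hcl
resource "aws_codebuild_project" "project" {
  name = var.name

  cache {
    type = "S3"
    # .bucket is the configured name and known at plan time,
    # whereas .id is computed by the provider
    location = "${aws_s3_bucket.codebuild.bucket}/${var.name}"
  }
  # ...more stuff
}
```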

We have the same error, and we already use aws_s3_bucket.codebuild.bucket. I've tried switching from bucket to id and back, but I always get the same error.
