Terraform-provider-google: google_storage_bucket_acl role_entities don't work

Created on 13 Jun 2017  ·  9 Comments  ·  Source: hashicorp/terraform-provider-google

_This issue was originally opened by @mikemcrill as hashicorp/terraform#10612. It was migrated here as part of the provider split. The original body of the issue is below._


Terraform Version

Terraform v0.7.13

Affected Resource(s)

  • google_storage_bucket_acl

Terraform Configuration Files

resource "google_storage_bucket_acl" "scripts-acl" {
  bucket = "${google_storage_bucket.scripts.name}"
  default_acl = "private"

  role_entity = [
    "OWNER:project-owners-123456",
    "READER:[email protected]",
    "READER:[email protected]",
    "READER:[email protected]",
  ]
}

Debug Output

https://gist.github.com/mikemcrill/b43cb01767812985338b9bb890da4a9b

Expected Behavior

ACL should be applied without errors

Actual Behavior

ACL tries to delete owner and fails

Steps to Reproduce

Apply Terraform with the above configuration

Important Factoids

I tried with and without referencing the OWNER entity; same effect. Terraform keeps trying to destroy the OWNER permission.

bug

All 9 comments

Any updates on this one? Not being able to use Terraform for bucket ACLs makes this resource unusable.

Having this problem with 0.9.11 as well, including the reappearing-diff issue described in https://github.com/hashicorp/terraform/issues/10612#issuecomment-303601680

I differ from the parent bug however. By removing a reference to OWNER in my ACL I'm able to create additional permissions, without having terraform attempt to delete the real owner.

It also seems worth mentioning that the permissions Terraform creates are of the older Storage Legacy Bucket variety.

Is this still an ongoing issue? I've got a working config at the moment that adds a user in the OWNER role, and it's working without a hitch for me. Does anyone have a minimal reproduction handy?

Here is our affected case; the ACL is updated on every apply:

resource "google_storage_bucket" "blubb_bucket" {
  name     = "${var.bucket_name}"
  location = "${var.location}"
  storage_class = "${var.storage_class}"

  website {
    main_page_suffix = "index.html"
  }
}

resource "google_storage_bucket_acl" "blubb_bucket" {
  bucket = "${google_storage_bucket.blubb_bucket.name}"

  role_entity = [
    "READER:AllUsers",
  ]
}

gsutil acl:

[
  {
    "entity": "project-owners-1234",
    "projectTeam": {
      "projectNumber": "1234",
      "team": "owners"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-editors-1234",
    "projectTeam": {
      "projectNumber": "1234",
      "team": "editors"
    },
    "role": "OWNER"
  },
  {
    "entity": "project-viewers-1234",
    "projectTeam": {
      "projectNumber": "1234",
      "team": "viewers"
    },
    "role": "READER"
  },
  {
    "entity": "allUsers",
    "role": "READER"
  }
]

Apply output:

~ module.infrastructure.foo.google_storage_bucket_acl.blubb_bucket
    role_entity.#: "0" => "1"
    role_entity.0: "" => "READER:AllUsers"

Terraform v0.9.11

Thanks! That clarified things immensely.

I've got good news and bad news.

  • Good news: I've found the cause of this, and opened a PR (#358) to address it.
  • Bad news: because this is a breaking change (even though, technically, the former behaviour was just incorrect) I think it best we wait for 1.0.0 to release this.
  • Good news: 1.0.0 was going to be (unless something's changed) the next release, anyways.
  • Bad news: even then, your config still won't work; you'll get a permanent diff from "READER:AllUsers" to "READER:allUsers".
  • Good news: if you fix that now, your config works as expected even without the fix.
  • Bad news: You'll have to add the project-owners-.... ACLs and the other default entries into your ACL, or they _will be removed_, which may cause problems for you. This is divergent from the current behaviour, but it's how the resource was intended to work the entire time.

Sorry for the rollercoaster, there. Hopefully that solution helps you out now, and the PR makes the problem clearer in the future. :)
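For anyone hitting the same diff, here's a sketch of what a stable config should look like after the fix, based on the gsutil output above. The project number (1234) is a placeholder; use your own project's default entries:

```hcl
resource "google_storage_bucket_acl" "blubb_bucket" {
  bucket = "${google_storage_bucket.blubb_bucket.name}"

  # "allUsers" must be lower-cased, and the project's default ACL entries
  # must be listed explicitly or they will be removed.
  # "1234" is a placeholder project number.
  role_entity = [
    "OWNER:project-owners-1234",
    "OWNER:project-editors-1234",
    "READER:project-viewers-1234",
    "READER:allUsers",
  ]
}
```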

Ah, sorry, misclick.

This has been merged to master, and will be released with the next release.

Hi,

We seem to have found a regression or an edge case for the google_storage_object_acl resource that is very similar to this one.

The terraform version is: v0.11.13

The file permissions were lost after a re-upload of the same files. Terraform detected the changes because the apply was done on different computers, which led to different paths for the uploaded files.

We've written a regression test; I'm sharing it as it helps illustrate the use case.

package test

import (
    "testing"
    "time"

    "github.com/gruntwork-io/terratest/modules/gcp"
    "github.com/gruntwork-io/terratest/modules/http-helper"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/gruntwork-io/terratest/modules/test-structure"
)

// This test simulates the tf code being applied from two different operators' computers.
// The use case: operator 1 applies the terraform code from their computer, and some time
// later operator 2 applies it from theirs. The paths to the js tracker files change,
// which leads to terraform detecting changes and forcing a re-upload, because it thinks
// the files are different when the paths differ between operator 1 and operator 2.

func TestGCPJSTracker(t *testing.T) {
    t.Parallel()

    options := &terraform.Options{
        TerraformDir: "/Users/polar_bear/work/terraform-modules/gcp_bucket_with_cdn/0.1.0",
        NoColor: true,
    }

    defer terraform.Destroy(t, options)

    //Simulate a terraform apply from a 1st computer
    terraform.Init(t, options)
    terraform.WorkspaceSelectOrNew(t, options, "ice_rink")
    terraform.Apply(t, options)

    tmpDir := test_structure.CopyTerraformFolderToTemp(t, "/Users/polar_bear/work/terraform-modules",
        "gcp_bucket_with_cdn/0.1.0")
    options.TerraformDir = tmpDir

    //Simulate a terraform apply from a 2nd computer
    terraform.Init(t, options)
    terraform.WorkspaceSelectOrNew(t, options, "ice_rink")
    terraform.Apply(t, options)

    gcp.AssertStorageBucketExists(t, "ice_rink_bucket")

    http_helper.HttpGetWithRetryWithCustomValidation(t,
        "https://ice_rink_bucket.example.com/file_uploade_by_tf.js",
        60,
        10 * time.Second,
        func(statusCode int, body string) bool {
            return statusCode == 200
        })
}

The Terraform module creates a bucket with a public CDN and then uploads some files that are made public using the google_storage_object_acl resource.
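A minimal sketch of that pattern, with hypothetical resource and bucket names (the real module's identifiers are not shown in the thread):

```hcl
# Hypothetical names for illustration; the real module's identifiers
# are not shown in the thread.
resource "google_storage_bucket_object" "tracker" {
  name   = "file_uploade_by_tf.js"
  bucket = "${google_storage_bucket.cdn_bucket.name}"
  source = "${path.module}/files/file_uploade_by_tf.js"
}

resource "google_storage_object_acl" "tracker" {
  bucket = "${google_storage_bucket.cdn_bucket.name}"
  object = "${google_storage_bucket_object.tracker.name}"

  role_entity = [
    "READER:allUsers",
  ]
}
```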

Regards

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
