Terraform v0.11.11
resource "google_bigquery_dataset" "ProjectLogging" {
dataset_id = "ProjectLogging"
friendly_name = "ProjectLogging"
location = "US"
project = "${google_project.project.project_id}"
labels {
env = "${var.environment}"
}
}
resource "google_logging_project_sink" "ProjectSink" {
name = "ProjectLogging"
destination = "bigquery.googleapis.com/projects/${google_project.project.project_id}/datasets/ProjectLogging"
project = "${google_project.project.project_id}"
filter = "resource.type = project"
depends_on = ["google_bigquery_dataset.ProjectLogging"]
unique_writer_identity = true
}
We use BigQuery datasets to store our logging data, so in Terraform I need to be able to grant the sink's writer identity WRITER access on the dataset. We're unable to use project-level IAM (google_project_iam_binding) due to our security requirements.
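For reference, this is roughly the project-level IAM grant we're ruling out; the resource name sink_writer is illustrative, and roles/bigquery.dataEditor is the role typically granted to a BigQuery sink's writer identity:

resource "google_project_iam_binding" "sink_writer" {
  project = "${google_project.project.project_id}"
  role    = "roles/bigquery.dataEditor"

  # writer_identity already carries the "serviceAccount:" prefix IAM expects
  members = ["${google_logging_project_sink.ProjectSink.writer_identity}"]
}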
I tried the below just to see what would happen, and Terraform complained about a cycle: the dataset's access block references the sink's writer_identity while the sink's depends_on references the dataset, so neither resource can be created first.
resource "google_bigquery_dataset" "ProjectLogging" {
dataset_id = "ProjectLogging"
friendly_name = "ProjectLogging"
location = "US"
project = "${google_project.project.project_id}"
labels {
env = "${var.environment}"
}
access = {
role = "WRITER"
user_by_email = "${google_logging_project_sink.ProjectSink.writer_identity}"
}
}
resource "google_logging_project_sink" "ProjectSink" {
name = "ProjectLogging"
destination = "bigquery.googleapis.com/projects/${google_project.project.project_id}/datasets/ProjectLogging"
project = "${google_project.project.project_id}"
filter = "resource.type = project"
depends_on = ["google_bigquery_dataset.ProjectLogging"]
unique_writer_identity = true
}
We're using the standard method of a service account and not doing anything else I'd consider to be "odd".
Related: #2051 for allowing IAM policies on a particular BigQuery dataset
I ran into this today and I'm curious what the recommendation for this is. The access block is used for this, and I guess it would require separate resources and dataset.patch calls? I'm thinking my workaround right now might be to use a null_resource with a local-exec provisioner or something to use the gcloud CLI to get the job done, but ideally this could be 100% Terraform.
If it's helpful for anyone who might need a workaround for now, I ended up doing the following. Less than ideal, but it gets the job done (the Terraform config plus the accompanying set-flow-log-writer-access.sh used by the provisioner):
resource "google_logging_project_sink" "flow_log_sink" {
name = "flow-logs"
destination = "bigquery.googleapis.com/projects/${data.google_project.project.project_id}/datasets/${google_bigquery_dataset.flow_logs.dataset_id}"
filter = "resource.type = gce_subnetwork AND logName=projects/${data.google_project.project.project_id}/logs/compute.googleapis.com%2Fvpc_flows"
unique_writer_identity = "true"
}
resource "google_bigquery_dataset" "flow_logs" {
..
# all the below workarounds (lifecycle + null resource with exec) is related to
# not being able to correctly attach the sink writer to the bigquery dataset.
# see: https://github.com/terraform-providers/terraform-provider-google/issues/3012
#
# access {
# role = "WRITER"
# user_by_email = "${google_logging_project_sink.flow_log_sink.writer_identity}"
# }
lifecycle {
ignore_changes = ["access"]
}
access {
role = "OWNER"
special_group = "projectOwners"
}
access {
role = "READER"
special_group = "projectReaders"
}
}
resource "null_resource" "flow_log_access" {
count = "${var.collect_flow_logs}"
triggers = {
writer_identity = "${google_logging_project_sink.flow_log_sink.writer_identity}"
}
provisioner "local-exec" {
command = "${path.module}/set-flow-log-writer-access.sh ${data.google_project.project.project_id} ${google_bigquery_dataset.flow_logs.dataset_id} ${google_logging_project_sink.flow_log_sink.writer_identity}"
}
}
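And the accompanying set-flow-log-writer-access.sh: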
#!/bin/bash
set -e

if [[ -z "$1" || -z "$2" || -z "$3" ]]; then
  echo "pass [project id] [dataset id] [writer identity]"
  exit 1
fi

if ! [ -x "$(command -v bq)" ]; then
  echo "bq cli (bigquery cli, from gcloud cli) is not available in path"
  exit 1
fi

project=$1
dataset=$2
# strip the "serviceAccount:" prefix Terraform includes in writer_identity,
# since BigQuery's userByEmail field expects the bare email address
writer=${3#"serviceAccount:"}

temp_file=$(mktemp)
trap "rm -f ${temp_file}" EXIT

# fetch the dataset's current ACL, append a WRITER entry for the sink's
# identity, then push the updated access list back
bq show --format=prettyjson --project_id "${project}" "${dataset}" \
  | jq --arg writer "${writer}" '.access | . += [{"role": "WRITER", "userByEmail": $writer }] | {"access": .}' > "${temp_file}"
bq update --source "${temp_file}" "${project}:${dataset}"
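Invoked by hand (the argument values here are made up), it looks like:

./set-flow-log-writer-access.sh my-project flow_logs serviceAccount:p123456789-1234@gcp-sa-logging.iam.gserviceaccount.com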
We just ran into the same issue. The only way around it that I can think of is a new resource named something like "google_bigquery_dataset_access", where you would specify all of the access to a specific dataset, as sketched below. Otherwise, there is a circular dependency you cannot get around except with @chrisboulton's hack.
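A rough sketch of what such a resource could look like, reusing the names from the original report (google_bigquery_dataset_access and its arguments are hypothetical here, modeled on the provider's conventions; the replace() strips the "serviceAccount:" prefix that BigQuery access entries don't accept):

resource "google_bigquery_dataset_access" "sink_writer" {
  project       = "${google_project.project.project_id}"
  dataset_id    = "${google_bigquery_dataset.ProjectLogging.dataset_id}"
  role          = "WRITER"
  user_by_email = "${replace(google_logging_project_sink.ProjectSink.writer_identity, "serviceAccount:", "")}"
}

Because the access entry lives in its own resource that depends on both the dataset and the sink, neither of those two references the other and the cycle disappears.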
@gosseljl, closing this issue as the fix (#3012) was applied. Please feel free to reopen it if you still encounter the issue. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 [email protected]. Thanks!