terraform -v
Terraform v0.11.2
...
terraform {
  backend "gcs" {
    bucket      = "xxxxxxxx-shared-services"
    credentials = "xxxxxxxxxxxx.json"
    prefix      = "demo/demo.tfstate"
    project     = "xxxxxxxx-shared-services"
  }
}
data "terraform_remote_state" "foo" {
  backend = "gcs"

  config {
    bucket = "terraform-state"
    prefix = "prod"
  }
}
resource "google_compute_network" "default" {
  name                    = "demo"
  auto_create_subnetworks = "true"
}
...
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.terraform_remote_state.gcs: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
Terraform will perform the following actions:
Plan: 1 to add, 0 to change, 0 to destroy.
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.terraform_remote_state.gcs: Refreshing state...
Error: Error refreshing state: 1 error(s) occurred:
data.terraform_remote_state.gcs: 1 error(s) occurred:
data.terraform_remote_state.gcs: data.terraform_remote_state.gcs: error loading the remote state: Failed to open state file at gs://xxxxxxxx-shared-services/prod/default.tfstate: googleapi: got HTTP response code 403 with body: AccessDenied
terraform init
terraform plan
The error indicates Terraform is using a different service account from the one the key belongs to; it is actually trying to use a default service account from the wrong project.
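One quick way to confirm which identity a key file actually grants is to read its client_email field and compare it with the account named in the 403 error. A minimal sketch (the path and email below are hypothetical stand-ins, not values from this thread):

```shell
# Write a minimal stand-in key so the example is self-contained;
# in practice, point KEY_FILE at your real service-account key file.
KEY_FILE=/tmp/sa-key.json
cat > "$KEY_FILE" <<'EOF'
{"type": "service_account", "client_email": "my-sa@my-project.iam.gserviceaccount.com"}
EOF

# The client_email is the identity the key grants. If the 403 error
# names a different account, Terraform is not actually using this key.
grep -o '"client_email": "[^"]*"' "$KEY_FILE"
```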
I had a similar issue, I think.
I did the following:
1) create a master project
2) create a service account in master project
3) create a secondary project
4) add the service account from the master project to the secondary project
5) run terraform from a container in the master project but try and modify resources in the secondary project
I fixed the issue by setting: GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/service_account.json
Hope this helps
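The fix above boils down to exporting the variable before running Terraform, so both the google provider and the gcs backend resolve to the key you intend rather than whatever default credentials happen to be on the machine. A sketch, with a hypothetical key path:

```shell
# Hypothetical path; substitute the key of the master-project service account.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/master-sa.json"
echo "ADC will use: $GOOGLE_APPLICATION_CREDENTIALS"

# Subsequent runs in this shell then pick up that identity:
#   terraform init -reconfigure
#   terraform plan
```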
For anyone who would like to use user credentials instead: you can update your ~/.config/gcloud/application_default_credentials.json file with the following gcloud command:
gcloud auth application-default login
https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
I’m having the same issue. Wondering what is behind this discrepancy
main.tf
// Configure the Google Cloud provider
provider "google" {
  credentials = "${file("<creds_file_location.json>")}"
  project     = "<project name>"
  region      = "us-west1"
}

terraform {
  backend "gcs" {
    bucket      = "tf-bucket"
    prefix      = "terraform"
    credentials = "<creds_file_location.json>"
  }
}
creds_file_location.json
{
  "type": "service_account",
  "project_id": "<project-id>",
  "private_key_id": "<private_key_id>",
  "private_key": "-----BEGIN PRIVATE KEY----- <private key>",
  "client_email": "1-robot@...>",
  "client_id": "<client-id>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/1-robot%..."
}
Trying to initialize the backend:
terraform init
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403: other-robot@... does not have storage.objects.list access to tf-bucket, forbidden
Other factors:
The 1-robot service account was given admin access to the tf-bucket storage bucket.
I solved this issue by running
terraform init -reconfigure
This bug remains open in the latest version:
Terraform v0.12.26
Same issue Terraform v0.12.26
I got the issue using the same credentials too:
Error: Error loading state:
Failed to open state file at gs://poc-terraform/tfstates/default.tfstate: Get https://storage.googleapis.com/poc-terraform/tfstates/default.tfstate: Forbidden
terraform init -reconfigure
got same issue
@kladiv most probably it is your proxy that is failing with HTTP 403 for the given request to GCP. Please try without the proxy; in my case that was the problem.
Same problem here:
Terraform v0.13.0
+ provider registry.terraform.io/hashicorp/google v3.35.0
+ provider registry.terraform.io/hashicorp/google-beta v3.35.0
+ provider registry.terraform.io/hashicorp/local v1.4.0
Having this problem again. GCP roles are simply defective.
To anyone who hit the same issue as I and some other commenters did, here is how I fixed it.
Issue:
Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403: <service-account>@<domain>... does not have storage.objects.list access to tf-state, forbidden
Explanation:
I got the error message after configuring the gcs storage backend. I followed the documentation a bit too closely and used the same bucket name, tf-state, as in the documentation. The documentation gave no indication that the bucket needed to be created manually in advance, so I expected the gcs backend to handle the bucket creation.
When I got the does not have storage.objects.list access to tf-state error, I triple-checked my configuration and confirmed that the Terraform service account had the correct permissions; it did. After brainstorming with a colleague I tried creating the bucket manually in the Google Cloud web UI and got a message saying the name was already taken. Of course it was taken: bucket names need to be globally unique across the entire Google Cloud service (all organisations and projects).
Solution:
To fix the problem I manually created a new bucket with a globally unique name of the form tf-state-<project-id> and updated the Terraform backend configuration to use it. This fixed the issue and I could run terraform init without problems.
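Since the gcs backend does not create the state bucket for you and bucket names are global, creating a uniquely named bucket up front is the safe pattern. A sketch, assuming the Cloud SDK's gsutil is available (the project id below is a placeholder):

```shell
PROJECT_ID="my-project"            # placeholder; use your real project id
BUCKET="tf-state-${PROJECT_ID}"    # the project id makes the name globally unique
echo "State bucket: gs://${BUCKET}"

# Create the bucket before the first terraform init (requires gsutil):
#   gsutil mb -p "$PROJECT_ID" "gs://${BUCKET}"
```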
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.