Terraform-provider-google: oauth scopes on google_container_cluster/google_container_node_pool should default to the upstream default

Created on 30 Aug 2018 · 4 comments · Source: hashicorp/terraform-provider-google

When you create a 1.10 cluster via the console or CLI with no scopes specified, your cluster gets the storage-ro scope; it does not when you create it via Terraform.

The changes in 1.10 are explained at https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes; the important part is:

Beginning with Kubernetes version 1.10, gcloud and GCP Console no longer grants the compute-rw access scope on new clusters and new node pools by default. Furthermore, if --scopes is specified in gcloud container create, gcloud no longer silently adds compute-rw or storage-ro.

This documentation is kind of misleading: https://www.terraform.io/docs/providers/google/r/container_cluster.html#oauth_scopes (compute-rw is not required)

If no oauth_scopes are present, the provider appears to request creation of a cluster with an empty array. This has the effect of granting only the required logging and monitoring scopes.

Given that this is not the upstream default, I think this should be changed.

If no scopes are specified, it should be implied that you get the upstream default, which is logging, monitoring, and storage-ro.

If an empty array is present, the empty array should be submitted, resulting in only the logging and monitoring scopes.

If scopes are provided, they should be used as given.
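To make the distinction concrete, here is a rough sketch of the three cases as fragments of a google_container_cluster node_config block (this illustrates the proposed behaviour, not what the provider currently does; the scope URL in the last case is just an example):

```hcl
# Case 1: oauth_scopes omitted. Proposal: fall back to the upstream default
# (logging, monitoring, storage-ro) rather than submitting an empty list.
node_config {}

# Case 2: oauth_scopes explicitly empty. Submit the empty list, which yields
# only the required logging & monitoring scopes.
node_config {
  oauth_scopes = []
}

# Case 3: oauth_scopes provided. Use exactly what was given.
node_config {
  oauth_scopes = ["https://www.googleapis.com/auth/devstorage.read_only"]
}
```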

Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool
Labels: documentation, upstream-terraform

All 4 comments

This documentation is kind of misleading

If I had to guess, the documentation was current when it was written and was not updated to track the changes when upstream changed. I'll label this with documentation to get that updated.

If no oauth_scopes are present, the provider appears to request creation of a cluster with an empty array. This has the effect of granting only the required logging and monitoring scopes.

Given that this is not the upstream default, I think this should be changed.

If no scopes are specified, it should be implied that you get the upstream default, which is logging, monitoring, and storage-ro.

If an empty array is present, the empty array should be submitted, resulting in only the logging and monitoring scopes.

If scopes are provided, they should be used as given.

This behaviour is reasonable and I don't necessarily disagree with it _in theory_, but we currently face some technical limitations. As it stands, it's either impossible, or nearly so, to determine whether a list (or any other type) is not present in a config, or if it's set to the empty value. This means we have no good way of distinguishing between scopes = [] and scopes not being set at all. Because of this, we can't implement the behaviour you've outlined above. The changes coming in Terraform 0.12 lay the groundwork for us potentially being able to make that distinction, but it's unclear right now whether they will land in 0.12, in a later version, or at all.

My preference at the moment is to wait and see if we can get the tools to make that distinction, because we want to preserve the ability to 1) define exactly which scopes you want, 2) ask for the minimum possible set of scopes, and 3) ask for the default set of scopes. I think all three of those are reasonable use cases, and I don't want to force an opinion on people about which they should use. At the moment, I think we can approximate the only one we're missing ("use the default set") by asking users to specify those manually, and I also think that's a rather non-intrusive workaround, as workarounds go. However, if that workaround is too constraining for you and we need to find a solution before we get the ability to determine whether a field is set to its empty value or unset, I'd love to hear more about it.
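For reference, that workaround amounts to listing the current upstream default set explicitly. A minimal sketch, with placeholder cluster name and zone, using the full scope URLs for logging, monitoring, and storage-ro:

```hcl
resource "google_container_cluster" "primary" {
  name               = "example-cluster" # placeholder
  zone               = "us-central1-a"   # placeholder
  initial_node_count = 1

  node_config {
    # Pin the GKE 1.10+ default scopes explicitly instead of relying on what
    # the provider submits when oauth_scopes is unset.
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only"
    ]
  }
}
```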

As it stands, it's either impossible, or nearly so, to determine whether a list (or any other type) is not present in a config, or if it's set to the empty value

I was wondering if that was the case, that does make it more difficult.

We could theoretically make the default value [logging, monitoring, storage-ro], but that is once again Terraform being opinionated about what the upstream default should be, and not something you want to have to keep in sync over time.

At the moment, I think we can approximate the only one we're missing ("use the default set") by asking users to specify those manually, and I also think that's a rather non-intrusive workaround, as workarounds go.

I am entirely fine with this and agree that maybe the outcome of this bug is simply some better documentation, as the defaults have changed and many of the blog posts are now not quite correct.

Personally, now that I know what the cause is, I'm happy to maintain the list of scopes. I filed the bug because the missing storage-ro scope means you can't read from GCR, and I was scratching my head for a couple of hours this afternoon wondering why this had broken in the last month or so.

I'm not really sure exactly how this should get called out. Ideally it would be at the top of the doc rather than in the oauth_scopes parameter, but the existing sentence "The following scopes are necessary to ensure the correct functioning of the cluster:" implied to me that if I were to make changes to the scopes I should be sure to include these, not that I must include them by default to make the cluster work as you would expect.

Somewhat related question: does anyone know how to implement storage-ro using only IAM roles? I've tried adding roles/storage.objectViewer to the default service account with no luck; the pods are still unable to pull private images from the project's GCR.

Nevermind, the docs are a bit ambiguous on this but essentially you need both:

  • scopes on the instance; and
  • IAM roles on the service account

The confusing pieces of the docs:

https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes

Access scopes are the legacy method of specifying permissions for your nodes, and for workloads running on your nodes if the workloads are using application default credentials (ADC).

https://cloud.google.com/compute/docs/access/service-accounts#service_account_permissions

When you set up an instance to run as a service account, the level of access the service account has is determined by the combination of access scopes granted to the instance and IAM roles granted to the service account. You need to configure both access scopes and IAM roles to successfully set up an instance to run as a service account

https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#best_practices

Grant the instance the https://www.googleapis.com/auth/cloud-platform scope to allow full access to all Google Cloud APIs, so that the IAM permissions of the instance are completely determined by the IAM roles of the service account.

This raises the question: if scopes are considered "legacy", why do we still need them?
For now I'll go ahead and follow their best practice of supplying the cloud-platform scope to all nodes.
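A rough sketch of that approach, combining the cloud-platform scope with an IAM role on the node service account (project ID, resource names, and the service account email are placeholders; roles/storage.objectViewer grants read access to the GCS bucket backing GCR):

```hcl
resource "google_container_node_pool" "default" {
  name       = "default-pool" # placeholder
  cluster    = "${google_container_cluster.primary.name}"
  zone       = "us-central1-a" # placeholder
  node_count = 3

  node_config {
    service_account = "nodes@my-project.iam.gserviceaccount.com" # placeholder

    # Broad access scope; effective permissions are then governed entirely by
    # the IAM roles granted to the service account.
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}

# The IAM role is needed in addition to the scope, e.g. so nodes can pull
# private images from this project's GCR.
resource "google_project_iam_member" "nodes_gcr_read" {
  project = "my-project" # placeholder
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:nodes@my-project.iam.gserviceaccount.com"
}
```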
