Terraform: Kubernetes provider should be able to use output from google_container_engine

Created on 19 Mar 2017 · 10 Comments · Source: hashicorp/terraform

The Kubernetes provider was introduced in Terraform 0.9.1. It can take client_certificate, client_key and cluster_ca_certificate to create a connection to a Kubernetes cluster. When the cluster is created by the google_container_cluster resource, this information is available as attributes, but it is base64-encoded, while the Kubernetes provider expects plain PEM strings.
The Kubernetes provider should be changed to accept these values as base64 (or the google_container_cluster resource should provide them as plain PEM strings).
A workaround would probably be to use the base64decode function for variable interpolation, but I can't seem to get that to work.
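A sketch of the workaround being attempted, assuming a google_container_cluster resource named primary (the .0 list index turns out to matter, as discussed further down in this thread):

provider "kubernetes" {
  host = "https://${google_container_cluster.primary.endpoint}"

  # master_auth attributes come back base64-encoded, so they have to be
  # decoded into plain PEM before the provider can use them
  client_certificate     = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}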

bug provider/google-cloud provider/kubernetes


All 10 comments

Hi @drzero42,
Thanks for trying out the new K8S provider. I'm sorry to hear you're having issues.

I believe that one of the main reasons GKE resources don't interact very well with the Kubernetes provider is https://github.com/hashicorp/terraform/issues/12393 (a core bug). That's not something to be solved within the context of any provider - it affects other providers too. This bug causes the K8S provider to appear unconfigured during the apply phase, which in turn produces an error like this:

* kubernetes_namespace.n: Post http://localhost/api/v1/namespaces: dial tcp [::1]:80: getsockopt: connection refused

because the provider defaults to http://localhost if the endpoint isn't set.

The other thing you mentioned is base64 decoding of the PEM data. I believe the base64decode function should work just fine - assuming you work around the above bug by creating the GKE cluster first, so that it is already in the state prior to applying any kubernetes_* resources.

See this gist: https://gist.github.com/radeksimko/1a2cc98c5536bd4aa92e960ed7a47cf0
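In the spirit of that gist, the end-to-end layout looks roughly like the following sketch (resource and cluster names here are placeholders, not taken from the gist; the provider "kubernetes" block is the base64decode configuration sketched in the issue description above):

resource "google_container_cluster" "primary" {
  name               = "example-cluster"
  zone               = "us-central1-a"
  initial_node_count = 1
}

# provider "kubernetes" { ... } wired to google_container_cluster.primary
# via base64decode(), as shown earlier in this issue

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}

Because of the core bug above, the cluster may need to exist in the state first (e.g. via a targeted apply of just the google_container_cluster resource) before the kubernetes_namespace is applied.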

Can you provide more context/details about what isn't working for you in regards to base64decode, so we can reproduce and fix?

Thanks!

@radeksimko I don't believe the problem I am experiencing is related to #12393, since I only added the kubernetes provider setup after having GKE resources up and running with Terraform. The GCP resources in general seem fairly wonky, with, amongst other things, timeouts during provisioning of the GKE cluster and, as mentioned in #12393, the kubernetes provider not being able to figure out that it needs to wait for the google provider to do its thing. However, having worked around that, I still believe that the kubernetes provider and the google_container_cluster resource should agree on whether the certs and keys are base64-encoded or not.

Based on your gist, which really helps me, my problems with base64decode seem to have mainly been down to me not understanding the data structure provided by the google_container_cluster resource. The documentation does not really indicate that master_auth is a list whose first element I need to access (google_container_cluster.primary.master_auth.0.client_certificate), so I was just accessing attributes directly (google_container_cluster.primary.master_auth.client_certificate), and the error message was not entirely understandable to me, leading to my confusion (see the snippet below). I thought I might have misunderstood the syntax needed for base64decode and various other built-in functions. Better examples in the documentation would be awesome.
Given your gist, I should be able to work around the problem, so thanks :) I do, however, still believe this bug should be kept open and fixed. I do not really care how it is fixed, but having to use base64decode seems unnecessarily complex to me.
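To make the list shape concrete, here is a hypothetical output illustrating the difference (the resource name primary is a placeholder):

# master_auth is a list attribute, so its fields must be reached
# through the first element with a .0 index.

# Does not work - addresses the list itself, not an element of it:
#   "${google_container_cluster.primary.master_auth.client_certificate}"

output "client_cert_pem" {
  # Works: index the first element, then base64-decode into PEM
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
}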

Thanks for the further explanation, @drzero42, that's helpful.

I did discuss this issue with a few other maintainers in the past weeks, and we more or less agreed that we could make the Kubernetes provider detect whether a base64-encoded string was passed and, if so, decode it automatically.

Does that sound like a good solution to you?

Yes, that sounds like a decent solution :)

I know this is super old, but couldn't base64decode be used? https://www.terraform.io/docs/configuration/interpolation.html#base64decode-string-

@radeksimko, it looks like https://github.com/hashicorp/terraform/issues/12393 is closed (although maybe not fixed). Could you advise on how to consolidate these to work in a single run of terraform?

@Qix- You are correct - I already mentioned that in the original issue :)
@achew22 As far as I know, #12393 hasn't been a problem for a while. For me it works in a single run to create a GKE cluster and have the kubernetes provider create some resources in it. This issue is more about convenience: currently you cannot just pass the client_certificate, client_key and cluster_ca_certificate directly; you need to use the base64decode() interpolation function on them. I think it would make sense for the google_container_cluster resource and the kubernetes provider to be aligned, so one can pass information directly to the other without having to manipulate it.

@drzero42 I think you're right. The issue I had is that the client_certificate has CN=client set in it, which resolves the identity as client, and that identity seems to not have any permissions in the cluster. The solution I found was to load a google_client_config and modify the provider to depend on the token instead of the certificate, per the docs. I ended up with something like:

data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file = false

  host = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token = "${data.google_client_config.default.access_token}"
  cluster_ca_certificate = "${base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
}

And now all is well. It should be noted, I believe, that this means RPCs into the Kubernetes cluster are made as a superuser. That probably isn't a concern, though, since the user running terraform can probably destroy and recreate the cluster anyway. Thanks for the response!

Hi all! Sorry for the long silence here.

Providers are no longer developed in this repository, so we're going to close this out to reflect that. If you are still seeing trouble using the Kubernetes provider with Google Container Engine, please open an issue in the Kubernetes provider repository.

The issue that Radek mentioned above is now consolidated into #4149, which is a core-side issue representing the idea of better supporting "multi-layer" Terraform configurations where results from one provider are used to configure another. That issue remains open to track that use-case.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
