Currently, I have to use somewhat of a hack in order to have Terraform create my Kubernetes deployments and services:
# A module that can create Kubernetes resources from YAML file descriptions.
variable "username" {
  description = "The Kubernetes username to use"
}

variable "password" {
  description = "The Kubernetes password to use"
}

variable "server" {
  description = "The address and port of the Kubernetes API server"
}

variable "configuration" {
  description = "The configuration that should be applied"
}

variable "cluster_ca_certificate" {}

resource "null_resource" "kubernetes_resource" {
  triggers {
    configuration = "${var.configuration}"
  }

  provisioner "local-exec" {
    command = "touch ${path.module}/kubeconfig"
  }

  provisioner "local-exec" {
    command = "echo '${var.cluster_ca_certificate}' > ${path.module}/ca.pem"
  }

  provisioner "local-exec" {
    command = "kubectl apply --kubeconfig=${path.module}/kubeconfig --server=${var.server} --certificate-authority=${path.module}/ca.pem --username=${var.username} --password=${var.password} -f - <<EOF\n${var.configuration}\nEOF"
  }
}
I use the above module when I need to create resources, e.g.:
module "kubernetes_nginx_deployment" {
source = "./kubernetes"
server = "${module.kubernetes_cluster.host}"
username = "${module.kubernetes_cluster.username}"
password = "${module.kubernetes_cluster.password}"
cluster_ca_certificate = "${module.kubernetes_cluster.cluster_ca_certificate}"
configuration = "${file("kubernetes/nginx-deployment.yaml")}"
}
This is of course far from perfect: it doesn't support modifying or destroying the resources and is generally brittle.
It would be great if there were either first-class support for Deployment and Service resources or generic support for arbitrary Kubernetes resources through YAML or JSON definitions.
Hi @dasch! I'm looking to emulate your workaround.
Do you have to add another local-exec provisioner in your module for every deployment you have? So if you add another deployment in your plan, do you have to go into your module and add another local provisioning step? And how do you get "${module.kubernetes_cluster.host}"? Where does that come from?
@RobbieMcKinstry I simply invoke the module multiple times, once for each resource. I store the Kubernetes config in YAML files and use file(...) to load them into Terraform – you could in theory use templates, also.
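For example, a templated manifest could be rendered with the template_file data source before being handed to the module – the file name and variables below are made up purely for illustration:

data "template_file" "nginx_deployment" {
  # Hypothetical template; ${image_tag} would be interpolated inside the YAML.
  template = "${file("kubernetes/nginx-deployment.yaml.tpl")}"

  vars {
    image_tag = "1.13"
  }
}

You'd then set configuration = "${data.template_file.nginx_deployment.rendered}" instead of the plain file() call.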
I've updated the snippet to make it work properly; I use the google_container_cluster resource to set up a Container Engine cluster that I can create the Kubernetes resources in.
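In case it helps, here is a rough sketch of what that cluster module can export, assuming it simply wraps a google_container_cluster resource – the names and values are only examples:

# Hypothetical contents of the kubernetes_cluster module.
variable "master_password" {}

resource "google_container_cluster" "primary" {
  name               = "example-cluster"
  zone               = "europe-west1-b"
  initial_node_count = 3

  master_auth {
    username = "admin"
    password = "${var.master_password}"
  }
}

output "host" {
  value = "${google_container_cluster.primary.endpoint}"
}

output "username" {
  value = "${google_container_cluster.primary.master_auth.0.username}"
}

output "password" {
  value = "${google_container_cluster.primary.master_auth.0.password}"
}

output "cluster_ca_certificate" {
  # The API returns the CA certificate base64-encoded, so decode it before
  # handing it to the kubectl wrapper module above.
  value = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}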
Thank you! :)
Just a quick FYI in the spirit of transparency - I plan to work on kubernetes_service as discussed in the linked PR. I do not plan to work on any resources which don't have a stable API yet (i.e. are not marked as v1) - which is Deployment's case.
https://kubernetes.io/docs/resources-reference/v1.6/#deployment-v1beta1-apps
I think that supporting Replication Controller should suffice until Deployment gets into stable, though. It's not the same thing, but it has a similar goal.
@radeksimko ah – I do want to set up Deployments from Terraform, but I guess I have to be patient, then. I have my kubectl-based hack in the meantime.
Hi! We took some inspiration and also took advantage of the new when = "destroy" option for provisioners:
resource "null_resource" "kubernetes_resource" {
triggers {
content = "${var.content}"
}
provisioner "local-exec" {
command = "kubectl apply --context=${var.cluster} -f - <<EOF\n${var.content}\nEOF"
}
provisioner "local-exec" {
command = "kubectl delete --context=${var.cluster} -f - <<EOF\n${var.content}\nEOF"
when = "destroy"
}
}
For this to work you have to have the context configured with the credentials in ~/.kube/config (you can probably specify a different path), and it handles both destroying the resources and modifying the YAML.
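Assuming the snippet above lives in a module that also declares variable "cluster" {} and variable "content" {}, invoking it looks roughly like this (the module path and context name are just examples):

module "kubernetes_nginx" {
  source  = "./kubernetes"
  cluster = "my-cluster-context"
  content = "${file("kubernetes/nginx-deployment.yaml")}"
}

Since content is part of the triggers, editing the YAML replaces the null_resource and re-runs kubectl apply, and destroying the module runs the kubectl delete provisioner.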
@jbarreneche in my snippet you don't need to configure the local kube config, which is a big plus when working with multiple environments – all params are passed directly to kubectl.
@dasch yes, but setting the password will leave it in the state. We provision the kube config file in a Kubernetes secret in the environment where we run Terraform (or you could put it there with a configuration manager), thus avoiding storing the password in the Terraform state file.
Anyway, the destroy is another useful addition :)
A quick update - kubernetes_service was just merged and will be available in the next release (0.9.6). Other resources including kubernetes_replication_controller will follow soon.
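Once 0.9.6 is out, a minimal service definition with the new resource should look roughly like this sketch (resource and label names are illustrative):

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx"
  }

  spec {
    selector {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}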
Deployments appear to be in the stable API under apps/v1 as of Kubernetes 1.9. Any plans to begin supporting Deployments in Terraform?
@radeksimko Any update on this? This would be extremely useful.
@austinkelleher Have a look at this fork: https://github.com/sl1pm4t/terraform-provider-kubernetes
@Phylu I've seen this fork and plan to try it out, but I'd rather see official support for this in Terraform than rely on a fork. This feature is no longer in beta and there is no reason why Terraform shouldn't support it.
+1 would be awesome to have this
+1 and a bump
+1
+1
+1
+1
Please do not continue to add +1.
It's considered poor behavior to comment +1 multiple times, flooding the maintainers with notifications. Consider using a reaction instead to voice your priorities.
Seeing as this is a closed issue related to pre-stabilized deployments, and there is currently an open issue for this feature request, I'm going to lock this conversation.
If anyone on the TF team feels they should reopen, be my guest :)