I would like to be able to use a dynamic name for the provider alias inside of a resource definition. For example:
provider "openstack" {
tenant_name = "dev"
auth_url = "http://myauthurl.dev:5000/v2.0"
alias = "internal"
}
provider "openstack" {
tenant_name = "my-tenant"
auth_url = "http://rackspace:5000/v2.0"
alias = "rackspace"
}
provider "openstack" {
tenant_name = "my-tenant"
auth_url = "http://hpcloud:5000/v2.0"
alias = "hpcloud"
}
resource "openstack_compute_instance_v2" "server" {
provider = "openstack.${var.hosting}"
}
I would then call terraform like: terraform plan -var hosting=rackspace
and it would use the openstack provider that is aliased as openstack.rackspace
This would allow me to easily toggle my single terraform config between multiple environments & providers.
This overlaps a bit with the more theoretical discussion in #1819.
There is some overlap, but this request is more about using the aliased providers, not about creating them. (Though I like the ideas so far in #1819.)
This will be very useful indeed.
For info, it might help (or you are probably already aware of this), but instead of passing the variable on the command line you can use a file containing your variables.
I use a variables file in the following Git repository: https://github.com/JamesDLD/terraform
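For example, a hypothetical hosting.tfvars file (the filename is just illustrative) could carry the value instead of a -var flag, and be passed with terraform plan -var-file=hosting.tfvars:

# hosting.tfvars
hosting = "rackspace"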
Is this issue fixed by https://github.com/hashicorp/terraform/pull/16379?
I suppose this is the first feature that I look for in every Terraform release CHANGELOG. Since 2015, there is still no solution. :(
Hitting this brick wall in 2020
Also waiting for it.
Is there some workaround that the 52 of us who plus-one'd 👍 this issue are missing?
Here is how I would do the thing in the original comment here using current Terraform language features:
variable "hosting" {
type = string
}
locals {
openstack_settings = tomap({
internal = {
tenant_name = "dev"
auth_url = "http://myauthurl.dev:5000/v2.0"
}
rackspace = {
tenant_name = "my-tenant"
auth_url = "http://rackspace:5000/v2.0"
}
hpcloud = {
tenant_name = "my-tenant"
auth_url = "http://hpcloud:5000/v2.0"
}
})
}
provider "openstack" {
tenant_name = local.openstack_settings[var.hosting].tenant_name
auth_url = local.openstack_settings[var.hosting].auth_url
}
resource "openstack_compute_instance_v2" "server" {
}
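With that in place, the invocation from the original comment still works unchanged: terraform plan -var hosting=rackspace. As an optional hardening step, and assuming Terraform 0.13 or later where custom variable validation is available, the variable block could also reject values that have no corresponding entry in the map. A hypothetical sketch:

variable "hosting" {
  type = string

  validation {
    # Reject any value that is not a key of local.openstack_settings.
    condition     = contains(["internal", "rackspace", "hpcloud"], var.hosting)
    error_message = "The hosting value must be one of: internal, rackspace, hpcloud."
  }
}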
@apparentlymart, that workaround doesn't work with my use case. I create multiple azure_kubernetes_cluster instances using for_each, then wish to use multiple kubernetes providers instantiated using certificates from the AKS instances to apply resources inside the clusters. A provider supporting for_each and a dynamic alias would do the trick. If module blocks supported for_each, I could create a workaround that way too. Alas, Terraform supports neither solution as of version 0.12.24.
The key design question that needs to be answered to enable any sort of dynamic use of provider configurations (whether it be via for_each inside the provider block, for_each on a module containing a provider block, or anything else) is how Terraform can deal with the situation where a provider configuration gets removed at the same time as the resource instances it is responsible for managing.
Using the most recent comment's use-case as an example, I think you're imagining something like this:
# This is a hypothetical example. It will not work in current versions of Terraform.
variable "clusters" {
  type = map(object({
    # (some suitable cluster arguments)
  }))
}

resource "azure_kubernetes_cluster" "example" {
  for_each = var.clusters

  # (arguments using each.value)
}

provider "kubernetes" {
  for_each = azure_kubernetes_cluster.example

  # (arguments using each.value from the cluster objects)
}

resource "kubernetes_pod" "example" {
  for_each = azure_kubernetes_cluster.example
  provider = provider.kubernetes[each.key]

  # ...
}
The above presents two significant challenges:

1. When _adding_ a new element to var.clusters with key "foo", Terraform must configure the provider.kubernetes["foo"] instance in order to _plan_ to create kubernetes_pod.example["foo"], but it can't do so because azure_kubernetes_cluster.example["foo"] isn't created yet. This is the problem that motivated what I proposed in #4149. Today, it'd require using -target='kubernetes_pod.example["foo"]' on the initial create to ensure that the cluster is created first.
2. When _removing_ element "bar" from var.clusters, Terraform needs to configure the provider.kubernetes["bar"] provider in order to plan and apply the destruction of kubernetes_pod.example["bar"]. However, with the configuration model as it exists today (where for_each works entirely from the configuration and not from the state) this would fail because provider.kubernetes["bar"]'s existence depends on azure_kubernetes_cluster.example["bar"]'s existence, which in turn depends on var.clusters["bar"] existing, and it doesn't anymore.

Both of these things seem solvable in principle, which is why this issue remains open rather than being closed as technically impossible, but at the same time they both involve some quite fundamental changes to how providers work in Terraform that will inevitably affect the behavior of other subsystems.
This issue remains unsolved not because the use-cases are not understood, but rather because there is no technical design for solving it that has enough detail to understand the full scope of changes required to meet those use-cases. The Terraform team can only work on a limited number of large initiatives at a time. I'm sorry that other things have been prioritized over this, but I do stand behind the prioritization decisions that our team has made.
In the meantime, I hope the example above helps _some_ of you who have problems like the one described in the opening comment of this issue where it is the configuration itself that is dynamic, rather than the _number_ of configurations. For those who have more complex systems where the _number_ of provider configurations is what is dynamic, my suggested workaround would be to split your configuration into two parts. Again using the previous comment as an example:
variable "clusters"
block and the single resource "azure_kubernetes_cluster"
that uses for_each = var.clusters
. This configuration will have only one default
workspace, and will create all of the EKS clusters.The second configuration contains a single provider "kubernetes"
and a single resource "kubernetes_pod"
and uses terraform.workspace
as an AKS cluster name, like this:
data "azurerm_kubernetes_cluster" "example" {
name = terraform.workspace
# ...
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.main.kube_config[0].host
# etc...
}
resource "kubernetes_pod" "example" {
# ...
}
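For completeness, a minimal sketch of what the first configuration might contain, reusing the names from the hypothetical example earlier in this thread:

variable "clusters" {
  type = map(object({
    # (some suitable cluster arguments)
  }))
}

resource "azure_kubernetes_cluster" "example" {
  for_each = var.clusters

  # (arguments using each.value)
}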
The workflow for adding a new cluster would then be:

1. Add the new element to var.clusters in the first configuration and run terraform apply to create the corresponding cluster.
2. Run terraform workspace new CLUSTERNAME to establish a new workspace for the cluster you just created, and then run terraform apply to do the Kubernetes-cluster-level configuration for it.

The workflow to remove an existing cluster would be:

1. Run terraform workspace select CLUSTERNAME to switch to the workspace corresponding to the cluster you want to destroy.
2. Run terraform destroy to deregister all of the Kubernetes objects from that cluster.
3. Run terraform workspace delete CLUSTERNAME.
4. Remove CLUSTERNAME from var.clusters and run terraform apply to destroy that particular AKS cluster.

I'm not suggesting this with the implication that it is an ideal or convenient solution, but rather as a potential path for those who have a similar problem today and are looking for a pragmatic way to solve it with Terraform's current featureset.
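Putting the add-cluster workflow above together, and assuming the two configurations live in hypothetical sibling directories named clusters/ and per-cluster/, the command sequence would look roughly like this:

# In clusters/, after adding the new entry to var.clusters:
terraform apply

# In per-cluster/:
terraform workspace new CLUSTERNAME
terraform apply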
Thanks, @apparentlymart, for that very clear and detailed explanation. You've hit on exactly the config that I was trying to use. I haven't played with workspaces yet, but the workaround that I had already settled on was moving the kubernetes provider and its dependencies into a child module. This gives me some module invocations that I need to keep in sync with var.clusters, but my new add/delete workflow doesn't seem much more complex than the one that you've proposed. My config looks like this now:
variable "clusters" {
type = map(object({
# (some suitable cluster arguments)
})
}
resource "azure_kubernetes_cluster" "example" {
for_each = var.clusters
# (arguments using each.value)
}
module "k8s-key1" {
source = "./k8s"
# (arguments using each.value from the key1 cluster object)
}
module "k8s-key2" {
source = "./k8s"
# (arguments using each.value from the key2 cluster object)
}
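A rough sketch of what the ./k8s child module might contain (the input variable names here are illustrative, not the actual ones):

# ./k8s/main.tf (hypothetical sketch)
variable "host" {
  type = string
}

variable "cluster_ca_certificate" {
  type = string
}

variable "client_certificate" {
  type = string
}

variable "client_key" {
  type = string
}

provider "kubernetes" {
  # Credentials passed in from the AKS cluster created in the root module.
  host                   = var.host
  cluster_ca_certificate = var.cluster_ca_certificate
  client_certificate     = var.client_certificate
  client_key             = var.client_key
}

resource "kubernetes_pod" "example" {
  # ...
}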
Looking at this again, I could have just moved everything into the child module, gotten rid of var.clusters, and maintained this as two module invocations. This makes me suspect that there is more, or maybe less, here than meets the eye:
Anyhow, given those factors, it seems to me that allowing providers to use loops and resources to use dynamically named providers shouldn't introduce any more problems than already exist in my multiple module invocation scenario. Maybe I'm missing some edge cases but, again, I think I can duplicate any such cases by invoking modules multiple times with the existing feature set.
I'm not sure I fully followed what you've been trying, @derekrprice, but if you have a provider "kubernetes" block inside your ./k8s module, then I think that if you remove one of those module blocks after the resource instances described inside it have been created, you will encounter problem number 2 from my previous comment:
- When _removing_ element "bar" from var.clusters, Terraform needs to configure the provider.kubernetes["bar"] provider in order to plan and apply the destruction of kubernetes_pod.example["bar"]. However, with the configuration model as it exists today (where for_each works entirely from the configuration and not from the state) this would fail because provider.kubernetes["bar"]'s existence depends on azure_kubernetes_cluster.example["bar"]'s existence, which in turn depends on var.clusters["bar"] existing, and it doesn't anymore.
The addressing syntax will be different in your scenario -- module.k8s-key1.provider.kubernetes instead of provider.kubernetes["bar"], for example -- but the same problem applies: there are instances in your state that belong to that provider configuration, but that provider configuration is no longer present in the configuration.
You don't need to use -target on create here (problem number 1 from my previous comment) because the kubernetes provider in particular contains a special workaround where it detects the incomplete configuration resulting from that situation and skips configuring itself in that case. A couple of other providers do that too, such as mysql and postgresql. This solution doesn't generalize to all providers because it means that the provider is effectively blocked from doing any API access during planning. For mysql and postgresql that's of no consequence, but for Kubernetes in particular I've heard that this workaround is currently blocking the provider from using Kubernetes API features to make dry-run requests in order to produce an accurate plan.
I'm currently focused on an entirely separate project so I can't go any deeper on design discussion for this right now. My intent here was just to answer the earlier question about whether there were any known workarounds; I hope the two workarounds I've offered here will be useful for at least some of you.
Could this make it into v0.14?