Terraform v0.11.2
+ provider.external v1.0.0
+ provider.null v1.0.0
The following resources are defined inside a module called consul.
module "A" {
source = "./region"
region = "RegionA"
environment = "${var.environment}"
project-name = "${var.project-name}"
number-of-servers = "${var.number-of-servers}"
}
module "T" {
source = "./region"
region = "RegionT"
environment = "${var.environment}"
project-name = "${var.project-name}"
number-of-servers = "${var.number-of-servers}"
}
locals {
  both-regions-server-ips   = "${concat(module.A.server-ips, module.T.server-ips)}"
  both-regions-server-fqdns = "${concat(module.A.server-fqdns, module.T.server-fqdns)}"
}
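For that concat() to work, the ./region module must expose list outputs. A hypothetical sketch of what those could look like (the actual module is not shown in this issue, and the aws_instance resource name is assumed):

# Inside ./region (hypothetical; the real module is not part of this issue)
output "server-ips" {
  value = ["${aws_instance.server.*.private_ip}"]
}

output "server-fqdns" {
  value = ["${aws_instance.server.*.private_dns}"]
}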
resource "null_resource" "provision-both-clusters" {
count = "${var.number-of-servers * 2}" # * 2 Because we have RegionA and RegionT
connection {
host = "${element(local.both-regions-server-ips, count.index)}"
user = "${var.ssh-username}"
private_key = "${file(".${local.keypair}.pem")}"
}
provisioner "chef" {
# Omitted for clarity
}
}
data "external" "vault-tokens" {
depends_on = ["null_resource.provision-both-clusters"]
program = ["${path.module}/create-vault-tokens.sh"]
query = {
consulAddress = "https://${element(module.tagus.server-ips, 0)}:8500"
masterToken = "${var.master-token}"
}
}
output "vault-backend-token" {
value = "${data.external.vault-tokens.result.backend-token}"
sensitive = true
}
output "vault-consul-auth-backend-token" {
value = "${data.external.vault-tokens.result.consul-auth-backend-token}"
sensitive = true
}
The output values should only be computed after provision-both-clusters has run.
Error: Error running plan: 2 error(s) occurred:
* module.consul.output.vault-consul-auth-backend-token: Resource 'data.external.vault-tokens' does not have attribute 'result.consul-auth-backend-token' for variable 'data.external.vault-tokens.result.consul-auth-backend-token'
* module.consul.output.vault-backend-token: Resource 'data.external.vault-tokens' does not have attribute 'result.backend-token' for variable 'data.external.vault-tokens.result.backend-token'
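For context, since the errors reference module.consul.output.*, the module is instantiated from a root configuration along these lines (a hypothetical sketch reconstructed from the variables used above):

module "consul" {
  source            = "./consul"
  environment       = "${var.environment}"
  project-name      = "${var.project-name}"
  number-of-servers = "${var.number-of-servers}"
  master-token      = "${var.master-token}"
}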
The same behavior happens even if I define vault-tokens to be:
data "external" "vault-tokens" {
program = ["${path.module}/create-vault-tokens.sh"]
query = {
consulAddress = "https://${element(module.tagus.server-ips, 0)}:8500"
masterToken = "${var.master-token}"
dummy = "${null_resource.provision-both-clusters.count}"
}
}
As described in #10603.
If I remove the output variables, Terraform resolves the graph correctly and only runs vault-tokens after the null_resource.
If I set a depends_on = ["null_resource.provision-both-clusters"] on both output variables, I still get the error.
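In other words, the attempt looked roughly like this (a sketch; only one of the two outputs shown):

output "vault-backend-token" {
  depends_on = ["null_resource.provision-both-clusters"]
  value      = "${data.external.vault-tokens.result.backend-token}"
  sensitive  = true
}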
Steps to reproduce:

terraform init
terraform plan

If I change the output variables to be:
output "vault-backend-token" {
value = "${data.external.vault-tokens.result["backend-token"]}"
sensitive = true
}
output "vault-consul-auth-backend-token" {
value = "${data.external.vault-tokens.result["consul-auth-backend-token"]}"
sensitive = true
}
It works.
If I change to:
output "vault-backend-token" {
value = "${data.external.vault-tokens.result.backend-token}"
sensitive = true
}
output "vault-consul-auth-backend-token" {
value = "${data.external.vault-tokens.result.consul-auth-backend-token}"
sensitive = true
}
It does not work, which is unexpected: result is a map, and in the 0.11 interpolation language dot and bracket access on a map are supposed to be equivalent.
FWIW, I can reproduce this on v0.11.7 with a bash script as an external data source. Changing from dot to square-bracket notation does indeed make the error go away.
I just reproduced this issue with Terraform v0.11.10. Thanks for the hint with the brackets, @Lasering. That worked as a workaround.
To work around this, you can also use lookup():
output "vault-backend-token" {
value = "${lookup(data.external.vault-tokens.result, "backend_token")}"
sensitive = true
}
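And likewise for the other output:

output "vault-consul-auth-backend-token" {
  value     = "${lookup(data.external.vault-tokens.result, "consul-auth-backend-token")}"
  sensitive = true
}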
https://github.com/terraform-providers/terraform-provider-external/issues/4
Hi,
I have an issue along these lines where I am using an external provider (a bash script) to create an SSL certificate with the common name based on the dns_name attribute of an aws_alb:
data "external" "ssl_webcert_generator" {
program = ["bash", "${path.module}/externalproviders/generatessl.sh"]
query = {
cn = "${aws_alb.ecs-load-balancer.dns_name}"
pass = "${var.sslpass}"
}
}
resource "aws_iam_server_certificate" "lb_cert" {
depends_on = ["data.external.ssl_webcert_generator"]
name = "lb_cert"
certificate_body = "${data.external.ssl_webcert_generator.result.public_cert_contents}"
private_key = "${data.external.ssl_webcert_generator.result.private_cert_contents}"
}
This gives an error on terraform plan:
aws_iam_server_certificate.lb_cert: Resource 'data.external.ssl_webcert_generator' does not have attribute 'result.public_cert_contents' for variable 'data.external.ssl_webcert_generator.result.public_cert_contents'
If I comment the aws_iam_server_certificate resource out, the stack can be launched.
Once the stack is launched, the aws_iam_server_certificate resource can be uncommented, the config can be reapplied, and the stack updates with this resource.
Any ideas on how I can get the stack to launch in one step without this workaround?
It should work like this:
resource "aws_iam_server_certificate" "lb_cert" {
depends_on = ["data.external.ssl_webcert_generator"]
name = "lb_cert"
certificate_body = "${data.external.ssl_webcert_generator.result["public_cert_contents"]}"
private_key = "${data.external.ssl_webcert_generator.result["private_cert_contents"]}"
}
@akumadare why not use https://www.terraform.io/docs/providers/tls/r/locally_signed_cert.html?
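For example, a self-signed variant only takes a few resources (a sketch using the tls provider; swap in tls_cert_request plus tls_locally_signed_cert if the certificate must be signed by your own CA):

resource "tls_private_key" "lb" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "lb" {
  key_algorithm   = "${tls_private_key.lb.algorithm}"
  private_key_pem = "${tls_private_key.lb.private_key_pem}"

  # Common name taken from the ALB, as in the external-script version above
  subject {
    common_name = "${aws_alb.ecs-load-balancer.dns_name}"
  }

  validity_period_hours = 8760
  allowed_uses          = ["key_encipherment", "digital_signature", "server_auth"]
}

resource "aws_iam_server_certificate" "lb_cert" {
  name             = "lb_cert"
  certificate_body = "${tls_self_signed_cert.lb.cert_pem}"
  private_key      = "${tls_private_key.lb.private_key_pem}"
}

This also removes the external data source from the graph entirely, sidestepping the result attribute problem.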
Thanks both. I hadn't seen the support for locally signed certs in the tls provider - it would make much more sense to use this!
As an FYI, what @Lasering said in https://github.com/hashicorp/terraform/issues/17173#issuecomment-360119040 happens for EKS auth too when using modules.
Until https://github.com/terraform-providers/terraform-provider-aws/pull/4904 is merged, the workaround proposed in https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161 is needed in order to use the kubernetes provider with an AWS EKS cluster.
I initially had everything in a single folder, without any modules:
data "external" "aws_iam_authenticator" {
program = ["sh", "-c", "aws-iam-authenticator token -i ${var.cluster_name} | jq -r -c .status"]
}
provider "kubernetes" {
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.external.aws_iam_authenticator.result.token}"
load_config_file = false
}
As soon as I moved to a module, I had to switch to ${data.external.aws_iam_authenticator.result["token"]} instead of ${data.external.aws_iam_authenticator.result.token}; otherwise I would get a Resource 'data.external.aws_iam_authenticator' does not have attribute 'result.token' for variable 'data.external.aws_iam_authenticator.result.token' error:
data "external" "aws_iam_authenticator" {
program = ["sh", "-c", "aws-iam-authenticator token -i ${var.cluster_name} | jq -r -c .status"]
}
data "aws_region" "current" {}
provider "kubernetes" {
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
# This can actually be
# token = "${data.external.aws_iam_authenticator.result.token}"
# but that fails due to... nobody knows why
# see https://github.com/hashicorp/terraform/issues/17173
token = "${data.external.aws_iam_authenticator.result["token"]}"
load_config_file = false
}
This is... weird.
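Once the aws_eks_cluster_auth data source from terraform-providers/terraform-provider-aws#4904 is available, the external workaround can be dropped entirely. A sketch, assuming the data source name and token attribute from that PR:

data "aws_eks_cluster_auth" "this" {
  name = "${var.cluster_name}"
}

provider "kubernetes" {
  host                   = "${var.cluster_endpoint}"
  cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
  token                  = "${data.aws_eks_cluster_auth.this.token}"
  load_config_file       = false
}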
I am getting a strikingly similar issue to this: I have multiple null resources that run as a linear chain of dependencies, but they all seem to ignore the dependencies and just run at once (v0.11.13).
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.