Terraform v0.7.3
resource "aws_elasticache_cluster" "memcached" {
  cluster_id      = "foo"
  engine          = "memcached"
  num_cache_nodes = 3
  node_type       = "cache.t2.micro"
}

output "node_addresses" {
  value = ["${aws_elasticache_cluster.memcached.cache_nodes.*.address}"]
}
Expected: I get a list of 3 node addresses as an output.
Actual: after running `terraform apply`, I get nothing.
If I change the output to a hard-coded item, it works correctly:
output "node_addresses" {
  value = "${aws_elasticache_cluster.memcached.cache_nodes.0.address}"
}
But this obviously doesn't work if the number of cache nodes is controlled by a variable. I tried various workarounds, but it looks like `cache_nodes` isn't a proper list: the typical list operations, such as `join`, `split`, and `element`, do not work on it. As a result, I can't find any way to dynamically return a list of node addresses or IDs, or to pass them on to another module (e.g. a module that adds CloudWatch alarms to each node).
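For context, this is the kind of consuming pattern that the bug blocks. It's only a sketch: the alarm resource body and the `var.node_count` variable are hypothetical, and it assumes `cache_nodes.*.id` eventually behaves as a real list that `element()` can index:

```hcl
# Hypothetical consumer: one CloudWatch alarm per cache node.
# Assumes cache_nodes.*.id works as a proper list (the bug reported here).
resource "aws_cloudwatch_metric_alarm" "node_cpu" {
  count               = "${var.node_count}"
  alarm_name          = "memcached-cpu-${count.index}"
  namespace           = "AWS/ElastiCache"
  metric_name         = "CPUUtilization"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 75
  evaluation_periods  = 2
  period              = 300
  statistic           = "Average"

  dimensions {
    CacheClusterId = "${aws_elasticache_cluster.memcached.cluster_id}"
    CacheNodeId    = "${element(aws_elasticache_cluster.memcached.cache_nodes.*.id, count.index)}"
  }
}
```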
I am seeing this too while trying to get a list of addresses: `${join(",", aws_elasticache_cluster.main.cache_nodes.*.address)}` appears to be empty.
+1 we also had this problem.
The current workaround for us is a template file, but it relies on the underlying format of the AWS configuration endpoint in conjunction with the individual node endpoints, which is pretty hacky:
resource "aws_elasticache_cluster" "main" {
  cluster_id = "${var.cluster_id}"
  engine     = "${var.engine}"
  ...
}

resource "template_file" "hosts" {
  count    = "${var.node_count}"
  template = "${file("${path.module}/template.tpl")}"

  vars {
    endpoint = "${aws_elasticache_cluster.main.configuration_endpoint}"
    count    = "${format("%04d", count.index + 1)}"
  }
}
/**
* Outputs.
*/
output "node_endpoints" { value = "${join(",", template_file.hosts.*.rendered)}" }
output "endpoint" { value = "${aws_elasticache_cluster.main.configuration_endpoint}" }
And then the template file:
${replace("${endpoint}", "cfg", "${count}")}
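To illustrate how the template derives each node endpoint (the hostname below is made up; actual names depend on your cluster), the `replace` swaps the `cfg` segment of the configuration endpoint for the zero-padded node number:

```hcl
# Hypothetical example values:
# endpoint = "mycluster.abc123.cfg.use1.cache.amazonaws.com:11211"
# count    = "0001"
# rendered = "mycluster.abc123.0001.use1.cache.amazonaws.com:11211"
```

This is why the workaround is fragile: it depends on AWS keeping this naming convention for node endpoints.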
It looks like this is a dup of #9080, or possibly even #8695, which was just fixed.
Woot, thank you! 🎉
@mitchellh we're still seeing the problem occur, should we re-open?
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.