Terraform: Depends_on for module

Created on 11 Mar 2015  ·  133 Comments  ·  Source: hashicorp/terraform

Possible workarounds

For module to module dependencies, this workaround by @phinze may help.

Original problem

This issue was prompted by this question on Google Groups.

Terraform version: Terraform v0.3.7

I have two Terraform modules for creating a DigitalOcean VM and DNS records that are kept purposely modular so they can be reused by others in my organisation.

I want to add a series of provisioners using local-exec after a VM has been created and DNS records made.

Attempted solution

I tried adding a provisioner directly to my terraform file (i.e. not in a resource) which gave an error.

I then tried using the null_resource which worked but was executed at the wrong time as it didn't know to wait for the other modules to execute first.

I then tried adding a depends_on attribute to the null resource using a reference to a module but this doesn't seem to be supported using this syntax:

depends_on = ["module.module_name"]

Expected result

Either a way for a resource to depend on a module as a dependency or a way to "inject" (for lack of a better word) some provisioners for a resource into a module without having to make a custom version of that module (I realise that might be a separate issue but it would solve my original problem).

Terraform config used

# Terraform definition file - this file is used to describe the required infrastructure for this project.

# Digital Ocean provider configuration

provider "digitalocean" {
    token = "${var.digital_ocean_token}"
}


# Resources

# 'whoosh-dev-web1' resource

# VM

module "whoosh-dev-web1-droplet" {
    source = "github.com/antarctica/terraform-module-digital-ocean-droplet?ref=v1.0.0"
    hostname = "whoosh-dev-web1"
    ssh_fingerprint = "${var.ssh_fingerprint}"
}

# DNS records (public, private and default [which is an APEX record and points to public])

module "whoosh-dev-web1-records" {
    source = "github.com/antarctica/terraform-module-digital-ocean-records?ref=v0.1.1"
    hostname = "whoosh-dev-web1"
    machine_interface_ipv4_public = "${module.whoosh-dev-web1-droplet.ip_v4_address_public}"
    machine_interface_ipv4_private = "${module.whoosh-dev-web1-droplet.ip_v4_address_private}"
}


# Provisioning (using a fake resource as provisioners can't be first class objects)

# Note: The "null_resource" is an undocumented feature and should not be relied upon.
# See https://github.com/hashicorp/terraform/issues/580 for more information.

resource "null_resource" "provisioning" {

    depends_on = ["module.whoosh-dev-web1-records"]

    # This replicates the provisioning steps performed by Vagrant
    provisioner "local-exec" {
        command = "ansible-playbook -i provisioning/development provisioning/bootstrap-digitalocean.yml"
    }
}

Labels: core, enhancement, thinking

Most helpful comment

Just wanted to mention that while we don't yet support whole-module dependencies, there's nothing stopping you from wiring an output of one module into an input of another, which will effectively draw a dependency between the appropriate resources.

I'm not saying this necessarily will solve all use cases, but when I was trying to figure out why I haven't bumped into the need for depends_on = ['module.foo'], I realized that this is what I tend to do in my config.

All 133 comments

+1 I came here to ask for this very same thing

:+1: It's not just about adding depends_on to the module DSL, but also fixing the existing implementation of depends_on on raw resources so it accepts modules as dependencies.

I'm not sure if @felnne covered this, as he doesn't have it in his example, but it would also be awesome if modules could depend on other modules:

module "whoosh-dev-web1-records" {
    depends_on = ["module.module_name"]
    source = "github.com/antarctica/terraform-module-digital-ocean-records?ref=v0.1.1"
    hostname = "whoosh-dev-web1"
    machine_interface_ipv4_public = "${module.whoosh-dev-web1-droplet.ip_v4_address_public}"
    machine_interface_ipv4_private = "${module.whoosh-dev-web1-droplet.ip_v4_address_private}"
}

This way, we could bring up infrastructure in the correct order. At the moment, if something like consul depends on, say, DNS, we can only have dependencies within a module itself. This way we can better ensure that services come up in the right order.

+1 agree. Need to have modules that can be re-used atomically with parents connecting the dependency. This modularity enables easier testing, isolation, understandability... all the benefits that code gets from having module packages.

For example (with my best text art):

  # module folders
  ./consul_project/module-a/             # ability to reference output vars of module-common
  ./consul_project/module-b/             # ability to reference output vars of module-common
  ./consul_project/module-common/        # common to both a and b

  # parent
  ./consul_project/deploy-a-only.tf      # has module definitions for both module-a and module-common
  ./consul_project/deploy-b-only.tf      # likewise, but for module-b
  ./consul_project/deploy-all.tf         # defines all 3 modules

  % terraform plan deploy-b-only

+1

I just got really excited thinking depends_on would help me find a workaround to my cyclical dependency issue in #1637.. but then I remembered this issue. -_-

What about the other way around? Having a module call be dependent on some other resource.

:+1: just ran into this - have a module which creates an ASG, and I need it to depend on the user-data template.

Just wanted to mention that while we don't yet support whole-module dependencies, there's nothing stopping you from wiring an output of one module into an input of another, which will effectively draw a dependency between the appropriate resources.

I'm not saying this necessarily will solve all use cases, but when I was trying to figure out why I haven't bumped into the need for depends_on = ['module.foo'], I realized that this is what I tend to do in my config.
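
For illustration, a minimal sketch of that wiring (the module names and the vpc_id output are invented for this example): because module "app" consumes an output of module "network", the resources in "app" that use that value wait for the resource in "network" that produces it.

module "network" {
    source = "./network"
}

module "app" {
    source = "./app"

    # Consuming an output of "network" draws the dependency automatically
    vpc_id = "${module.network.vpc_id}"
}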

:+1:

Did anyone find a workaround?
I packaged up a bunch of common instance bootstrap stuff in a module to reduce repetition, but that module has some params that use interpolated values. Ideally, the module instance would depend on all of the interpolated items. In lieu of that, manual depends_on would work.

module "notifications" {
  source = "../pm_instance"
  ...
  attributes=<<EOF
"notifications":{
    "sens_queue": "${aws_sqs_queue.notifications.arn}",
}
EOF
}

Please add this feature, so there is a clear way to use output variables of one module in another one.
In my case I wanted to create a cluster of Docker containers on DigitalOcean, so I have one module to create N servers; in this module I was forced to use the "null_resource" hack for provisioning, because I need to know the IP of a resource to generate certificates so the Docker daemon can be used remotely. Another module is used to start containers on the created servers, so it obviously also requires the IPs of the servers.
I ended up applying the first module alone, then adding the second module to the config and running terraform apply again.
So when I tried terraform destroy, of course everything crashed :(
I think this feature would be very useful to create complex architectures.

crash log: https://gist.github.com/youanswer/8ebdcd81aea9edc91f88
my structure: https://gist.github.com/youanswer/bc1ca37773df968038a8

+1

:+1:

I forgot about modules not supporting depends_on, and thought I could use this as a way to work around #2939.

+1

+1

I'd just like to throw out another example of where this would be nice:

We have a services module that creates bastions and NATs in a VPC. We lock down port 22 so that only the bastion hosts can be accessed from the internet, but in our bastion module we open port 22 ingress over the entire VPC for connections coming from the bastion hosts.

Our NAT module takes as input the bastion host IP (bastion module output) for chef provisioning connections in an aws_instance, but there's no dependency between the NAT aws_instance resources and the security group rule that allows port 22 ingress from the bastions, so terraform will try to build the NAT instances before creating the SG rule that allows us to establish an SSH connection through the bastion.

Hope that wasn't too rambly. In theory I could probably make the bastion module's output format the CIDR blocks from the aws_security_group_rule, but having whole module dependencies would be far nicer.

+1

I've got a situation where one module uploads a number of artifacts to s3, while another module starts up instances that expect the artifacts to exist on s3 and pull them down during boot. As of now, with two discrete modules, there doesn't appear to be a way to tell terraform not to start the instance module until the s3 module has completed its uploads.

+1

@roboll, you might benefit from the -target attr for the apply and plan subcommands.

Yup. Trying to create an IAM Instance Profile for a machine, and then trying to create my aws instance with a custom terraform puppet aws instance module we've created. Can't attach an IAM Instance Profile to it because it depends_on the IAM Instance Profile being created in the root file. Since there is no way to specify depends_on via an input var for the module I am stuck.
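
For cases like this, the output-to-input wiring mentioned elsewhere in this thread would look roughly like the sketch below (names are invented, and this assumes the module exposes an input for the profile): passing the profile's name into the module lets the instance inside pick up the dependency.

resource "aws_iam_instance_profile" "web" {
    name  = "web-profile"
    roles = ["${aws_iam_role.web.name}"]  # role assumed to be defined elsewhere
}

module "puppet_instance" {
    source = "./puppet-instance"

    # The instance inside the module that uses this value will wait for the profile
    iam_instance_profile = "${aws_iam_instance_profile.web.name}"
}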

+1

+1

+1

+1 hit the same thing

+1

+1 bitten by the same

+1

Yes please!

+1

+1

+1

+1

+1 here. My use case: a "network" module which creates an AWS VPC and a VPC-limited IAM account with an access key, which is used by my "infra" module, which creates the EC2 instances, security groups, etc.

The "infra" module should wait for "network" module since "infra" is using "network" outputs.

module "network" {
        source = "./network"
        aws_account_number = "${var.aws_master_account_number}"
        aws_access_key = "${var.aws_master_access_key}"
        aws_secret_key = "${var.aws_master_secret_key}"
        env = "infra"
        network = "10.0.0.0/16"
        aws_region = "us-east-1"
}

module "infra" {
        source = "./infra"
        aws_access_key = "${module.network.vpc_access_key}"
        aws_secret_key = "${module.network.vpc_secret_key}"
        public_key = "${file(var.public_key)}"
        private_key = "${file(var.private_key)}"
        network = "${cidrsubnet(module.network.vpc_network, 8, 1)}"
        vpc = "${module.network.vpc}"
        profile = "${module.network.vpc_profile}"
        aws_region = "${module.network.vpc_region}"
        aws_ami = "${lookup(var.aws_amis, module.network.vpc_region)}"
}

++ 1

@theredcat I thought your example would implicitly create the dependency and you shouldn't have any issues. Are you saying that it isn't working?

My understanding is that by using module.network.<output>, Terraform will create a dependency between the modules and you won't need to declare an explicit depends_on

+1

I'm trying to break apart a cluster plan and one module needs to produce its outputs before the next module can fire -- right now I cannot achieve this due to the lack of depends_on support in modules

+1

I've created a security group within a module and provided the id via an output in that module. After creating the module TF shows that the additional rules should be applied. After those are applied TF thinks they need to be removed. I think this is a good example of why this would be useful.

Consider the module:

# modules/cluster/main.tf
variable "vpc_name" {}
variable "vpc_id" {}
variable "subnet_id" {}
variable "sg_name" {}

output "security_group_id" {
  value = "${aws_security_group.cluster.id}"
}

# Create a security group for the cluster
resource "aws_security_group" "cluster" {
    name = "${var.sg_name}"
    description = "Allow SSH from world and internal traffic"
    vpc_id = "${var.vpc_id}"

    # Allow inbound traffic from all sources for SSH
    # TODO We should lock this down
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]  // Anywhere
    }

    # Allow all internal traffic
    ingress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = [
            "10.1.0.0/16"  # TODO This should be a var
        ]
    }

    # Allow outbound traffic to all destinations / all protocols
    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]  // Anywhere
    }

    tags {
        Name = "${var.vpc_name}-sg"
        TF = "yes"
    }
}

And the use of that module and adding 2 additional rules:

# env/sanbox-1/main.tf
module "sandbox-cluster-1" {
    source = "../../modules/cluster"
    vpc_id = "${module.sandbox-vpc.vpc_id}"
    subnet_id = "${module.sandbox-vpc.cluster_subnet_id}"
    vpc_name = "sandbox-sciops-caching-vpc"
    sg_name = "sandbox-sciops-caching-sg"
    ami = "ami-d440a6e7" # centos us west
}

# Add rules for web traffic
resource "aws_security_group_rule" "allow-http" {
    type = "ingress"
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]

    security_group_id = "${module.sandbox-cluster-1.security_group_id}"
}

resource "aws_security_group_rule" "allow-https" {
    type = "ingress"
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]

    security_group_id = "${module.sandbox-cluster-1.security_group_id}"
}

I run apply twice and the world looks correct. If I run terraform plan again I get the following output (undoing the 2 additional rules):

~ module.sandbox-cluster-1.aws_security_group.cluster
    ingress.#:                            "4" => "2"
    ingress.2214680975.cidr_blocks.#:     "1" => "0"
    ingress.2214680975.cidr_blocks.0:     "0.0.0.0/0" => ""
    ingress.2214680975.from_port:         "80" => "0"
    ingress.2214680975.protocol:          "tcp" => ""
    ingress.2214680975.security_groups.#: "0" => "0"
    ingress.2214680975.self:              "0" => "0"
    ingress.2214680975.to_port:           "80" => "0"
    ingress.2541437006.cidr_blocks.#:     "1" => "1"
    ingress.2541437006.cidr_blocks.0:     "0.0.0.0/0" => "0.0.0.0/0"
    ingress.2541437006.from_port:         "22" => "22"
    ingress.2541437006.protocol:          "tcp" => "tcp"
    ingress.2541437006.security_groups.#: "0" => "0"
    ingress.2541437006.self:              "0" => "0"
    ingress.2541437006.to_port:           "22" => "22"
    ingress.2617001939.cidr_blocks.#:     "1" => "0"
    ingress.2617001939.cidr_blocks.0:     "0.0.0.0/0" => ""
    ingress.2617001939.from_port:         "443" => "0"
    ingress.2617001939.protocol:          "tcp" => ""
    ingress.2617001939.security_groups.#: "0" => "0"
    ingress.2617001939.self:              "0" => "0"
    ingress.2617001939.to_port:           "443" => "0"
    ingress.714645596.cidr_blocks.#:      "1" => "1"
    ingress.714645596.cidr_blocks.0:      "10.1.0.0/16" => "10.1.0.0/16"
    ingress.714645596.from_port:          "0" => "0"
    ingress.714645596.protocol:           "-1" => "-1"
    ingress.714645596.security_groups.#:  "0" => "0"
    ingress.714645596.self:               "0" => "0"
    ingress.714645596.to_port:            "0" => "0"

If this isn't relevant here maybe I need to open a new issue proposing that TF resolve all SG rules before determining what changes should be made?

@leetrout the issue you have is covered by a special note at the top of the documentation for the security group resource (see https://www.terraform.io/docs/providers/aws/r/security_group.html). Specifically, you can have either inline SG rules, or make use of aws_security_group_rule, but not both.

What you should do to resolve your issue currently is to use aws_security_group_rule resources instead of having inline rules in your module. The general rule of thumb is to do so if you're ever "exporting" a security group from a module.
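
As a rough sketch of that advice (resource and variable names assumed, not the poster's actual module), the module would drop its inline ingress/egress blocks and declare each rule as a separate resource, so callers can safely attach additional aws_security_group_rule resources to the exported group:

# modules/cluster/main.tf (sketch)
resource "aws_security_group" "cluster" {
    name = "${var.sg_name}"
    description = "Allow SSH from world and internal traffic"
    vpc_id = "${var.vpc_id}"
}

# SSH from anywhere, as a standalone rule instead of an inline ingress block
resource "aws_security_group_rule" "ssh" {
    type = "ingress"
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    security_group_id = "${aws_security_group.cluster.id}"
}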

LOL that's in a nice big important box too. I think I totally did not grok that (clearly). Makes sense to use the explicit rule resources in my module. Thanks for the quick reply, too.

Derp. :flushed:

My use case should be supported OOTB but oddly, it isn't (or it's a different bug I'm not seeing).

module "prepare_some_package" {
    source = "..."
    vars { path = "../artifacts" }
}

module "upload_that_package" {
    source = "..."
    vars { path = "${module.prepare_some_package.path}" }
}

Since upload_that_package interpolates an output of prepare_some_package, one would think that Terraform would know about the order, but it doesn't.

First run result:

Error applying plan:
1 error(s) occurred:
* aws_lambda_function.function: open ../artifacts/code-package.zip: The system cannot find the file specified.

Second run works since the file is already on disk.

@johnrengelman Indeed, this is not working. The graph shows two distinct clusters, one for each module.

+1

+1

+1; just discovered that i cannot have a module as a dependency. :(

:+1:

:+1:

This would be tremendously useful!

+1

Use-case:

  • A module to create network infrastructure (VPC, Subnets, Security Groups, ...)
  • Other modules to define application related resources that depend on the network infrastructure

comment-105613781 (using input/output vars for an indirect dependency) does not seem to solve this. For example, I have a module with an output var 'file':

module "get_remote_file" {
    source = "../tf_get_remote_file"
    url = "${var.saml_metadata_url}"
    file_name = "idp-metadata.xml"
}
resource "aws_iam_saml_provider" "cas" {
    name = "${var.idp_name}"
    saml_metadata_document = "${file("${module.get_remote_file.file}")}"
}

When I try to tf plan, I get an error that the file does not exist:

Error running plan: 1 error(s) occurred:
file: open idp-metadata.xml: no such file or directory in:

${file("${module.get_remote_file.file}")}

But if I run 'get_remote_file' by itself it downloads the file just fine - the module and the resource are being run in parallel and the implied dependency is not present.

+1

+1

@tomdavidson So I'm guessing that module outputs the var before it executes the get on the url... which is a different problem - whereby the outputs are generated almost immediately instead of waiting for the module or plan to compute fully (#5799)

You should probably wrap the file get in a null_resource (resource "null_resource" "get_file"), use module redirection to make an output of the downloaded file name, and then use your "cas" resource with depends_on = ["null_resource.get_file"]. I made tf_filemodule to solve the file resourcing issues for created files, but anything that depends on them must depend on the origin resource, since there's no module dependency that would make your method work.
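
A rough sketch of the shape of that suggestion (the resource names and the download command are assumptions, not the actual tf_filemodule):

resource "null_resource" "get_file" {
    provisioner "local-exec" {
        command = "curl -sSo idp-metadata.xml '${var.saml_metadata_url}'"
    }
}

resource "aws_iam_saml_provider" "cas" {
    name = "${var.idp_name}"
    saml_metadata_document = "${file("idp-metadata.xml")}"

    # Explicit dependency on the resource that downloads the file
    depends_on = ["null_resource.get_file"]
}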

Allowing a module in depends_on would solve a lot of problems.

+1 (module dependency) - ran into this yesterday

OMG +1

+1 this is painful working with modules and no dependency capability.

FYI - HC is pretty aware of this issue; last I was told, TF 0.7 might have this depends_on feature for modules. Here's hoping.

I was able to find a workaround for the problem of dependent modules (one module needs to completely finish before the other begins).
It is kind of a dirty solution but should work in the absence of a real one.

To show the solution I made a small test with 2 instances of a module:
test.tf file:

variable "aws_access_key" {}
variable "aws_secret_key" {}

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "eu-central-1"
}

module "test1" {
  source = "./testmodule/"
  ami = "ami-98043785"
  type = "t2.small"
  admin_key_id = "test-admin-key"
  sg = "sg-f9ebf190"
  subnet = "subnet-96eac7ff"
  name = "test1"
  depends_id = ""
}

module "test2" {
  source = "./testmodule/"
  ami = "ami-98043785"
  type = "t2.small"
  admin_key_id = "test-admin-key"
  sg = "sg-f9ebf190"
  subnet = "subnet-96eac7ff"
  name = "test2"
  depends_id = "${module.test1.depends_id}"
}

testmodule/testmodule.tf file:

variable "ami" {}
variable "type" {}
variable "admin_key_id" {}
variable "sg" {}
variable "subnet" {}
variable "name" {}
variable "depends_id" {}

# Create instances
resource "aws_instance" "instance" {
    ami = "${var.ami}"
    instance_type = "${var.type}"
    key_name = "${var.admin_key_id}"
    security_groups = ["${var.sg}"]
    subnet_id = "${var.subnet}"
    tags { Name = "${var.name}"
           Terraform = true
           Depends_id = "${var.depends_id}" }
}

resource "null_resource" "dummy_dependency" {
  depends_on = ["aws_instance.instance"]
}

output "depends_id" { value = "${null_resource.dummy_dependency.id}" }

Inside the module the null_resource is set up as the "last" resource using an explicit dependency. You can add whatever intermodule dependencies to it, making this possible (if more resources are involved). By outputting the id of the null_resource and using it, for example, in a tag of the next module call, I create an implicit dependency.

By adding other provisioners etc into the module and adding explicit dependencies between them it is possible to guarantee all resources are created/executed in the correct time.

Plan of my test project:

KristjanEliass-MacBook-Pro:terraform_testing kristjan$ terraform plan
Refreshing Terraform state prior to plan...


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ module.test1.aws_instance.instance
    ami:                        "" => "ami-98043785"
    availability_zone:          "" => "<computed>"
    ebs_block_device.#:         "" => "<computed>"
    ephemeral_block_device.#:   "" => "<computed>"
    instance_state:             "" => "<computed>"
    instance_type:              "" => "t2.small"
    key_name:                   "" => "test-admin-key"
    placement_group:            "" => "<computed>"
    private_dns:                "" => "<computed>"
    private_ip:                 "" => "<computed>"
    public_dns:                 "" => "<computed>"
    public_ip:                  "" => "<computed>"
    root_block_device.#:        "" => "<computed>"
    security_groups.#:          "" => "1"
    security_groups.3486488308: "" => "sg-f9ebf190"
    source_dest_check:          "" => "1"
    subnet_id:                  "" => "subnet-96eac7ff"
    tags.#:                     "" => "3"
    tags.Depends_id:            "" => ""
    tags.Name:                  "" => "test1"
    tags.Terraform:             "" => "1"
    tenancy:                    "" => "<computed>"
    vpc_security_group_ids.#:   "" => "<computed>"

+ module.test1.null_resource.dummy_dependency

+ module.test2.aws_instance.instance
    ami:                        "" => "ami-98043785"
    availability_zone:          "" => "<computed>"
    ebs_block_device.#:         "" => "<computed>"
    ephemeral_block_device.#:   "" => "<computed>"
    instance_state:             "" => "<computed>"
    instance_type:              "" => "t2.small"
    key_name:                   "" => "test-admin-key"
    placement_group:            "" => "<computed>"
    private_dns:                "" => "<computed>"
    private_ip:                 "" => "<computed>"
    public_dns:                 "" => "<computed>"
    public_ip:                  "" => "<computed>"
    root_block_device.#:        "" => "<computed>"
    security_groups.#:          "" => "1"
    security_groups.3486488308: "" => "sg-f9ebf190"
    source_dest_check:          "" => "1"
    subnet_id:                  "" => "subnet-96eac7ff"
    tags.#:                     "" => "<computed>"
    tenancy:                    "" => "<computed>"
    vpc_security_group_ids.#:   "" => "<computed>"

+ module.test2.null_resource.dummy_dependency


Plan: 4 to add, 0 to change, 0 to destroy.

Apply of my test project:

KristjanEliass-MacBook-Pro:terraform_testing kristjan$ terraform apply
module.test1.aws_instance.instance: Creating...
  ami:                        "" => "ami-98043785"
  availability_zone:          "" => "<computed>"
  ebs_block_device.#:         "" => "<computed>"
  ephemeral_block_device.#:   "" => "<computed>"
  instance_state:             "" => "<computed>"
  instance_type:              "" => "t2.small"
  key_name:                   "" => "test-admin-key"
  placement_group:            "" => "<computed>"
  private_dns:                "" => "<computed>"
  private_ip:                 "" => "<computed>"
  public_dns:                 "" => "<computed>"
  public_ip:                  "" => "<computed>"
  root_block_device.#:        "" => "<computed>"
  security_groups.#:          "" => "1"
  security_groups.3486488308: "" => "sg-f9ebf190"
  source_dest_check:          "" => "1"
  subnet_id:                  "" => "subnet-96eac7ff"
  tags.#:                     "" => "3"
  tags.Depends_id:            "" => ""
  tags.Name:                  "" => "test1"
  tags.Terraform:             "" => "1"
  tenancy:                    "" => "<computed>"
  vpc_security_group_ids.#:   "" => "<computed>"
module.test1.aws_instance.instance: Creation complete
module.test1.null_resource.dummy_dependency: Creating...
module.test1.null_resource.dummy_dependency: Creation complete
module.test2.aws_instance.instance: Creating...
  ami:                        "" => "ami-98043785"
  availability_zone:          "" => "<computed>"
  ebs_block_device.#:         "" => "<computed>"
  ephemeral_block_device.#:   "" => "<computed>"
  instance_state:             "" => "<computed>"
  instance_type:              "" => "t2.small"
  key_name:                   "" => "test-admin-key"
  placement_group:            "" => "<computed>"
  private_dns:                "" => "<computed>"
  private_ip:                 "" => "<computed>"
  public_dns:                 "" => "<computed>"
  public_ip:                  "" => "<computed>"
  root_block_device.#:        "" => "<computed>"
  security_groups.#:          "" => "1"
  security_groups.3486488308: "" => "sg-f9ebf190"
  source_dest_check:          "" => "1"
  subnet_id:                  "" => "subnet-96eac7ff"
  tags.#:                     "" => "3"
  tags.Depends_id:            "" => "4833549553111269374"
  tags.Name:                  "" => "test2"
  tags.Terraform:             "" => "1"
  tenancy:                    "" => "<computed>"
  vpc_security_group_ids.#:   "" => "<computed>"
module.test2.aws_instance.instance: Creation complete
module.test2.null_resource.dummy_dependency: Creating...
module.test2.null_resource.dummy_dependency: Creation complete

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

:+1:

+1. Really need this.

👍

+1

+1

+1

+1

+1

+1

+1

I think if the method described by @phinze back on March 26 (where you can create the dependency through the output of module A being used as input to module B) actually created the dependency my use case would be handled but that is not the result that I'm seeing.

It seems like that is working between the networking layer items (VPC, subnets, etc) and instances, where they wait for the parent networking items to be created. However, when the two modules in question each create an aws_instance, with one depending on the other, that is when TF doesn't seem to know that there needs to be an order to it. At least that appears to be the case in my situation.

I will say, though, that the null_resource workaround posted by @kristjanelias is working for my situation so thank you very much for that. Not sure I understand exactly _why_ it's working but for today I'll accept just that it is.

The resource dependencies get tracked individually, so setting an output variable in module A to be used by module B only works for the resources in module B that actually use that variable. Having a top level 'depends_on' would simplify that.
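
A small sketch of what that means (names invented): within module "b", only the resource that actually references the wired-in value waits for module "a"; a sibling resource that doesn't use it can be created in parallel.

module "b" {
    source    = "./b"
    leader_ip = "${module.a.leader_ip}"
}

# inside module "b"
resource "aws_instance" "agent" {
    ami           = "${var.ami}"
    instance_type = "t2.micro"

    # uses var.leader_ip, so it waits for the resource in module "a" that produces it
    user_data = "consul agent -retry-join=${var.leader_ip}"
}

resource "aws_instance" "other" {
    ami           = "${var.ami}"
    instance_type = "t2.micro"
    # no reference to var.leader_ip, so no dependency on module "a"
}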

:+1:

Just curious, is this on the roadmap? It would make my configuration significantly simpler and easier to understand. Thanks!

I ended up implementing something very similar to what @kristjanelias described. Ours is a very non-trivial config and has several separate modules for the VPC itself, some IAM rules, S3 buckets, a few autoscaling groups and so forth.

Sure, a one-liner with a Terraform-sanctioned depends_on would be ideal if we could get it in the roadmap, but his method so far seems to be working well enough for our needs.

+1

Has anybody made the "null_resource" "dummy_dependency" workaround work with dependencies between different modules??

@kristjanelias any idea on how that could work??

@blackjid I did this with my TF plans when doing a large consolidated chef stack.

I achieve this by passing an output from another module (one that hopefully runs at or near the very end) into a 'wait_on' input. This wait_on input simply echoes the input.

Example:

module "blah" {
  foo = "bar"
  ...
}
module "eww" {
  foo = "baz"
  ...
  wait_on = "${module.blah.someoutput}"
}

In the module "eww" I parse that wait_on (default there as well so I don't require it) and push that to a null_resource:

resource "null_resource" "waited_on" {
  resource "local-exec" {
    command = "echo 'Waited for ${wait_on} to complete"
  }
}

While it does work... I really feel that something as simple as depends_on or some other verb such as when should apply here. The use cases for them are different, however the flow control is very necessary to draft one plan that can handle more than one scenario. You can see my work on my repo tf_chef_stack, which calls several of my other plans.

@kristjanelias

your "dirty workaround" looks pretty clean to me. It works for my use case. Thanks.

+1 would be really great to have depends_on = ["module.module_name"] feature

++1

+1

+1 and @kristjanelias solution works!

+1

I'm about to abandon Terraform over this.

I have 3 modules, they are set up in the following order:
1) Create Bastion Server
2) Create Consul Server
3) Create Docker ECS Cluster

This should be pretty freaking simple.
Step 3 relies on both a security group name and an IP address for the consul agent to join.

It builds a launch configuration using the security group name, which ends up saying COMPUTED in the plan and never gets put into AWS correctly (it gets put in as Default)

It also has a user data script that gets created, but it errors out saying "unknown variable accessed" for the variable the file_template is trying to access, even though it matches up perfectly.

If I manually assign both values in the 3rd module, the script runs fine.
I've tried using the workarounds above, neither of them work.

This issue has been open for a YEAR AND A HALF, and has nearly 70 +1s on it.
Does anyone actually prioritize fixes? Because this prevents modules from being used in any reasonable scenario... (Seriously, after spending nearly two days troubleshooting this and still having no solution, to find this ticket sitting here this long, I'm really holding back.)

@dguisinger, are your modules using outputs wired to inputs for other modules? How are your modules structured? Are they online where they can be reviewed?

I ask because, while I would like to see depends_on for modules, and I'm sure there is plenty of legitimate breakage due to this missing, it has been awhile since I've run into any major issues with modules in this respect. I have ~35 modules in general use across a number of deployments, with about 10 - 20 of those in use in each deployment. The most complex deployment has nearly 200 resources. I have modules for user data/init, for consul leaders, their networks and asgs, for consul agents, security groups, and modules that layer upon other modules, so it's fair to say I am sufficiently leveraging modules, even with TF in its current state (using 0.6.x, I have not yet migrated any environments to 0.7.x).

Thanks @ketzacoatl
It wasn't online; I've attached a copy here with my keys and SSH keys stripped out, as well as the hacks from above that weren't working for me. If you run it, it requires AWS credentials, a Route53 zone ID and a logzio access key.

project.zip

It was not an incredibly complex project, just a handful of resources compared to yours, which is why it doesn't make sense to me that module outputs are not being waited for when used for the next module, and why it's so frustrating to be stuck on this...

One thought that did just cross my mind is that the consul server is created using count (it's based on the terraform example that comes with consul); I don't know if that prevents it from waiting on the output. I don't know that it would explain why the name or id of security group "consulagent" also comes back empty.

It's the docker cluster module that is bombing with the two problems.
1) If you check the security groups in the plan, it says 'computed'
2) The file template is blowing up on variables; if you hard-code a value for the IP address the problem goes away.

This is running under Consul 0.6.16

Ah, yes.. I think your consul server output should be using the count index or * in some way, and the way you have it now might be leading to failure with the module dependency (I'm not sure why though). https://www.terraform.io/docs/configuration/resources.html#using-variables-with-count might shed some light on that.
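
For example, something along these lines (the aws_instance.consul name is assumed), using the splat syntax so the output covers every counted instance:

# All private IPs of the counted consul servers, joined into one string
output "consul_private_ips" {
    value = "${join(",", aws_instance.consul.*.private_ip)}"
}

# Or just the first instance
output "consul_leader_ip" {
    value = "${element(aws_instance.consul.*.private_ip, 0)}"
}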

Not that you should do it differently, but just to offer ideas on other ways this can be accomplished... I'm running the consul leaders as an ASG, and use AWS private DNS to advertise where to find the consul leaders. I pass that DNS name on to the hosts that run the consul agent (so any module that runs an agent has a dependency on the consul leaders). For this method to work well, the leaders need to be in their own subnet, and it helps to make that subnet as small as possible (/28 on AWS).
Currently, the downside to this method is that the leader DNS record includes _all_ IPs the leaders _might_ be at, which means the consul agents need to poke at each of those IPs until it finds an actual leader. Surprisingly, this actually works rather well in practice (consul agents will retry forever with the right config), it just means the agents sometimes take a little while to find the leaders. A colleague of mine is working on a solution to that problem (eg, update a DNS record with the IPs of the nodes in the ASG, whenever those nodes change.. so the DNS entry has _only_ the leader IPs, rather than all possible IPs), but that tool isn't yet complete.

Either way, depends_on for modules would be great, but I think you have another issue interfering. I ran into something similar, where a bug + improperly using create_before_destroy would fsck up my module dependencies. Some issues can be difficult to debug in Terraform, but it's a lot better than Cloud-Formation..

@ketzacoatl Okay, I feel like a total idiot, I somehow missed that I forgot the "var." in front of my two variables that kept coming up without values. I'm shocked it didn't give me a more forceful error at the point of referencing them, vs saying calculated for the security group or throwing errors inside the template instead of where I was mapping the variables... now it works.

I'll definitely look at your idea for Consul, my plan was to move it to autoscaling at some point. I'm just learning Consul at this point and trying to get our dev environment scripted.

If there is a place where TF fails me the most, it is with errors telling me where to look, so I can sympathize with that (TF is usually pretty good here, but there are plenty of corner cases where you get a "foo is not right", and you have lots of "foos" to go look through, and no indication of where or which module). Glad you got it sorted @dguisinger. Good luck with Consul, it's great stuff. RE what I noted on Consul, I expect to be publishing the module repo at some point.. still working on docs and polishing.

:+1:

+1

+100

+1

@felnne Can you update the issue to include @phinze 's comment detailed below from May 26th which is a valid workaround?

Just wanted to mention that while we don't yet support whole-module dependencies, there's nothing stopping you from wiring an output of one module into an input of another, which will effectively draw a dependency between the appropriate resources.

I'm not saying this necessarily will solve all use cases, but when I was trying to figure out why I haven't bumped into the need for depends_on = ['module.foo'], I realized that this is what I tend to do in my config.

This is a valid workaround for most of the use cases and people don't know about it, or can't find it in this issue. People ask about this feature regularly on Slack and IRC, and everyone is still commenting on this issue. Whilst depends_on for modules would make sense in the long run, I think the workaround is OK for now, and people should be aware of it, so let's make it easier for them to find.

@willejs done - though I don't think (but may be wrong) that's a valid workaround for my original use case specifically.

In my case, the null provisioner I was using didn't actually depend on any of the outputs of the module directly (otherwise Terraform would implicitly recognise the dependency and do things in the right order).

Using the output of the module as the input for another wouldn't help here, and is why I tried to use depends_on to explicitly tell Terraform to wait for the module to be sorted out.

Since reporting this, I have moved away from using modules and do things differently, such that I have a dependency on the IP address of a compute resource, which implicitly tells Terraform to wait until that's ready before doing the provisioning, and so works fine.
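
Roughly, that pattern looks like the following (a sketch rather than the actual config; the droplet resource name and playbook path are assumptions):

resource "null_resource" "provisioning" {
    # Referencing the droplet's IP address creates the implicit dependency,
    # so the local-exec only runs once the droplet exists.
    triggers {
        droplet_ip = "${digitalocean_droplet.web.ipv4_address}"
    }

    provisioner "local-exec" {
        command = "ansible-playbook -i '${digitalocean_droplet.web.ipv4_address},' provisioning/bootstrap-digitalocean.yml"
    }
}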

I would like to keep this open to address the original problem of depending on a module, without using its outputs, as I feel that's still something someone might want to do, and which the workarounds so far don't (as I understand) solve.

P.S. I don't mean to imply you thought I should close this, but I didn't want the fact that I no longer need this to cause it to close.

Hey @phinze could you provide an example for the workaround you suggested?

Can it be used for aws_ecs_service where it's suggested to use depends_on?

Note: To prevent a race condition during service deletion, make sure to set depends_on to the related aws_iam_role_policy; otherwise, the policy may be destroyed too soon and the ECS service will then get stuck in the DRAINING state.

Also, in my case the aws_iam_role_policy is being created in a separate terraform environment. I can reference it by using terraform_remote_state.myapp.role and pass it to the module in my current environment. However, it creates a dependency on the whole terraform_remote_state.myapp; I'm not sure it'll prevent the race condition described in the note above.
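
For reference, the shape being described looks roughly like this (the bucket/key and the role output name are placeholders):

data "terraform_remote_state" "myapp" {
    backend = "s3"

    config {
        bucket = "my-state-bucket"
        key    = "myapp/terraform.tfstate"
        region = "us-east-1"
    }
}

module "ecs_service" {
    source = "./ecs-service"

    # Passing the remote-state output into the module; only the resources
    # that actually use this value pick up the dependency.
    iam_role = "${data.terraform_remote_state.myapp.role}"
}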

@mengesb wrote:

I achieve this by passing an output from another module (one that hopefully runs at or near the very end) into a 'wait_on' input. This wait_on input simply echoes the input.

Example:

module "blah" {
foo = "bar"
...
}
module "eww" {
foo = "baz"
...
wait_on = "${module.blah.someoutput}"
}
In the module "eww" I parse that wait_on (default there as well so I don't require it) and push that to a null_resource:

resource "null_resource" "waited_on" {
resource "local-exec" {
command = "echo 'Waited for ${wait_on} to complete"
}
}
While it does work... I really feel that something as simple as depends_on or some other verb such as when should apply here. The use cases for them are different, however the flow control is very necessary to draft one plan that can handle more than one scenario. You can see my work on my repo tf_chef_stack, which calls several of my other plans.

I find that this only works when the module output that you are effectively waiting on is computed close to when the module is actually complete. In other words, Terraform will compute the value for ${wait_on} as soon as it possibly can and then your script carries on its merry way.

This is problematic for me because I have abstracted, via some Maven magic, provider-specific details for spinning up nodes and then have general purpose modules for applying layers (e.g., install docker, install consul as a docker container, install portworx as a docker container depending on consul, ...)

And yes, I have realized what I am doing: https://twitter.com/dweomer/status/784147607786364928.

I have tried chaining the output from one module to input for the next (i.e. ip address, uuid(), etc) but because I am using null_resource and provisioning with remote-exec against hosts that are described by the module inputs, all such values are available once computed and so my dependent layers simply do not block.

I am going to try and add a resource in each module that depends on all other resources in said module with a reasonably idempotent trigger value (provisioner id maybe?) and add that as a module output.

Hmm, no luck. Something as simple as a module and/or provisioner execution time would make this doable.

I was poking around the other day and noticed some discussion of a change to allow capture of stderr/stdout from provisioner scripts/commands/executions (can't find the issue). This would also work for delaying computation.

I got this to work by setting up a null_data_source as a pass-through for module inputs. Making this undocumented data source depend (via depends_on) on all other resources in the module did the trick, e.g.

modules/docker/main.tf:

data "null_data_source" "docker_layer" {
  depends_on = [
    "null_resource.docker_install",
    "null_resource.docker_info",
  ]

  # passing through maps and lists seems to encounter a translation/boundary issue much like the (defunct since 0.7)
  # module strings-only boundary. so we serialize them here and de-serialize in the output definitions
  inputs {
    docker_engine_version          = "${var.docker_engine_version}"
    docker_host_count              = "${var.docker_host_count}"
    docker_host_private_ipv4_addrs = "${join(" ", var.docker_host_private_ipv4_addrs)}" //
    docker_host_public_ipv4_addrs  = "${join(" ", var.docker_host_public_ipv4_addrs)}"
    docker_ops_login               = "${var.docker_ops_login}"
    docker_ops_key_file            = "${var.docker_ops_key_file}"
    docker_pgp_fingerprint         = "${var.docker_pgp_fingerprint}"
    docker_required_packages       = "${join(" ", sort(var.docker_required_packages))}"
  }
}

modules/docker/outputs.tf:

output "engine_version" {
  value = "${data.null_data_source.docker_layer.outputs["docker_engine_version"]}"
}

output "host_count" {
  value = "${var.docker_host_count}"

  # this pass-through currently generates an interpolation/parsing error when used with arithmetic:
  #   * strconv.ParseInt: parsing "${var.consul_host_count - var.consul_server_count}": invalid syntax
  #
  # value = "${data.null_data_source.layer.outputs["docker_host_count"]}"
}

output "host_public_ipv4_addrs" {
  value = "${split(" ", data.null_data_source.docker_layer.outputs["docker_host_public_ipv4_addrs"])}"
}

output "host_private_ipv4_addrs" {
  value = "${split(" ", data.null_data_source.docker_layer.outputs["docker_host_private_ipv4_addrs"])}"
}

output "ops_login" {
  value = "${data.null_data_source.docker_layer.outputs["docker_ops_login"]}"
}

output "ops_key_file" {
  value = "${data.null_data_source.docker_layer.outputs["docker_ops_key_file"]}"
}

output "pgp_fingerprint" {
  value = "${data.null_data_source.docker_layer.outputs["docker_pgp_fingerprint"]}"
}

output "required_packages" {
  value = "${split(" ", data.null_data_source.docker_layer.outputs["docker_required_packages"])}"
}

cluster/consul.tf:

module "docker" {
  source = "docker"

  docker_host_count              = "${var.cluster_size}"
  docker_host_public_ipv4_addrs  = "${module.provider_cluster.node_public_ipv4_addrs}"
  docker_host_private_ipv4_addrs = "${module.provider_cluster.node_private_ipv4_addrs}"

  docker_ops_login    = "${var.cluster_ops_login}"
  docker_ops_key_file = "${var.cluster_ops_key_file}"

  docker_required_packages = "${var.cluster_required_packages}"
}

module "consul" {
  source = "consul"

  consul_host_count              = "${module.docker.host_count}"
  consul_host_public_ipv4_addrs  = "${module.docker.host_public_ipv4_addrs}"
  consul_host_private_ipv4_addrs = "${module.docker.host_private_ipv4_addrs}"

  consul_ops_login    = "${module.docker.ops_login}"
  consul_ops_key_file = "${module.docker.ops_key_file}"
}

@kristjanelias workaround did the job for me. Thanks.

Still a big +1 for the module dependency feature. It leads to greater dependency tree depth (and longer infrastructure spawn times), but it still feels like, for some use cases, it would really make things much simpler.

Honestly I don't see a problem with the current implementation. Modules don't depend on each other; the resources inside them do. All you achieve with depends_on on modules is slower execution, as it would make TF wait for _all_ resources in one module to complete before proceeding to the next one, even if the real dependency is between just a few resources within 2 modules.

FWIW, we've seen some undesirable side effects to having implicit dependencies in the current implementation - most notably unexpected dependency cycles which occur when changing variables that impact two modules - optional explicit depends_on parameters might mitigate these issues.

Currently the only way that we can work around the dependency cycles is to remove one of the module definitions, run a plan and apply, and then re-add them, which is far from ideal.

Maxim, sounds like you're looking at a specific use case. There are several situations where you need to wait for module x to be done before doing module y. Think of a situation where you want to update a kernel before installing docker and need to reboot. There are many times that you will want to have modules rather than one gynormous resource.

+1

+1

+1 My use case: I have a module to create my consul cluster, then a module to set up a docker swarm mode cluster. I would like to be able to spin up my consul cluster first so that when my userdata runs on the swarm mode cluster it can store my encrypted manager/worker tokens in consul and use them to bootstrap those nodes.

@knehring you can do a couple things for what you are describing.

  1. Set up an internal consul ELB that the servers attach to, then have your docker hosts require the outputted value of the ELB (see the sketch after this list).
  2. This also works for me: https://gist.github.com/jkordish/95bd29084ec2907cf60697ccfc66e553
    essentially just utilize the cloud-init.sh for the user-data and have it wait until the servers are up.
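
A sketch of option 1 (module and output names invented): the consul module exports the internal ELB's DNS name and the swarm module consumes it, which draws the dependency.

# In the consul module
output "consul_elb_dns" {
    value = "${aws_elb.consul_internal.dns_name}"
}

# In the root configuration
module "swarm" {
    source          = "./swarm"
    consul_endpoint = "${module.consul.consul_elb_dns}"
}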

👍

Pretty Please.

My use case is an AWS ECS cluster (module)
and many AWS ECS services. It tries to create the ECS services before the cluster even though I need to pass the cluster ID from the cluster module to the service.

@myoung34 are you sure? How would it know the ECS cluster id to pass to the services if it weren't creating it first? Are you saying it passes something else instead?

@redbaron Nope, mine was a different issue. I'm unrelated

One side of this (depends_on referencing a module) is coming in Terraform 0.8: https://github.com/hashicorp/terraform/pull/10076

+1

👍

@mitchellh

How do I make a module accept depends_on?

The PR #10076 makes it possible for a resource to depends_on a module:

resource "aws_instance" "web" {
    depends_on = ["module.foo"]
}

But what about the opposite direction?

module "app" {
   depends_on = ["aws_instance.web"]
}

I got the error below.

Error getting plugins: module root: module app: depends_on is not a valid parameter

terraform version is v0.10.3

Updates:

Seems the problem has been reported in #10462, but that ticket is about a module that depends_on another module.

In my case, I just need the module to depends_on another resource.

+1

Hello! I'm using this script to run plan and apply in the right sequence.
First, mkdir deploy-history.
To run, use:
$ ./script.sh module1 module2 module3 module4
where moduleN is the module name in main.tf. Remember to put the modules in the right sequence of creation based on your dependencies.

#!/bin/bash

for mod in "$@"
do
  terraform plan -target=module.$mod -out deploy-history/$mod.out && terraform apply deploy-history/$mod.out
  if [ "$?" -ne "0" ]
  then
    echo "[ERROR] Module: ${mod^^}"
    exit 1
  fi
done

I solved my problems using depends_on in the output block

I tried to create some aws_subnet resources which depend on an aws_vpc and some aws_vpc_ipv4_cidr_block_association resources.

The problem was I grouped the aws_vpc and aws_vpc_ipv4_cidr_block_association under the same module. While the aws_subnet only directly depends on the aws_vpc it still needs the aws_vpc_ipv4_cidr_block_association to be properly initialized.

So I marked the vpc_id output to wait for the initialization of aws_vpc_ipv4_cidr_block_association and everything worked out correctly

output "vpc_id" {
  depends_on  = ["aws_vpc_ipv4_cidr_block_association.secondary_cidrs"]
  description = "ID of the VPC"
  value       = "${aws_vpc.vpc.id}"
}

Honestly I don't see a problem with current implementation. Modules don't depend on each other, resources inside them do.

This is exactly correct.
The trick you can do to create a dependency between two resources is to simply add some output of the resource you need to depend on as an input to the thing that has the dependency (almost always this happens automatically, since this is how the DAG is built in the first place).

You can put this value in a tag or some other harmless place, or you can mess around with it to only make Terraform think it is a dependency:

locals {
  dependent_name = "${var.client_name},${var.dependency}"
  name = "${element(split(",", local.dependent_name), 0 )}"
}
variable "dependency" {}

and now use
name = "${local.name}"
in the resource that you want to depend on something

To work around this, I've done the following:

  • Added the following variable to each of my modules:
variable "dependencies" {
  type = "list"
}
  • Added the following resource.null_resource to the beginning of each of my modules:
resource "null_resource" "dependency_getter" {
  provisioner "local-exec" {
    command = "echo ${length(var.dependencies)}"
  }
}
  • Added the following depends_on attribute to all resource(s) that will be constructed first within the module:
depends_on = [
  "null_resource.dependency_getter",
]
  • Added the following resource.null_resource to the end of each of my modules:
resource "null_resource" "dependency_setter" {
  depends_on = [
    # List resource(s) that will be constructed last within the module.
  ]
}
  • Added the following output to each of my modules:
output "depended_on" {
  value = "${null_resource.dependency_setter.id}"
}

This allows me to use a module in the following manner:

module "devops_cluster" {
  source           = "./Infrastructure/devops-gke-cluster"
  gcp_project_name = "${var.gcp_project_name}"
  gcp_zone         = "${var.gcp_zone}"

  providers = {
    kubernetes = "kubernetes.devops"
    helm       = "helm.devops"
  }

  dependencies = [
    "${module.devops_dns_zone.depended_on}",
  ]
}

This also makes it possible to specify dependencies on multiple modules.

+1 we really need this feature

Seems like this issue should be reopened, since only one half of it is now supported? There's also #10462 (which is still open) but it was created in 2016...is that the right place to watch for updates on this issue?

+1. Really need this feature, as I already hit the problem a few times.

+1

+1 This feature will be awesome

Can you please re-open this issue? I think this was closed by accident. Only half of this was implemented. @phinze @mitchellh

Everyone +1ing this: Please stop! +1 the issue on top, don't spam with comments. Thanks!

Yeah we need this.

We need this, too.

Think of this workflow:
EKS -> RDS -> Application Deployment with Helm -> API Gateway -> Cloudfront

Without those dependencies you end up with, e.g., data.kubernetes_service failing with "connection refused", as the EKS cluster does not exist before the initial rollout.

I am defining ECS task definitions and services as a module and would like them to wait on the ECS cluster and/or load balancer (since terraform does not seem to do this on its own). So I would very much like the other half of this to be reopened and completed.

@charles-salmon Great workaround! Thank you so much for sharing!

For my personal use case, I modified the approach a bit to ensure the dependencies are enforced on all Terraform runs and not just the initial one.

I changed the output to be.

output "depended_on" {
  value = "${null_resource.dependency_setter.id}-${timestamp()}"
}

I used the timestamp() function here to ensure this will force a change each run. This helps when resources in the depended on module change and we want to ensure the dependent module will respect those changes.

I also changed the null_resource.dependency_getter to be.

resource "null_resource" "dependency_getter" {
  triggers = {
    my_dependencies = "${join(",", var.dependencies)}"
  }
}

This uses the triggers sub-block to ensure it runs on all runs and not just on creation like the provisioner block.

Still can't add depends_on to modules from terraform module registry

@charles-salmon Great workaround! Thank you so much for sharing!

For my personal use case, I modified the approach a bit to ensure the dependencies are enforced on all Terraform runs and not just the initial one.

I changed the output to be.

output "depended_on" {
  value = "${null_resource.dependency_setter.id}-${timestamp()}"
}

I used the timestamp() function here to ensure this will force a change each run. This helps when resources in the depended on module change and we want to ensure the dependent module will respect those changes.

I also changed the null_resource.dependency_getter to be.

resource "null_resource" "dependency_getter" {
  triggers {
    my_dependencies = "${join(",", var.dependencies)}"
  }
}

This uses the triggers sub-block to ensure it runs on all runs and not just on creation like the provisioner block.

An equals sign after "triggers" did the trick for me in TF 0.12.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
