Terraform-provider-aws: Modules inter-dependencies not resolving

Created on 6 Jul 2017 · 16 comments · Source: hashicorp/terraform-provider-aws

Hi there,

I initially described this issue in one of the comments in https://github.com/hashicorp/terraform/issues/10462#issuecomment-313259912 but was asked by @apparentlymart to open a separate issue on that.

Below is an example of my problem. Basically, I have two modules, where an input parameter to the second module is an output value computed from the first module, and it does not look like the first module is created before Terraform tries to instantiate the second.

So, here is how my first module is defined; it creates a security group and exposes its ID through the output variable below:

module "ecs_elb_security_group" {
  source = "git::ssh://<lib-repository-for-general-resources>//security_group?ref=feature/ecs_v2"

  name="nodify-elb"
  description="Security Group for ECS ELB"

  environment_name="${var.environment_name}"
  environment_type="${var.environment_type}"
  vpc_id="${var.vpc_id}"
}

output "id" {
  value = "${module.ecs_elb_security_group.id}"
}
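For the reference above to resolve, the `security_group` module itself presumably declares a matching `id` output. A minimal sketch of its likely shape, since the module source is not shown here and the resource name `this` is an assumption:

```hcl
variable "name" {}
variable "description" {}
variable "environment_name" {}
variable "environment_type" {}
variable "vpc_id" {}

resource "aws_security_group" "this" {
  name        = "${var.name}"
  description = "${var.description}"
  vpc_id      = "${var.vpc_id}"
}

# Exposing the computed ID is what lets callers reference
# module.ecs_elb_security_group.id and gives Terraform the dependency edge.
output "id" {
  value = "${aws_security_group.this.id}"
}
```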

And this is the instantiation of another module, which relies on the computed `ecs_elb_security_group.id` value from above:

module "ecs_elb" {
  source = "git::ssh://<lib-repository-for-general-resources>//classic_load_balancer?ref=feature/ecs_v2"

  environment_name="${var.environment_name}"
  environment_type="${var.environment_type}"

  elb_name="nodify-ecs"
  aws_region="${var.aws_region}"
  subnets="${var.ecs_elb_subnets}"

  //This will be internet facing service
  internal = false

  listeners = [
    {
      lb_port = "${var.nodify_elb_port}"
      lb_protocol = "tcp"
      instance_port = "${var.nodify_docker_host_port}"
      instance_protocol = "tcp"
    }
  ]

  healthcheck_target="HTTP:${var.nodify_elb_port}/"

  security_groups = "${module.ecs_elb_security_group.id}"
}

So, from the above, the `${module.ecs_elb_security_group.id}` value should already be computed by the time it is passed to `security_groups` of the `ecs_elb` module, but it is not. I believe this falls into the same realm as the other use cases in that ticket: Terraform needs to be able to determine that the `ecs_elb_security_group` module is a dependency of the `ecs_elb` module, and should instantiate all resources of `ecs_elb_security_group` before instantiating the `ecs_elb` module.

Thanks.

Labels: bug, upstream-terraform


All 16 comments

Thanks for filing this as a new issue @dtserekhman-starz!

In order to dig into this, it would be very useful to have some of the other things that were included in the _new issue_ template you deleted here:

  • Terraform Version: Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade, because your issue may have already been fixed.
  • Debug Output: Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.

The above will help us to understand the shape of the Terraform graph while running your config, which is important to understand why things happen in the order they do.

Thanks!

@apparentlymart ,

My Terraform version is the latest at the moment:

```
terraform --version
Terraform v0.9.11
```

Please see the link to the debug output below.

This is the debug output from the `terraform plan` command, which fails with the following error:

```
1 error(s) occurred:

* module.nodify_main.module.ecs_elb.aws_elb.classic_elb: security_groups: should be a list
```

When I hardcode `security_groups` (from the above code snippet) to an empty string instead of `${module.ecs_elb_security_group.id}`, the `terraform plan` command succeeds.

https://gist.github.com/dtserekhman-starz/1e0414d5497b49d8a1180af73b56aa1a
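Worth noting for anyone landing here: on Terraform 0.11 and earlier, the usual first thing to try for a `should be a list` error is wrapping the single interpolation in list brackets. A sketch (this addresses the type error; it may not resolve the cross-module ordering this issue is about):

```hcl
module "ecs_elb" {
  # ... other arguments as above ...

  # Pass a one-element list rather than a bare string interpolation.
  security_groups = ["${module.ecs_elb_security_group.id}"]
}
```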

Great! Thanks for these details, @dtserekhman-starz!

I am also experiencing this same issue. I have a handful of modules to provision various kinds of ECS resources, ALBs and target groups.

Here is a snippet of how these modules are being used:

```hcl
# create ALB
module "hello-world_alb" {
  source = "..."

  name               = "hello-world"
  environment        = "production"
  security_group_ids = ["${aws_security_group.hello-world-alb.id}"]
  subnet_ids         = "${var.hello-world-private-subnets}"
  create_dns_record  = false
  internal           = true
}

# create alb target group
module "hello-world_alb_target_group" {
  source = "..."

  name             = "hello-world"
  alb_arn          = "${module.hello-world_alb.id}"
  cost_center      = "Microservices"
  environment      = "production"
  container_port   = 8080
  vpc_id           = "${var.hello-world-vpc-id}"
  healthcheck_path = "/healthcheck"
  port             = 8080
  protocol         = "HTTP"
}

module "hello-world_task_definition" {
  source = "..."

  name                  = "hello-world"
  container_definitions = "${data.template_file.hello-world.rendered}"
  task_role_arn         = "arn:aws:iam::x:role/service-hello-world"
}

# create ecs service
module "hello-world_service" {
  source                             = "..."
  name                               = "hello-world"
  cluster                            = "${var.hello-world-cluster-arn}"
  task_definition                    = "${module.hello-world_task_definition.arn}"
  placement_strategy_type            = "spread"
  placement_strategy_field           = "instanceId"
  placement_constraints_type         = "memberOf"
  placement_constraints_expression   = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b, us-west-2c]"
  desired_count                      = 0
  deployment_minimum_healthy_percent = 0
  deployment_maximum_percent         = 100
  container_name                     = "hello-world"
  target_group_arn                   = "${module.hello-world_alb_target_group.id}"
  container_port                     = 8080
}
```

If I run `terraform plan && terraform apply`, I get:

```
* aws_ecs_service.main: InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-west-2:542640492856:targetgroup/xxx/cf432638d2cd0967 does not have an associated load balancer.
```

If I run `terraform plan` and `terraform apply` a second time, everything goes as expected.

Using Terraform 0.9.9.
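For the `does not have an associated load balancer` error specifically, a workaround that circulated at the time was to have the ECS service read the target group ARN through the listener rather than directly, so the service implicitly waits for the listener that attaches the target group to the load balancer. A sketch under assumptions about the module internals; the resource name, output name, and attribute path below are assumptions, not taken from this comment:

```hcl
# Hypothetical output inside the ALB/target-group module: reading the ARN
# off the listener's default action (rather than the target group resource
# itself) makes consumers depend on the listener having been created.
output "attached_target_group_arn" {
  value = "${aws_lb_listener.this.default_action.0.target_group_arn}"
}

# Caller side: feed the listener-derived ARN into the service module.
module "hello-world_service" {
  # ... other arguments as above ...
  target_group_arn = "${module.hello-world_alb_target_group.attached_target_group_arn}"
}
```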

Same as mine, Terraform v0.9.11.

I think I am facing the same issue. Is there any solution for this?

module "public_subnet" {
source = "../../modules/network/subnet/public"
env = "${var.env}"
vpc_id = "${module.vpc.vpc_id}"
availability_zones = "${var.availability_zones}"
internet_gateway_id = "${module.internet_gateway.gateway_id}"
public_subnet_cidr = "${var.public_subnet_cidr}"
}

output "public_subnet_id" {
value = "${aws_subnet.public_sn.id}"
}

module "nat_gateway" {
source = "../../modules/network/gateway/nat"
env = "${var.env}"
vpc_id = "${module.vpc.vpc_id}"
public_subnet_cidr = "${var.public_subnet_cidr}"
public_sn = "${module.public_subnet.public_subnet_id}"
}

I expect `${module.public_subnet.public_subnet_id}` to be computed and passed to the `nat_gateway` module, but I am getting the following error while the NAT gateway is being created:

```
Error applying plan:

* aws_nat_gateway.nat_gw.0: Error creating NAT Gateway: InvalidParameter: 1 validation error detected: Value '' at 'subnetId' failed to satisfy constraint: Member must have length greater than or equal to 1
	status code: 400, request id: 51b5cc30-0ae3-4d74-8ed9-2e2b35d36ba8
```
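A related note on passing computed subnet IDs between modules on 0.11 and earlier: because module outputs were effectively strings, the common pattern was to flatten a list with `join()` in the producing module and re-split it in the consumer (the join/split trick mentioned later in this thread). A sketch with names partly assumed from the snippet above (`var.az_count` and `aws_eip.nat_eip` are hypothetical):

```hcl
# Inside the public subnet module (assumed; the module source is not shown):
output "public_subnet_id" {
  value = "${join(",", aws_subnet.public_sn.*.id)}"
}

# Inside the NAT gateway module: split the string back apart per instance.
resource "aws_nat_gateway" "nat_gw" {
  count         = "${var.az_count}" # must be known at plan time on 0.11
  allocation_id = "${element(aws_eip.nat_eip.*.id, count.index)}"
  subnet_id     = "${element(split(",", var.public_sn), count.index)}"
}
```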

Another example:

```hcl
module "ecomtest_us-west-2_securitygroup" {
  source        = "../../../../modules/aws_securitygroup_pws"
  vpc_name      = "${var.name}"
  platform_name = "${var.platform_name}"
  env           = "${var.env_name}"
  region        = "${var.regions_west}"
  profile_name  = "${var.profile}"
}

module "ecomtest_us-east-1_securitygroup" {
  source        = "../../../../modules/aws_securitygroup_pws"
  vpc_name      = "${var.name}"
  platform_name = "${var.platform_name}"
  env           = "${var.env_name}"
  region        = "${var.regions_east}"
  profile_name  = "${var.profile}"
}

module "ecomtest_us-west-2_db_instance" {
  source               = "../../../../modules/aws_db_instance"
  region               = "${var.regions_west}"
  create               = "${var.west_create}"
  profile_name         = "${var.profile}"
  vpc_name             = "${var.name}"
  env                  = "${var.env_name}"
  platform_name        = "${var.platform_name}"
  instance_class       = "${var.db_nodetype}"
  multiaz              = "${var.db_multi}"
  subnet_group_name    = "${module.ecomtest_us-west-2_db_subnet_group.db_subnet_group}"
  option_group_name    = "${module.ecomtest_us-west-2_db_option_group.db_option_group}"
  parameter_group_name = "${module.ecomtest_us-west-2_db_parameter_group.db_parameter_group}"
  backup_window        = "${var.db_backup_window}"
  maintenance_window   = "${var.db_maintenance_window}"
  public               = "${var.db_public}"
  storage              = "${var.db_storage}"
  storage_type         = "${var.db_storage_type}"
  engine               = "${var.db_engine}"
  engine_version       = "${var.db_engine_version}"
  username             = "${var.db_default_username}"
  password             = "${var.db_default_password}"
}

module "ecomtest_us-east-1_db_instance" {
  source               = "../../../../modules/aws_db_instance"
  region               = "${var.regions_east}"
  create               = "${var.east_create}"
  profile_name         = "${var.profile}"
  vpc_name             = "${var.name}"
  env                  = "${var.env_name}"
  platform_name        = "${var.platform_name}"
  instance_class       = "${var.db_nodetype}"
  multiaz              = "${var.db_multi}"
  subnet_group_name    = "${module.ecomtest_us-east-1_db_subnet_group.db_subnet_group}"
  option_group_name    = "${module.ecomtest_us-east-1_db_option_group.db_option_group}"
  parameter_group_name = "${module.ecomtest_us-east-1_db_parameter_group.db_parameter_group}"
  backup_window        = "${var.db_backup_window}"
  maintenance_window   = "${var.db_maintenance_window}"
  public               = "${var.db_public}"
  storage              = "${var.db_storage}"
  storage_type         = "${var.db_storage_type}"
  engine               = "${var.db_engine}"
  engine_version       = "${var.db_engine_version}"
  username             = "${var.db_default_username}"
  password             = "${var.db_default_password}"
}
```

So I run the plan, and it tries to figure out the data for a security group that I create and output in the security group module above. I prefer to query AWS for the current info rather than reference the module, as it seems cleaner and lets me know exactly what I am looking for in the module. If I had `depends_on` for another module, that would work, but at this point the only way for me to get around this is to reference the above module as the input to the db module. With the above sample, there would be no way for Terraform to deduce the dependency.

I am facing the same issue on 0.10.2 as well.

This may be a simple reproducer of the same bug or a separate issue, but I ran into this when passing a resource reference into a module, and boiled it down to the simple resource file and module attached. In this case I'm creating an IAM policy resource and passing the ARN into a module that takes an array of roles. I get the error below when planning:

```
module.federated_myrole_support.aws_iam_role_policy_attachment.managed_attachment: aws_iam_role_policy_attachment.managed_attachment: value of 'count' cannot be computed
```

module_dependency_bug.zip
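For context on this error: on Terraform 0.11 and earlier, `count` must be resolvable at plan time, so it cannot be derived from a computed value such as an ARN or a list produced by another resource or module. The usual mitigation was to pass the count in as a separate, statically known variable. A sketch with hypothetical names (`roles_count`, `policy_arn`):

```hcl
variable "roles" {
  type = "list"
}

# Supplied by the caller alongside the list, so count is known at plan time.
variable "roles_count" {}

variable "policy_arn" {}

resource "aws_iam_role_policy_attachment" "managed_attachment" {
  count      = "${var.roles_count}"
  role       = "${element(var.roles, count.index)}"
  policy_arn = "${var.policy_arn}"
}
```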

Another use case that may be in the same ballpark came up today.

We are standardizing CloudWatch alarms across a number of different resources, and had hoped to create a simple module that would take a list of instance IDs to apply to instead of copy/pasting the same boilerplate CW alarm resources everywhere.

The attachment is a minimal repro, consisting of an EC2 instance and a monitoring module with a CW alarm.

  1. When starting at a count of 1 and creating both the instance and alarm simultaneously, creation fails with `module.cloudwatch.aws_cloudwatch_metric_alarm.cpu-usage: aws_cloudwatch_metric_alarm.cpu-usage: value of 'count' cannot be computed`.
  2. Comment out the cloudwatch module, create the instance, uncomment cloudwatch, create the alarm: no problem.
  3. Increase the count of aws_instance to 2: it fails again with the `count cannot be computed` error.
  4. Again comment out cloudwatch, increase the count to 2, create the instance, create cloudwatch: no problem.
  5. Decrease the count from 2 to 1: the expected alarm and instance are deleted.

I would have expected something along the lines of the behavior we see elsewhere when an ID or other attribute isn't available. No amount of join/split tricks helped here, but maybe depends_on for modules would have?

bug_alarm_module.zip
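On the `depends_on` question above: module-level `depends_on` did not exist in the 0.9–0.11 series, but it eventually shipped in Terraform 0.13, where this scenario can be expressed directly. A sketch in 0.13+ syntax, with module and resource names assumed from the description above:

```hcl
module "cloudwatch" {
  source = "./modules/cloudwatch"

  # Explicit module-level dependency, unavailable before Terraform 0.13:
  depends_on = [aws_instance.app]

  instance_ids = aws_instance.app[*].id
}
```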

I am encountering this too with version 0.11.2 (the exact same thing @gustavosoares described).

I am having exactly the same issue described in the issue description with version v0.11.4.

I'm experiencing the same thing as described in the issue on v0.11.4.

Any news here? We just wanted to start refactoring our directory tree of Terraform code (applied via Terragrunt) into a single-level file with modules, but this has become a showstopper for us.

Hi everyone 👋 Sorry for any frustration you have been running into.

It's probably worth starting off here by noting that there are quite a few unrelated bugs reported here. Almost all of them, at a quick glance (including the original post), likely belong upstream in Terraform core so they can be properly identified and triaged. Terraform core handles dependency ordering, the configuration language itself (e.g. defining what a module actually is), and the generic resource handling of count, depends_on, and lifecycle configurations.

For anyone running into issues with the lack of strong typing in Terraform 0.11 and below (e.g. `should be a list` errors), there are upstream issues in Terraform core that track this updated handling; they are potentially good references for further tracking the fix for your situation.

For anyone specifically looking for information about module dependency handling, I would recommend tracking that in the original https://github.com/hashicorp/terraform/issues/10462

For anyone specifically getting the ECS `does not have an associated load balancer` error, I would recommend tracking that in https://github.com/terraform-providers/terraform-provider-aws/issues/3495

Since all of the above reports seem to be related to upstream code or issues, with the exception of the ECS one, I am going to close this seemingly catch-all issue. If you do happen to have specific cases you would like investigated, please feel free to open new issues upstream or here, with all the relevant details, after checking to ensure there is not something similar already open. 👍

The good news is that there are some large improvements coming in the next version of Terraform, Terraform 0.12, that should help alleviate at least some of these described issues. A high-level sneak peek of some of the upcoming features/fixes can be found at: https://www.hashicorp.com/blog/terraform-0-1-2-preview
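As a concrete illustration of what changes (not part of the original comment): with 0.12's first-class expressions and real list types, the assignment from the top of this issue can be written without string interpolation, which removes the `should be a list` class of error:

```hcl
# Terraform 0.12+ syntax:
security_groups = [module.ecs_elb_security_group.id]
```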

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
