Hi all,
I'm trying to write a module for the AWS ElastiCache service, and it currently has two resources: an aws_elasticache_replication_group and an aws_elasticache_parameter_group.
The dependency between them is explicit rather than implicit, because I want the parameter group to be optional: when enabled, the module creates a custom parameter group and configures the replication group to use it; otherwise the replication group uses the default parameter group.
Creation works fine, but once both resources exist and I switch back to the default parameter group (which also removes the custom one), the update and the destroy do not happen in the order I expect, despite the explicit dependency.
Apologies if I'm missing something, I'm a TF newbie :-)
Terraform version: 0.11.8
modules/aws_elasticache/variables.tf
# [...]
variable "parameter_group_name" {
description = "The name of the parameter group to use"
default = ""
}
variable "create_parameter_group" {
description = "True to create a custom parameter group"
default = false
}
# [...]
modules/aws_elasticache/main.tf
locals {
  # Ugly workaround: if var.parameter_group_name is set, use it; otherwise fall
  # back to the id of the custom parameter group. The concat/element indirection
  # is the usual 0.11 trick for referencing a resource that may have count = 0.
  parameter_group_name = "${coalesce(var.parameter_group_name, (length(aws_elasticache_parameter_group.this.*.id) == 0 ? "" : element(concat(list(""), aws_elasticache_parameter_group.this.*.id), 1)))}"

  enable_create_parameter_group = "${var.parameter_group_name == "" ? var.create_parameter_group : 0}"
}
# Cluster
resource "aws_elasticache_replication_group" "this" {
depends_on = ["aws_elasticache_parameter_group.this"]
# [...]
parameter_group_name = "${local.parameter_group_name}"
# [...]
}
# Parameter group
resource "aws_elasticache_parameter_group" "this" {
  count = "${local.enable_create_parameter_group}"
  # [...]
}
main.tf
module "cache" {
  source = "./modules/aws_elasticache"
  # [...]
  #create_parameter_group = true
  #parameters = "${local.workspace_lists["cache_parameters"]}"
  #parameter_group_family = "redis5.0"
  parameter_group_name = "default.redis5.0"
  # [...]
}
Expected behavior: the replication group should be updated first (to point back at the default parameter group), and only then should the custom parameter group be destroyed.
Actual behavior: Terraform tries to destroy the parameter group first, but since the replication group still references it, the destroy fails.
module "cache" {
source = "./modules/aws_elasticache"
# [...]
create_parameter_group = true
# [...]
}
terraform plan
terraform apply
This creates both resources, with the replication group depending on the parameter group. Then switch back to the default parameter group:
main.tf
module "cache" {
  source = "./modules/aws_elasticache"
  # [...]
  parameter_group_name = "default.redis5.0"
  # [...]
}
terraform plan
terraform apply

Error: Error applying plan:
1 error(s) occurred:
* module.cache.aws_elasticache_parameter_group.this (destroy): 1 error(s) occurred:
* aws_elasticache_parameter_group.this: InvalidCacheParameterGroupState: One or more cache clusters are still members of this parameter group custom-pg, so the group cannot be deleted.
status code: 400, request id: ...
Thank you very much!
I am hitting this same problem right now. I create a network and then a firewall for that network, but the destroy fails because Terraform tries to destroy the network first, which the firewall still relies on:
* google_compute_network.default: The network resource 'projects/jumpserver/global/networks/test-network' is already being used by 'projects/jumpserver/global/firewalls/test-firewall'
I have to imagine there is a means to get it to destroy in the correct order without swapping the blocks in the config file. Help appreciated. :)
Maybe try setting the resource's lifecycle to create before destroy?
lifecycle {
  create_before_destroy = true
}
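Applied to the parameter group resource from the question, the suggestion would look roughly like this (just a sketch, not from the original configuration; whether create_before_destroy actually helps in this particular case is addressed in the reply below):

resource "aws_elasticache_parameter_group" "this" {
  count = "${local.enable_create_parameter_group}"
  # [...]

  # Ask Terraform to create a replacement before destroying the old resource,
  # so dependents are never left pointing at an already-deleted resource.
  lifecycle {
    create_before_destroy = true
  }
}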
Useful reference
https://www.hashicorp.com/blog/zero-downtime-updates-with-terraform
Thank you @gpanula :-), but in my case there's no need to destroy and recreate a resource; the replication group only needs to be updated, and only after that should the resource it depends on be destroyed.
@maxgio92 have you confirmed the dependency graph in this case (using terraform graph)?
@paultyng yes
@maxgio92 do you have a workaround for this problem?
I'm hitting the same issue with the OpenStack provider: during terraform destroy, Terraform tries to destroy the network first while the network still has floating IPs attached. I would expect Terraform to delete every resource that depends on the network before destroying it. How can I work around this?
I had the same issue today targeting OpenStack with Terraform v0.12.2 and provider.openstack v1.19.0. Terraform tried to destroy the subnet first while the VM using it and a related port still existed, so I had to remove the VM manually, after which everything went through.
After I added a depends_on clause to my compute definition, the destroy process ran in the proper order:
resource "openstack_compute_instance_v2" "my-vm" {
......
depends_on = [
"openstack_networking_subnet_v2.my-subnet"
]
.....
}
@mangobatao I'm sorry but not yet :-(
We are currently experiencing problems we believe are similar to this. We rely on Terraform's implicit dependencies to work out the order for creating resources, and so far it has done the right thing 100% of the time. When it comes to destroying the resources, however, it seems to delete them in the wrong order about 50% of the time, so a resource is deleted before the resources that still depend on it and the destroy fails.
I encountered the same problem while destroying an aws_api_gateway_deployment; I received this error:
Error: error deleting API Gateway Deployment (tg332y): BadRequestException: Active stages pointing to this deployment must be moved or deleted
status code: 400, request id: 24c14f76-766a-44c4-9610-26ca876ffe08
The error happens because the aws_api_gateway_base_path_mapping still exists, even though I added a depends_on on the API Gateway deployment to it.
It seems the destroy order does not follow the depends_on property.
My workaround is to combine Terraform with the AWS CLI in a provisioner, removing the base path mapping before the deployment is destroyed (see the sketch below).
I hope this issue will be solved as soon as possible; if anyone has any ideas, you're welcome to share them.
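A minimal sketch of that provisioner-based workaround, assuming a hypothetical domain name (api.example.com) and a mapping on the empty base path ('(none)' is the value the AWS CLI uses for it); the resource names are illustrative, not taken from the commenter's configuration:

resource "aws_api_gateway_deployment" "this" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  # [...]

  # Destroy-time provisioner: remove the base path mapping via the AWS CLI
  # just before this deployment is destroyed, so nothing still points at an
  # active stage of the deployment when the delete runs.
  provisioner "local-exec" {
    when    = destroy
    command = "aws apigateway delete-base-path-mapping --domain-name api.example.com --base-path '(none)'"
  }
}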

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.