For some reason the AWS provider doesn't know that a DB subnet group must be destroyed before any of the subnets it relies on can be deleted.
Hey @thegranddesign – that may depend on the configuration you're using, I imagine. Do you have an example config (minus any secret data) that demonstrates this? If the subnets are not directly referenced, then Terraform may not know the ordering required.
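To illustrate what I mean (a sketch only, with hypothetical names, not necessarily your setup): interpolation is what gives Terraform the dependency edge it uses to order creates and destroys, and a relationship that isn't expressed through interpolation can be declared with depends_on:

# A sketch with hypothetical names. Interpolating the subnet IDs gives
# Terraform an implicit dependency edge, so it knows to destroy the
# subnet group before the subnets it points at:
resource "aws_db_subnet_group" "example" {
  name       = "example"
  subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"]
}

# When there is no attribute to interpolate, the ordering can be
# declared explicitly instead (depends_on takes a list of strings):
resource "aws_db_instance" "example" {
  allocated_storage    = 10
  engine               = "mysql"
  instance_class       = "db.t2.micro"
  username             = "admin"
  password             = "hypothetical"     # placeholder only
  db_subnet_group_name = "example"          # literal string: no dependency edge
  depends_on           = ["aws_db_subnet_group.example"]
}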
Let us know if you can reproduce this!
If you wanted an example: I can't destroy this config:
# Specify the provider and access details
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "${var.aws_region}"
}

# Create a VPC to launch our instances into
resource "aws_vpc" "default" {
  cidr_block = "10.0.0.0/16"
}

# Create an internet gateway to give our subnet access to the outside world
resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
}

# Grant the VPC internet access on its main route table
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_vpc.default.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.default.id}"
}

# Create a subnet to launch our instances into
resource "aws_subnet" "default" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
  name        = "terraform_example_elb"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "terraform_example"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from the VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_elb" "web" {
  name            = "terraform-example-elb"
  subnets         = ["${aws_subnet.default.id}"]
  security_groups = ["${aws_security_group.elb.id}"]
  instances       = ["${aws_instance.web.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

# resource "aws_key_pair" "auth" {
#   key_name   = "${var.aws_key_name}"
#   public_key = "${file(var.aws_public_key_file_path)}"
# }

resource "aws_instance" "web" {
  # The connection block tells our provisioner how to
  # communicate with the resource (instance)
  connection {
    # The default username for our AMI
    user     = "ubuntu"
    key_file = "${var.aws_key_path}"
    # The connection will use the local SSH agent for authentication.
  }

  instance_type = "m1.small"

  # Lookup the correct AMI based on the region
  # we specified
  ami = "${lookup(var.aws_amis, var.aws_region)}"

  # The name of our SSH keypair we created above.
  # key_name = "${aws_key_pair.auth.id}"
  key_name = "${var.aws_key_name}"

  # Our Security group to allow HTTP and SSH access
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  # We're going to launch into the same subnet as our ELB. In a production
  # environment it's more common to have a separate private subnet for
  # backend instances.
  subnet_id = "${aws_subnet.default.id}"

  # We run a remote provisioner on the instance after creating it.
  # In this case, we just install nginx and start it. By default,
  # this should be on port 80
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
      "sudo apt-get -y install nginx",
      "sudo service nginx start",
    ]
  }
}
Hey Friends –
I apologize for the delay in response.
@bitemyapp – sorry, I wasn't able to reproduce your example on the most recent version of Terraform (v0.6.16 at time of writing). I apologize for not getting to this sooner, but I believe the source of the issue has since been addressed.
@thegranddesign – apologies to you, too, for taking so long. I can't reproduce what you're describing, but again perhaps the bug has been addressed. I applied this config:
resource "aws_db_subnet_group" "rds_one" {
name = "rds_one_db"
description = "db subnets for rds_one"
subnet_ids = ["${aws_subnet.main_east.id}", "${aws_subnet.other_east.id}"]
tags {
Name = "testing"
}
}
resource "aws_subnet" "main_east" {
vpc_id = "${aws_vpc.foo_east.id}"
availability_zone = "us-west-2a"
cidr_block = "10.0.1.0/24"
tags {
Name = "subnet-count-test"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_subnet" "other_east" {
vpc_id = "${aws_vpc.foo_east.id}"
availability_zone = "us-west-2b"
cidr_block = "10.0.2.0/24"
tags {
Name = "subnet-count-test-other"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_vpc" "foo_east" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
tags {
Name = "rds-subnet-vpc"
}
lifecycle {
create_before_destroy = true
}
}
It planned, applied, and destroyed correctly:
$ tf plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution
plan.

Path: create.tfplan

+ aws_db_subnet_group.rds_one
    arn:          "<computed>"
    description:  "db subnets for rds_one"
    name:         "rds_one_db"
    subnet_ids.#: "<computed>"
    tags.%:       "1"
    tags.Name:    "testing"

+ aws_subnet.main_east
    availability_zone:       "us-west-2a"
    cidr_block:              "10.0.1.0/24"
    map_public_ip_on_launch: "false"
    tags.%:                  "1"
    tags.Name:               "subnet-count-test"
    vpc_id:                  "${aws_vpc.foo_east.id}"

+ aws_subnet.other_east
    availability_zone:       "us-west-2b"
    cidr_block:              "10.0.2.0/24"
    map_public_ip_on_launch: "false"
    tags.%:                  "1"
    tags.Name:               "subnet-count-test-other"
    vpc_id:                  "${aws_vpc.foo_east.id}"

+ aws_vpc.foo_east
    cidr_block:                "10.0.0.0/16"
    default_network_acl_id:    "<computed>"
    default_security_group_id: "<computed>"
    dhcp_options_id:           "<computed>"
    enable_classiclink:        "<computed>"
    enable_dns_hostnames:      "true"
    enable_dns_support:        "<computed>"
    instance_tenancy:          "<computed>"
    main_route_table_id:       "<computed>"
    tags.%:                    "1"
    tags.Name:                 "rds-subnet-vpc"

Plan: 4 to add, 0 to change, 0 to destroy.
$ tf apply
aws_vpc.foo_east: Creating...
  cidr_block:                "" => "10.0.0.0/16"
  default_network_acl_id:    "" => "<computed>"
  default_security_group_id: "" => "<computed>"
  dhcp_options_id:           "" => "<computed>"
  enable_classiclink:        "" => "<computed>"
  enable_dns_hostnames:      "" => "true"
  enable_dns_support:        "" => "<computed>"
  instance_tenancy:          "" => "<computed>"
  main_route_table_id:       "" => "<computed>"
  tags.%:                    "" => "1"
  tags.Name:                 "" => "rds-subnet-vpc"
aws_vpc.foo_east: Still creating... (10s elapsed)
aws_vpc.foo_east: Creation complete
aws_subnet.other_east: Creating...
  availability_zone:       "" => "us-west-2b"
  cidr_block:              "" => "10.0.2.0/24"
  map_public_ip_on_launch: "" => "false"
  tags.%:                  "" => "1"
  tags.Name:               "" => "subnet-count-test-other"
  vpc_id:                  "" => "vpc-0ae17a6e"
aws_subnet.main_east: Creating...
  availability_zone:       "" => "us-west-2a"
  cidr_block:              "" => "10.0.1.0/24"
  map_public_ip_on_launch: "" => "false"
  tags.%:                  "" => "1"
  tags.Name:               "" => "subnet-count-test"
  vpc_id:                  "" => "vpc-0ae17a6e"
aws_subnet.other_east: Creation complete
aws_subnet.main_east: Creation complete
aws_db_subnet_group.rds_one: Creating...
  arn:                   "" => "<computed>"
  description:           "" => "db subnets for rds_one"
  name:                  "" => "rds_one_db"
  subnet_ids.#:          "" => "2"
  subnet_ids.3994863395: "" => "subnet-2cb82548"
  subnet_ids.71858993:   "" => "subnet-878236f1"
  tags.%:                "" => "1"
  tags.Name:             "" => "testing"
aws_db_subnet_group.rds_one: Creation complete

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
$ tf destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
aws_vpc.foo_east: Refreshing state... (ID: vpc-0ae17a6e)
aws_subnet.other_east: Refreshing state... (ID: subnet-2cb82548)
aws_subnet.main_east: Refreshing state... (ID: subnet-878236f1)
aws_db_subnet_group.rds_one: Refreshing state... (ID: rds_one_db)
aws_db_subnet_group.rds_one: Destroying...
aws_db_subnet_group.rds_one: Destruction complete
aws_subnet.other_east: Destroying...
aws_subnet.main_east: Destroying...
aws_subnet.main_east: Destruction complete
aws_subnet.other_east: Destruction complete
aws_vpc.foo_east: Destroying...
aws_vpc.foo_east: Destruction complete
Apply complete! Resources: 0 added, 0 changed, 4 destroyed.
You can see from the output, too, that it correctly destroys the subnet group first. Perhaps your configuration was not like mine and did not reference the subnets via interpolation?
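For contrast, a hypothetical config like this would hide the relationship from Terraform, because literal IDs create no edge in the dependency graph:

resource "aws_db_subnet_group" "rds_one" {
  name        = "rds_one_db"
  description = "db subnets for rds_one"
  # Hard-coded subnet IDs (hypothetical values) instead of
  # "${aws_subnet.main_east.id}", so Terraform has no reason to
  # destroy this group before the subnets.
  subnet_ids  = ["subnet-878236f1", "subnet-2cb82548"]
}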
Let me know if you have any more information; otherwise I'll close this. Thanks, and sorry for the delay!
@catsby thanks for the response. I don't have any create_before_destroy in my config. Would that make a difference? If so, why should I have to know which resources need to be "created before destroyed"? Shouldn't the algorithm handle that?
Hey @thegranddesign –
> I don't have any create_before_destroy in my config. Would that make a difference?
No, I don't believe it should. This config works as well:
resource "aws_db_subnet_group" "rds_one" {
name = "rds_one_db"
description = "db subnets for rds_one"
subnet_ids = ["${aws_subnet.main_east.id}", "${aws_subnet.other_east.id}"]
}
resource "aws_subnet" "main_east" {
vpc_id = "${aws_vpc.foo_east.id}"
availability_zone = "us-west-2a"
cidr_block = "10.0.1.0/24"
}
resource "aws_subnet" "other_east" {
vpc_id = "${aws_vpc.foo_east.id}"
availability_zone = "us-west-2b"
cidr_block = "10.0.2.0/24"
}
resource "aws_vpc" "foo_east" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
}
The destroy plan (screenshot omitted) shows that the db subnet group is the first thing that gets destroyed.
The destroy works as expected:
aws_vpc.foo_east: Refreshing state... (ID: vpc-22bd2646)
aws_subnet.main_east: Refreshing state... (ID: subnet-2e61d558)
aws_subnet.other_east: Refreshing state... (ID: subnet-b85ec3dc)
aws_db_subnet_group.rds_one: Refreshing state... (ID: rds_one_db)
aws_db_subnet_group.rds_one: Destroying...
aws_db_subnet_group.rds_one: Destruction complete
aws_subnet.main_east: Destroying...
aws_subnet.other_east: Destroying...
aws_subnet.other_east: Destruction complete
aws_subnet.main_east: Destruction complete
aws_vpc.foo_east: Destroying...
aws_vpc.foo_east: Destruction complete
Apply complete! Resources: 0 added, 0 changed, 4 destroyed.
Hello,
I have the same issue.
aws_security_group.sec_db_rancher: Destruction complete
Error applying plan:
1 error(s) occurred:
* aws_db_subnet_group.db_rancher_subnet_group (destroy): 1 error(s) occurred:
* aws_db_subnet_group.db_rancher_subnet_group: InvalidDBSubnetGroupStateFault: Cannot delete the subnet group 'db_rancher_dev_subnet_group' because at least one database instance: terraform-00355d5f13a33ff6e3307b8f3a is still using it.
status code: 400, request id: 62b559bd-67ab-11e7-ad62-0b399f27c7f0
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Indeed, my database takes a moment to be deleted.
terraform -version
Terraform v0.9.11
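One thing worth checking, sketched here with hypothetical attribute values: if the database instance names its subnet group as a literal string, Terraform has no dependency edge and may delete the group while the instance is still being torn down. Referencing the group via interpolation serializes the two destroys:

resource "aws_db_instance" "rancher" {
  allocated_storage = 10
  engine            = "mysql"
  instance_class    = "db.t2.micro"
  username          = "admin"
  password          = "hypothetical"   # placeholder only
  # Interpolating the group name (rather than writing
  # "db_rancher_dev_subnet_group" literally) makes Terraform wait for
  # this instance to be destroyed before deleting the subnet group.
  db_subnet_group_name = "${aws_db_subnet_group.db_rancher_subnet_group.name}"
}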
@catsby I am facing the same issue; is this being tracked under another issue number?
terraform destroy -force -no-color
data.aws_route53_zone.selected: Refreshing state...
data.aws_subnet.s_subnet_2: Refreshing state...
data.aws_subnet.s_priv_1: Refreshing state...
data.aws_vpc.s_vpc: Refreshing state...
data.aws_subnet.s_priv_2: Refreshing state...
data.aws_subnet.s_pubdmz_1: Refreshing state...
data.aws_subnet.s_subnet_1: Refreshing state...
data.aws_subnet.s_pubdmz_2: Refreshing state...
aws_db_subnet_group.default: Refreshing state... (ID: ecom-dedev)
aws_security_group.database: Refreshing state... (ID: sg-b976bbd1)
aws_rds_cluster.db: Refreshing state... (ID: ecom-dedev)
aws_route53_record.dns: Refreshing state... (ID: Z2VH8H52WY1UZ6_ecom-dedev-db_CNAME)
aws_rds_cluster_instance.db: Refreshing state... (ID: ecom-dedev-0)
aws_route53_record.dns: Destroying... (ID: Z2VH8H52WY1UZ6_ecom-dedev-db_CNAME)
aws_db_subnet_group.default: Destroying... (ID: ecom-dedev)
aws_rds_cluster_instance.db: Destroying... (ID: ecom-dedev-0)
aws_route53_record.dns: Still destroying... (ID: Z2VH8H52WY1UZ6_ecom-dedev-db_CNAME, 10s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 10s elapsed)
aws_route53_record.dns: Still destroying... (ID: Z2VH8H52WY1UZ6_ecom-dedev-db_CNAME, 20s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 20s elapsed)
aws_route53_record.dns: Still destroying... (ID: Z2VH8H52WY1UZ6_ecom-dedev-db_CNAME, 30s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 30s elapsed)
aws_route53_record.dns: Still destroying... (ID: Z2VH8H52WY1UZ6_ecom-dedev-db_CNAME, 40s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 40s elapsed)
aws_route53_record.dns: Destruction complete after 47s
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 50s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 1m0s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 1m10s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 1m20s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 1m30s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 1m40s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 1m50s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 2m0s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 2m10s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 2m20s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 2m30s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 2m40s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 2m50s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 3m0s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 3m10s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 3m20s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 3m30s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 3m40s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 3m50s elapsed)
aws_rds_cluster_instance.db: Still destroying... (ID: ecom-dedev-0, 4m0s elapsed)
aws_rds_cluster_instance.db: Destruction complete after 4m2s
aws_rds_cluster.db: Destroying... (ID: ecom-dedev)
aws_rds_cluster.db: Still destroying... (ID: ecom-dedev, 10s elapsed)
aws_rds_cluster.db: Still destroying... (ID: ecom-dedev, 20s elapsed)
aws_rds_cluster.db: Still destroying... (ID: ecom-dedev, 30s elapsed)
aws_rds_cluster.db: Still destroying... (ID: ecom-dedev, 40s elapsed)
aws_rds_cluster.db: Still destroying... (ID: ecom-dedev, 50s elapsed)
aws_rds_cluster.db: Destruction complete after 50s
aws_security_group.database: Destroying... (ID: sg-b976bbd1)
aws_security_group.database: Destruction complete after 0s
Error applying plan:

1 error(s) occurred:

* aws_db_subnet_group.default (destroy): 1 error(s) occurred:

* aws_db_subnet_group.default: InvalidDBSubnetGroupStateFault: Cannot delete the subnet group 'ecom-dedev' because at least one database instance: ecom-dedev-0 is still using it.
    status code: 400, request id: 9d0ef1d6-5bfc-4910-9f66-7fb09ccaecb5
When I check the AWS console there are no database instances, and the cluster is gone as well.
Any tips on a workaround until this is fixed would be helpful. We are using Terraform v0.10.7.
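One possible workaround sketch until this is fixed, using the resource addresses from the log above: destroy the database resources first with -target so they are fully gone, then run the full destroy:

terraform destroy -target=aws_rds_cluster_instance.db -target=aws_rds_cluster.db
terraform destroy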
Has this even been resolved? I'm facing the same issue.
tf v0.11.1
@ntman4real this is being tracked here now:
https://github.com/terraform-providers/terraform-provider-aws/issues/118
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.