_This issue was originally opened by @anosulchik as hashicorp/terraform#5417. It was migrated here as part of the provider split. The original body of the issue is below._
Hi,
I've noticed that Terraform (I'm using 0.6.9) ignores the `skip_final_snapshot` argument on an RDS DB instance resource that I need to remove:
```hcl
resource "aws_db_instance" "db_instance" {
  identifier             = "tf-test-db"
  allocated_storage      = "5"
  multi_az               = "false"
  engine                 = "mysql"
  instance_class         = "db.t2.small"
  username               = "admin"
  password               = "password123"
  snapshot_identifier    = "rds:xxx-xxx-2016-03-02-09-09"
  db_subnet_group_name   = "default"
  vpc_security_group_ids = ["sg-0000000"]
  storage_type           = "gp2"
  skip_final_snapshot    = true
}
```
I'm getting the following error when trying to remove the DB instance:

Adding a `final_snapshot_identifier` argument doesn't help -- the result is the same: Terraform fails.
tfstate (sensitive data is removed):

```json
"aws_db_instance.db_instance": {
  "type": "aws_db_instance",
  "primary": {
    "id": "tf-test-db",
    "attributes": {
      "address": "tf-test-db.xxx.us-east-1.rds.amazonaws.com",
      "allocated_storage": "5",
      "arn": "arn:aws:rds:us-east-1:xxx:db:tf-test-db",
      "auto_minor_version_upgrade": "true",
      "availability_zone": "us-east-1e",
      "backup_retention_period": "1",
      "backup_window": "09:07-09:37",
      "copy_tags_to_snapshot": "false",
      "db_subnet_group_name": "default",
      "endpoint": "tf-test-db.xxx.us-east-1.rds.amazonaws.com:3306",
      "engine": "mysql",
      "engine_version": "5.6.23",
      "id": "tf-test-db",
      "instance_class": "db.t2.small",
      "license_model": "general-public-license",
      "maintenance_window": "sun:04:16-sun:04:46",
      "multi_az": "false",
      "name": "ebdb",
      "parameter_group_name": "default.mysql5.6",
      "password": "admin",
      "port": "3306",
      "replicas.#": "0",
      "replicate_source_db": "",
      "security_group_names.#": "0",
      "status": "available",
      "storage_encrypted": "false",
      "storage_type": "gp2",
      "tags.#": "0",
      "username": "admin",
      "vpc_security_group_ids.#": "1",
      "vpc_security_group_ids.2852096327": "sg-000000"
    }
  }
}
```
I just hit this issue, and as mentioned in the previous issues I'm pretty sure this is a workflow issue.

In my case I was adding the `identifier` property, which is optional and forces a new resource. Having specified neither `final_snapshot_identifier` nor `skip_final_snapshot`, the apply could not complete and exited with the following error:
```
Error applying plan:

1 error(s) occurred:

* aws_db_instance.default (destroy): 1 error(s) occurred:

* aws_db_instance.default: DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
```
The solution was:

1. Add `final_snapshot_identifier = "${var.customer}-${var.project}-${var.environment}-${md5(timestamp())}"`
2. `terraform apply`
3. Change `identifier = "${var.customer}-${var.project}-${var.environment}"`, which should force a new resource and destroy the current database instance
4. `terraform apply`

Basically, you cannot add configuration that forces a new resource without either specifying `final_snapshot_identifier` or setting `skip_final_snapshot` to true.
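As a minimal sketch (resource name and values hypothetical), a destroy-safe configuration carries one of the two arguments *before* any change that forces a new resource is planned:

```hcl
resource "aws_db_instance" "example" {
  identifier        = "tf-test-db"
  engine            = "mysql"
  instance_class    = "db.t2.small"
  allocated_storage = 5
  username          = "admin"
  password          = "password123"

  # Option 1: skip the final snapshot entirely
  skip_final_snapshot = true

  # Option 2: name the final snapshot instead (use instead of option 1)
  # final_snapshot_identifier = "tf-test-db-final"
}
```

Either option has to be applied to the instance first; adding it in the same run as the ForceNew change is too late.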
@houqp, in https://github.com/hashicorp/terraform/issues/5417#issuecomment-291695456, might be on to a solution here: throwing an error in the plan and apply phases if this condition is met.
Similarly to the previous comment, you can specify `skip_final_snapshot = true`, apply, then destroy.
IMO, Terraform should not let you create a database that assumes by default you will create a final snapshot, while also failing to require by default that you specify the identifier (which AWS requires). I don't care which way it goes, but the defaults seem to conflict out of the gate, and the user only finds out (from an AWS error) the first time they go to delete.
OTOH, apparently AWS lets you get into this state, and if those are AWS defaults then it's not really Terraform's fault/problem, except maybe to warn the user, since it's a known issue that's impacted lots of people. (Still just my opinion. ;)
Facing the issue with v0.9.8. It's failing even though we have passed `skip_final_snapshot = true` and a value for `final_snapshot_identifier`.
I faced this error on 0.10.8.

Resource definition:
```hcl
resource "aws_db_instance" "rds_sc_node" {
  count                      = "${var.sc_pg_count}"
  identifier                 = "${var.sc_pg_count == 1 ? format("db-%s-%s", var.env, var.sc_pg_db_name) : format("db-%s-%s-%01d", var.env, var.sc_pg_db_name, count.index + 1)}"
  publicly_accessible        = "${var.sc_pg_publicly_accessible}"
  vpc_security_group_ids     = ["${module.security_groups.rds}"]
  db_subnet_group_name       = "${module.subnet_group.name}"
  engine                     = "${var.sc_pg_engine}"
  instance_class             = "${var.sc_pg_instance_class}"
  storage_type               = "${var.sc_pg_storage_type}"
  iops                       = "${var.sc_pg_storage_type == "gp2" ? 0 : var.sc_pg_storage_iops}"
  backup_retention_period    = "${var.sc_pg_backup_retention_period}"
  auto_minor_version_upgrade = "false"
  multi_az                   = "${var.sc_pg_multi_az}"
  storage_encrypted          = "false"
  name                       = "${replace(var.sc_pg_db_name, "-", "")}"
  username                   = "${var.sc_pg_username}"
  password                   = "${var.sc_pg_password}"
  parameter_group_name       = "${element(aws_db_parameter_group.rds_sc_node_parameter_group.*.name, count.index)}"
  skip_final_snapshot        = "true"
  snapshot_identifier        = "${element(var.sc_pg_snapshot_identifiers, count.index)}"

  tags {
    Name = "${var.sc_pg_count == 1 ? format("db-%s-%s", var.env, var.sc_pg_db_name) : format("db-%s-%s-%01d", var.env, var.sc_pg_db_name, count.index + 1)}"
  }
}
```
output:
```
terraform init && terraform destroy --var-file=vars/vars.tfvars -target aws_db_instance.rds_sc_node
Downloading modules...
Initializing the backend...
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
aws_db_parameter_group.rds_sc_node_parameter_group: Refreshing state... (ID: db-instance-1)
aws_vpc.vpc: Refreshing state... (ID: vpc-90c0d5f8)
aws_subnet.public[2]: Refreshing state... (ID: subnet-b820f4f5)
aws_security_group.rds: Refreshing state... (ID: sg-b560d5df)
aws_db_subnet_group.default: Refreshing state... (ID: frankfurt-1)
aws_db_instance.rds_sc_node: Refreshing state... (ID: db-instance-1)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
Terraform will perform the following actions:
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_db_instance.rds_sc_node: Destroying... (ID:db-instance-1)
Error: Error applying plan:
1 error(s) occurred:
* aws_db_instance.rds_sc_node (destroy): 1 error(s) occurred:
* aws_db_instance.rds_sc_node: DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
UPD1: the trouble still exists even after adding `final_snapshot_identifier` to the resource definition.

UPD2: the trouble still exists after:
I've just had this issue in an environment where I cannot delete via the console, but my Terraform key has full rights, so I was stuck using Terraform to get out of this. I had tainted my database for deletion and then spent ages trying to apply various settings such as `skip_final_snapshot`, but you MUST get that setting applied before even considering deletion. So my solution was to untaint the resource, apply the `skip_final_snapshot = true` setting, then taint again.

I did not have to edit state.

If you've ever done this via the console, you choose this setting right at the end, so it almost feels intuitive that you should be able to add that setting during deletion -- but it's not really Terraform's fault.
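Sketched as a command sequence (the resource address `aws_db_instance.default` is hypothetical), the untaint/apply/taint workaround described above looks like:

```
terraform untaint aws_db_instance.default   # cancel the pending deletion
# edit the .tf file: add skip_final_snapshot = true to the resource
terraform apply                             # push the setting into state/AWS
terraform taint aws_db_instance.default     # mark it for deletion again
terraform apply                             # the destroy now succeeds
```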
Still seeing this in v0.11.7 with AWS provider v1.20.0.

Workaround I've been able to use:

1. Set `skip_final_snapshot` to true in the tf file
2. `terraform apply` -- should update the RDS instance
3. `terraform apply` will delete and re-create (or just use `terraform destroy`, which also seems to work)

I'm running into this today. I've tried the various solutions suggested in this thread, but each change to the db-instance wants to force a new resource, which is blocked by the same error.
To expand a little on what @pmkuny mentioned as a workaround, if you're just looking at doing a `terraform destroy`:

1. Add `skip_final_snapshot = true` to your RDS resource
2. `terraform apply -target=aws_db_instance.my-db`
3. `terraform destroy` again

It's a little safer than changing your state file by hand.
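The steps above can be sketched as (resource address `aws_db_instance.my-db` hypothetical):

```
# 1. add skip_final_snapshot = true to the aws_db_instance block
terraform apply -target=aws_db_instance.my-db   # apply only that change
terraform destroy                               # the destroy now succeeds
```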
@pastephens, possibly another workaround, assuming the problem lies with `skip_final_snapshot` not being in the state file and it isn't a setting in AWS RDS:

1. `terraform state rm aws_db_instance.my-db`
2. Add `skip_final_snapshot = true` to your RDS resource
3. `terraform import aws_db_instance.my-db my-db`
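Sketched out (resource address and instance identifier hypothetical), and assuming the hypothesis above holds, the sequence would be:

```
terraform state rm aws_db_instance.my-db       # drop it from state; AWS is untouched
# edit the .tf file: add skip_final_snapshot = true to the resource
terraform import aws_db_instance.my-db my-db   # re-import under the new config
terraform destroy                              # destroy should now succeed
```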