Hi,
I've noticed that Terraform (I'm using 0.6.9) ignores the `skip_final_snapshot` argument for an RDS DB instance resource that I need to remove:
```hcl
resource "aws_db_instance" "db_instance" {
  identifier             = "tf-test-db"
  allocated_storage      = "5"
  multi_az               = "false"
  engine                 = "mysql"
  instance_class         = "db.t2.small"
  username               = "admin"
  password               = "password123"
  snapshot_identifier    = "rds:xxx-xxx-2016-03-02-09-09"
  db_subnet_group_name   = "default"
  vpc_security_group_ids = ["sg-0000000"]
  storage_type           = "gp2"
  skip_final_snapshot    = true
}
```
I'm getting the following error trying to remove the DB instance:
Adding a `final_snapshot_identifier` argument doesn't help -- the result is the same, Terraform fails.
tfstate (sensitive data is removed):

```json
"aws_db_instance.db_instance": {
    "type": "aws_db_instance",
    "primary": {
        "id": "tf-test-db",
        "attributes": {
            "address": "tf-test-db.xxx.us-east-1.rds.amazonaws.com",
            "allocated_storage": "5",
            "arn": "arn:aws:rds:us-east-1:xxx:db:tf-test-db",
            "auto_minor_version_upgrade": "true",
            "availability_zone": "us-east-1e",
            "backup_retention_period": "1",
            "backup_window": "09:07-09:37",
            "copy_tags_to_snapshot": "false",
            "db_subnet_group_name": "default",
            "endpoint": "tf-test-db.xxx.us-east-1.rds.amazonaws.com:3306",
            "engine": "mysql",
            "engine_version": "5.6.23",
            "id": "tf-test-db",
            "instance_class": "db.t2.small",
            "license_model": "general-public-license",
            "maintenance_window": "sun:04:16-sun:04:46",
            "multi_az": "false",
            "name": "ebdb",
            "parameter_group_name": "default.mysql5.6",
            "password": "admin",
            "port": "3306",
            "replicas.#": "0",
            "replicate_source_db": "",
            "security_group_names.#": "0",
            "status": "available",
            "storage_encrypted": "false",
            "storage_type": "gp2",
            "tags.#": "0",
            "username": "admin",
            "vpc_security_group_ids.#": "1",
            "vpc_security_group_ids.2852096327": "sg-000000"
        }
    }
}
```
I just hit this same bug using v0.6.14
I had to wrestle with it for a whole day yesterday until I upgraded to 0.6.14. Then I blew everything away in AWS by hand, deleted the state and backup files and reran Terraform with this in the rds definition:
```hcl
snapshot_identifier = "some-snap"
skip_final_snapshot = true
```
And it finally worked. Aggravating bug though.
I'm also experiencing this issue in 0.6.15. When Terraform successfully creates my RDS stack, it's also able to delete it successfully. When the stack fails to create due to any sort of error, I run into this bug.
I notice that the "skip_final_snapshot" parameter appears in the terraform.tfstate file when the stack creation is successful, but it's missing when the rds instance was created successfully but the stack failed on a later resource.
Looks like `terraform destroy` looks for the `skip_final_snapshot` option in the state file. After I manually added `"skip_final_snapshot": "true"` to the state file, `terraform destroy` worked. However I don't like messing with the state file by hand.
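For anyone reaching for this workaround: the manual edit amounts to adding one key to the instance's `attributes` map in `terraform.tfstate`. A sketch against the 0.6-era state format (the surrounding keys and values are illustrative, not taken from any real state file):

```json
"aws_db_instance.db_instance": {
    "type": "aws_db_instance",
    "primary": {
        "id": "tf-test-db",
        "attributes": {
            "identifier": "tf-test-db",
            "skip_final_snapshot": "true"
        }
    }
}
```

Note the value is the string `"true"`, not a bare boolean, matching how the state file stores all attributes as strings.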
Thank you @zihaoyu 👍 , I was able to delete my RDS by manually editing the state file.
Hi all
Is this still an issue? We currently have nightly tests running that ensure snapshots are not created on deletion (to stop resource leaking) - they work as expected
Paul
Hey Friends – I believe this issue has been fixed since the last version it was reported against (v0.6.15). Please let us know if you're still hitting it!
Hit this issue in v0.7.7 and had to use the workaround described by @zihaoyu above.
Hit this issue in v0.8.1 with PostgreSQL RDS instance on AWS.
I'm reopening this because Sídar López has reportedly encountered the same problem, as reported on Gitter. I've asked him to leave a note here with some details.
It does look like either this didn't actually get fixed or has regressed in the mean time.
Using Terraform v0.8.2
with this configuration:
```hcl
resource "aws_db_instance" "comp_mysql_rds" {
  identifier                = "${var.aws_env_name}-${var.identifier}"
  allocated_storage         = "${var.storage}"
  engine                    = "${var.engine}"
  engine_version            = "${lookup(var.engine_version, var.engine)}"
  instance_class            = "${var.instance_class}"
  username                  = "${var.username}"
  password                  = "${var.password}"
  vpc_security_group_ids    = ["${var.vpc_sg_id}", "${var.private_sg_id}"]
  db_subnet_group_name      = "${aws_db_subnet_group.comp_mysql_db_subn_grp.id}"
  multi_az                  = "${var.rds_is_multi_az}"
  publicly_accessible       = true
  parameter_group_name      = "${aws_db_parameter_group.default.name}"
  skip_final_snapshot       = false
  final_snapshot_identifier = "${var.aws_env_name}-final-snapshot-${md5(timestamp())}"
  snapshot_identifier       = "${var.snapshot_identifier}"
}
```
it throws errors. The first run of `terraform apply` shows:

```
* aws_db_instance.comp_mysql_rds: Error modifying DB Instance qa-mydb-rds: InvalidParameterCombination: You cannot move a DB instance with Multi-Az enabled to a VPC
```

A second run of `terraform apply`, without changing anything in the configuration, shows:

```
* aws_db_instance.comp_mysql_rds: DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
```
The issue is appearing on v0.8.3 as well. While executing `terraform plan` I receive the following:

```
skip_final_snapshot: "" => "true"
snapshot_identifier: "" => "arn:aws:rds:MY_AMI_ARN" (forces new resource)
```

Once I execute `terraform apply`, I receive the following:

```
Error applying plan:

1 error(s) occurred:

* aws_db_instance.postgres: DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
```
Seen in 0.8.6.
+1 in 0.8.8
+1 in 0.9.0
Still there in 0.9.1
Also the doc is wrong where it says:

> final_snapshot_identifier - (Optional) The name of your final DB snapshot when this DB instance is deleted. If omitted, no final snapshot will be made.

If you omit that, and do not specify `skip_final_snapshot`, then it crashes with:

```
* aws_db_instance.main: DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
```
Hi all
Not sure if you noticed the changelog for 0.9 - but, by default, we now TAKE A FINAL SNAPSHOT.
Therefore, `skip_final_snapshot` now defaults to false. If you want to take a final snapshot, then you will need to specify a `final_snapshot_identifier`.
If you rely on the default and do not specify the `final_snapshot_identifier`, then you will get the error:

```
DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
```
Paul
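To spell out the two configurations that are valid under the 0.9+ defaults (a sketch; the resource names and snapshot identifier are placeholders, not from this thread):

```hcl
# Option 1: skip the final snapshot entirely, so the instance
# can later be destroyed without naming a snapshot.
resource "aws_db_instance" "no_snapshot" {
  # ...
  skip_final_snapshot = true
}

# Option 2: keep the 0.9+ default (skip_final_snapshot = false),
# which means a final snapshot IS taken on deletion, so it must be named.
resource "aws_db_instance" "with_snapshot" {
  # ...
  skip_final_snapshot       = false
  final_snapshot_identifier = "with-snapshot-final"
}
```

Anything else - the default behaviour with no `final_snapshot_identifier` - produces the `FinalSnapshotIdentifier is required` error.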
I'm getting this error with 0.9.1 even though I have `skip_final_snapshot` set to true.
```hcl
resource "aws_db_instance" "default" {
  # have the upper here because either TF or AWS do this internally
  # and without it we end up cycling the resource constantly
  name                    = "${upper(var.environment)}"
  allocated_storage       = 10
  engine                  = "oracle-se1"
  instance_class          = "${var.oracle_instance_type}"
  identifier              = "mydb-${var.environment}"
  username                = "${var.oracle_username}"
  password                = "${var.oracle_password}"
  db_subnet_group_name    = "${aws_db_subnet_group.oracle.name}"
  backup_retention_period = "${var.oracle_backup_retention}"
  backup_window           = "${var.oracle_backup_window}"
  license_model           = "license-included"
  port                    = "1521"
  publicly_accessible     = true
  snapshot_identifier     = "snapshot_name"
  skip_final_snapshot     = true

  tags {
    bill_code = "${var.bill_code}"
  }
}
```
The above results in:

```
aws_db_instance.default: DB Instance FinalSnapshotIdentifier is required when a final snapshot is required
```

which seems wrong to me.
Actually, it still seems to fail even when I run with the above config and add `final_snapshot_identifier` to the setup. So now I don't seem to be able to create my db at all.
I get this error even with `skip_final_snapshot = true` in 0.9.1.
Does anyone have a workaround for this bug? Right now it seems creating databases with the affected settings is impossible.
We can create a database with v0.9.1. We use this for Aurora:
```hcl
resource "aws_rds_cluster" "main" {
  apply_immediately            = true
  backup_retention_period      = 30
  cluster_identifier           = "${var.name}-01"
  database_name                = "${var.name}"
  vpc_security_group_ids       = ["${concat(list(aws_security_group.database.id), var.peers, list(var.remote))}"]
  db_subnet_group_name         = "${aws_db_subnet_group.main.id}"
  master_username              = "${var.credentials["username"]}"
  master_password              = "${var.credentials["password"]}"
  port                         = "${var.ports["database"]}"
  preferred_backup_window      = "00:00-00:30"
  preferred_maintenance_window = "sat:01:00-sat:01:30"
  skip_final_snapshot          = false
  final_snapshot_identifier    = "${var.name}-final"
  storage_encrypted            = "${var.encrypt}"

  # availability_zones = ["${var.zone}"]
  # allocated_storage  = "${var.storage}"
}

resource "aws_rds_cluster_instance" "cluster_instances" {
  apply_immediately    = true
  cluster_identifier   = "${aws_rds_cluster.main.id}"
  count                = "${var.count}"
  db_subnet_group_name = "${aws_db_subnet_group.main.id}"
  depends_on           = ["aws_security_group.database"]
  identifier           = "${var.name}-${count.index}"
  instance_class       = "db.${var.instance}"
  publicly_accessible  = true

  provisioner "local-exec" {
    command = "${var.provision} --endpoint \"${aws_rds_cluster.main.endpoint}\" --user \"${var.credentials["username"]}\" --password \"${var.credentials["password"]}\" > provision.txt"
  }
}
```
and this for RDS:
```hcl
resource "aws_db_instance" "main" {
  name                       = "${var.name}"
  identifier                 = "${var.name}-01"
  username                   = "${var.credentials["username"]}"
  password                   = "${var.credentials["password"]}"
  engine                     = "${var.config["engine"]}"
  engine_version             = "${var.config["version"]}"
  instance_class             = "db.${var.instance}"
  parameter_group_name       = "${var.parameters}"
  allocated_storage          = "${var.storage}"
  storage_encrypted          = "${var.encrypt}"
  port                       = "${var.ports["database"]}"
  db_subnet_group_name       = "${aws_db_subnet_group.main.id}"
  vpc_security_group_ids     = ["${concat(list(aws_security_group.database.id), var.peers, list(var.remote))}"]
  multi_az                   = "${var.multi_az}"
  availability_zone          = "${var.zone}"
  publicly_accessible        = true
  backup_retention_period    = 14
  backup_window              = "23:30-00:00"
  maintenance_window         = "sat:00:00-sat:00:30"
  auto_minor_version_upgrade = true
  skip_final_snapshot        = true

  # final_snapshot_identifier = "${var.name}-final-{{timestamp}}"

  apply_immediately = true
  depends_on        = ["aws_security_group.database"]

  provisioner "local-exec" {
    command = "${var.provision} --endpoint \"${aws_db_instance.main.endpoint}\" --user \"${var.credentials["username"]}\" --password \"${var.credentials["password"]}\" > provision.txt"
  }
}
```
Hit this issue in v0.9.1 and had to use the workaround described by @zihaoyu above.
Terraform v0.9.2, just hit it.
Used the @zihaoyu fix.
I suggest we throw an error during plan and apply when `skip_final_snapshot = false` but `final_snapshot_identifier` is missing.
EDIT: hit by this bug too with 0.9.2 :(
Just hit this on 0.9.2 as well...
I am wondering if this occurs because people are adding `skip_final_snapshot = true` to their tf and then running destroy. I bet you have to `apply` that setting before it will take effect. My guess.
EDIT: Just read the work-around, which is much the same thing.
Def. not a bug. It's a workflow issue. You have to apply the change to the state with Terraform to avoid the manual state file update. Run a plan and you will see:

```
~ aws_redshift_cluster.redshift_dep
    skip_final_snapshot: "false" => "true"
```
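The two-step teardown described above would look like this on the command line (a sketch, assuming `skip_final_snapshot = true` has just been added to the resource in question):

```
terraform plan      # confirm the diff shows: skip_final_snapshot: "false" => "true"
terraform apply     # persist the new value into the state file
terraform destroy   # the delete now reads skip_final_snapshot from state
```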
That's as may be, @kpeder, but if multiple people hit the same condition over a long timespan - and the traffic on this issue would suggest as much - then at the very least a verbose error on the matter might be in order.
@terrafoundry If you think about the logic, Terraform is acting consistently. I don't think a special case would be appropriate where Terraform warns you that you might be misunderstanding how it works. Conversely, if you are getting the expected behaviour and then you get a warning, that could lead to confusion too.
It might be best to just make a note on the RDS documentation page that this changed and that it might cause some weird behavior as you move up between versions of Terraform. I can certainly take a shot at writing a helpful blurb for that page if anybody else thinks that would be a helpful solution
@arsdehnel Has this behaviour changed from before? I didn't notice anybody saying that, although I may have missed it.
I think it's for people that had things created when `skip_final_snapshot` was defaulting to true, and who then make a change to that resource and have problems with Terraform being able to handle it now that it defaults to false.
Nope. This issue definitely comes up even for new resources created with `skip_final_snapshot = true` but that have failed somewhere and need to be deleted. I haven't nailed down exactly how this happens, but I'm clearly seeing it over and over again, even with new resources created on clean states.
+1 on 0.9.4, still a problem.
+1
Just ran into this myself. I can't modify the state file as we're using the remote S3 backend.
Still a problem in 0.9.5, sadly.
as @timstoop mentioned, still an issue on v0.9.5
Hi all
I am not sure what the issue is here - I have the following configuration:
```hcl
resource "aws_db_instance" "test" {
  allocated_storage   = 10
  engine              = "MySQL"
  instance_class      = "db.t1.micro"
  password            = "password"
  username            = "root"
  publicly_accessible = true
  skip_final_snapshot = true

  timeouts {
    create = "30m"
  }
}
```
I am able to create and then destroy an AWS RDS instance. If I want to create an instance and then take a final snapshot before deletion, I do the following:
```hcl
resource "aws_db_instance" "test" {
  allocated_storage         = 10
  engine                    = "MySQL"
  instance_class            = "db.t1.micro"
  password                  = "password"
  username                  = "root"
  publicly_accessible       = true
  final_snapshot_identifier = "my-snapshot-name"

  timeouts {
    create = "30m"
  }
}
```
Can someone confirm if they are doing something different from this? We have a host of DB Instance tests that destroy RDS instances each night - the failure last night is unrelated
Paul
@stack72 create the db instance without specifying values for `final_snapshot_identifier` and `skip_final_snapshot`. In my case on 0.9.4, I had created a mysql instance. Then, without making any further changes, I tried to destroy the instance. It refused to delete because it still wanted a value for `final_snapshot_identifier`, IIRC, and even when I set `skip_final_snapshot=true` it still refused to delete.
I think it was an issue between the state file and Terraform ignoring any changes made to the current active config.
The `skip_final_snapshot` change must be applied first, then the delete must occur in a subsequent apply. It's not a bug, it's workflow. Terraform has no way to know that the modification of the parameter is a dependency for the deletion of the db, and so can't handle the update to `skip_final_snapshot` and the delete in one operation. I've tried numerous times with a separate apply to set `skip_final_snapshot` properly ahead of the delete and have never encountered an issue when this process is followed.
I just tried reproducing what I hit on 0.9.4, but I can't (although I'm also on 0.9.5 now, so I have no idea if that makes a difference). I did this a few weeks ago and don't have what I did for reference; it is quite possible I didn't apply the changes before attempting to destroy, but at least for me it is working without issue now.
FYI, this was closed because it was moved to a different issue here, not because it was completed or fixed yet
zihaoyu's fix still saving the day even in version v0.10.2!
Terraform v0.10.7, the issue's still there.

On version v0.10.7, adding `skip_final_snapshot = true` to the `aws_db_instance` resource in the `.tf` file:

```hcl
resource "aws_db_instance" "mydb" {
  # ...
  skip_final_snapshot = true
}
```

and then running:

```
terraform apply
terraform destroy
```

works.
Probably the documentation should talk about this scenario
@WarFox Did you even read the issue?
same with v0.10.8
The following approach worked for me: set `skip_final_snapshot = true` (it was previously set to `false`) and comment out `snapshot_identifier = "snapshot_name"` in the `resource "aws_db_instance" "db_instance" {...}` block.
Terraform version: v0.10.8
Terraform AWS provider version: 1.6.0
still an issue in terraform v0.11.5 and aws provider v1.11.0
still an issue with Terraform v0.11.5
Please note that this issue is not closed, it is just _moved_ to terraform-providers/terraform-provider-aws#2588.
Still there with 0.11.7
Hi all,
This issue has been moved to the aws provider repository, and any conversation should continue in the new issue.
Since comments on this issue are generating notifications to subscribers, I am going to lock it and encourage you to view the moved issue.
Please continue to open new issues here for any other Terraform Core problems that you come across, and thanks!