I cannot migrate to AWS provider 2.x because Terraform wants to recreate aws_elasticache_cluster resources when I change "availability_zones" (a removed parameter) to "preferred_availability_zones".
How can I migrate to the new parameter without destroying the existing resources?
resource "aws_elasticache_cluster" "ecache_cluster" {
...
az_mode = "single-az"
availability_zones = ["${var.availability_zones[0]}"]
}
changed to:
resource "aws_elasticache_cluster" "ecache_cluster" {
...
az_mode = "single-az"
preferred_availability_zones = ["${var.availability_zones[0]}"]
}
The plan output shows:
-/+ module.ecache.aws_elasticache_cluster.ecache_cluster (new resource required)
id: "stg01-ecache" => <computed> (forces new resource)
apply_immediately: "true" => "true"
availability_zone: "us-west-2a" => "us-west-2a"
availability_zones.#: "1" => "0" (forces new resource)
availability_zones.2487133097: "us-west-2a" => "" (forces new resource)
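One possible alternative (untested here, a sketch only, reusing the resource address and cluster ID from the plan output above) is to drop the resource from state and re-import it, so the state record is rewritten under the provider 2.x schema without the stale availability_zones attribute:
# Drop the stale record from state (the real cluster is untouched)
terraform state rm module.ecache.aws_elasticache_cluster.ecache_cluster
# Re-import the existing cluster using its cluster ID
terraform import module.ecache.aws_elasticache_cluster.ecache_cluster stg01-ecache
Run terraform plan afterwards to confirm no replacement is planned before applying anything.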
Thank you for using Terraform and for opening up this question. Issues on GitHub are intended to be related to bugs or feature requests with the provider codebase. Please use https://discuss.hashicorp.com/c/terraform-providers for community discussions, and questions around Terraform.
If you believe that your issue was miscategorized as a question or closed in error, please create a new issue using one of the following provided templates: bug report or feature request. Please make sure to provide us with the appropriate information so we can best determine how to assist with the given issue.
@tracypholmes it seems that discuss.hashicorp.com is not monitored well by contributors (nobody answers) :(
For me this ticket is critical 😞: we are blocked from upgrading to Terraform AWS provider 2.x by this strange behaviour (it would destroy my current ElastiCache resources).
Could you reopen this discussion and share the question with someone?
I ran into the same issue. I'm trying to migrate from:
resource "aws_elasticache_cluster" "elasticache" {
availability_zones = [ "${var.availability_zones}" ]
}
to:
resource "aws_elasticache_cluster" "elasticache" {
availability_zone = "${var.availability_zones[0]}"
preferred_availability_zones = [ "${var.availability_zones}" ]
}
but plan reports an unexpected change:
-/+ module.elasticache.aws_elasticache_cluster.elasticache (new resource required)
id: "elasticache-stage" => <computed> (forces new resource)
apply_immediately: "" => <computed>
availability_zone: "eu-west-1a" => "eu-west-1a"
availability_zones.#: "1" => "0" (forces new resource)
availability_zones.3953592328: "eu-west-1a" => "" (forces new resource)
Why is the availability_zones configuration emptied? Isn't preferred_availability_zones just the same property mapped to a different name?
@azhurbilo in the meantime I found a workaround, try using lifecycle.ignore_changes:
resource "aws_elasticache_cluster" "elasticache" {
availability_zone = "${var.availability_zones[0]}"
preferred_availability_zones = [ "${var.availability_zones}" ]
lifecycle {
ignore_changes = ["availability_zones"]
}
}
The plan command now exits with just these changes:
~ module.elasticache.aws_elasticache_cluster.elasticache
preferred_availability_zones.#: "" => "1"
preferred_availability_zones.0: "" => "eu-west-1a"
This will not trigger the availability_zones change and won't recreate the resource.
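Applied to the single-AZ configuration from the original question, the workaround would look like this (a minimal sketch, assuming the variable layout from the first comment):
resource "aws_elasticache_cluster" "ecache_cluster" {
  az_mode                      = "single-az"
  preferred_availability_zones = ["${var.availability_zones[0]}"]

  # Keep Terraform from diffing the removed attribute still present in state
  lifecycle {
    ignore_changes = ["availability_zones"]
  }
}
Note that on Terraform 0.12+ the ignore_changes entries are written as bare attribute names (ignore_changes = [availability_zones]) rather than quoted strings.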
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!