When importing an aws_elasticache_replication_group resource whose configuration defines availability_zones, the availability zones are not read into state. As a result, Terraform plans to replace the resource on the next apply, which defeats the purpose of the import.
$ terraform -v
Terraform v0.12.20
+ provider.aws v2.47.0
+ provider.random v2.2.1
+ provider.template v2.1.2
+ provider.tls v2.1.1
variable "azs" {
type = list(string)
}
variable "cluster_id" {}
variable "enable_auth" {
default = true
}
variable "environment" {}
variable "instance_type" {}
variable "name" {}
variable "region" {}
variable "source_sg_id_count" {}
variable "source_sg_ids" {
type = list(string)
}
variable "subnet_ids" {
type = list(string)
}
variable "vpc_id" {}
resource "aws_security_group" "redis" {
name = "${var.name}-${var.cluster_id}-${var.environment}-${var.region}"
description = "${var.name}-${var.cluster_id}-${var.environment}-${var.region}"
vpc_id = var.vpc_id
tags = {
Name = "${var.name}-${var.cluster_id}-${var.environment}-${var.region}"
}
}
resource "aws_security_group_rule" "redis-allow-ingress-tcp-6379" {
count = var.source_sg_id_count
protocol = "tcp"
type = "ingress"
source_security_group_id = var.source_sg_ids[count.index]
from_port = 6379
to_port = 6379
security_group_id = aws_security_group.redis.id
}
resource "aws_elasticache_subnet_group" "redis-subnet-group" {
name = "${var.name}-${var.cluster_id}-subnet-group-${var.environment}"
subnet_ids = var.subnet_ids
}
resource "aws_elasticache_replication_group" "redis-replication-group" {
at_rest_encryption_enabled = true
auth_token = var.enable_auth ? "abcdefghijklmnopqrstuvwxyz" : null
automatic_failover_enabled = true
auto_minor_version_upgrade = true
availability_zones = var.azs
maintenance_window = "sun:05:00-sun:09:00"
node_type = var.instance_type
number_cache_clusters = 3
parameter_group_name = "default.redis5.0"
port = 6379
replication_group_description = "rds-${var.name}-${var.cluster_id}-${substr(var.environment, 0, 1)}"
replication_group_id = "rds-${var.name}-${var.cluster_id}-${substr(var.environment, 0, 1)}"
subnet_group_name = aws_elasticache_subnet_group.redis-subnet-group.name
security_group_ids = [aws_security_group.redis.id]
snapshot_retention_limit = 7
snapshot_window = "01:00-05:00"
transit_encryption_enabled = var.enable_auth ? true : false
lifecycle {
ignore_changes = [auth_token, number_cache_clusters]
}
tags = {
Application = "redis-${var.name}"
ClusterId = var.cluster_id
Environment = var.environment
Name = "redis-${var.name}-${var.cluster_id}-${var.environment}"
}
}
$ terraform import -var "region=us-east-1" aws_elasticache_replication_group.redis-replication-group rds-xxxx-a-s
...
$ terraform state show aws_elasticache_replication_group.redis-replication-group
# aws_elasticache_replication_group.redis-replication-group:
resource "aws_elasticache_replication_group" "redis-replication-group" {
    at_rest_encryption_enabled    = true
    auto_minor_version_upgrade    = true
    automatic_failover_enabled    = true
    engine                        = "redis"
    engine_version                = "5.0.6"
    id                            = "rds-xxxx-a-s"
    maintenance_window            = "sun:05:00-sun:09:00"
    member_clusters               = [
        "rds-xxxx-a-s-001",
        "rds-xxxx-a-s-002",
        "rds-xxxx-a-s-003",
    ]
    node_type                     = "cache.t3.medium"
    number_cache_clusters         = 3
    parameter_group_name          = "default.redis5.0"
    port                          = 6379
    primary_endpoint_address      = "master.rds-xxxx-a-s.xxxx.use1.cache.amazonaws.com"
    replication_group_description = "rds-xxxx-a-s"
    replication_group_id          = "rds-xxxx-a-s"
    security_group_ids            = [
        "sg-xxxx",
    ]
    security_group_names          = []
    snapshot_retention_limit      = 7
    snapshot_window               = "01:00-05:00"
    subnet_group_name             = "xxxx-a-subnet-group-staging"
    transit_encryption_enabled    = true

    timeouts {}
}
The state entry should include the availability zones of the imported resource.
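In state-file JSON terms, the imported entry would contain something like this (hypothetical fragment; the zone names are taken from the plan output below):

"availability_zones": [
    "us-east-1a",
    "us-east-1c",
    "us-east-1d"
],

Instead, the attribute is left unset in state, and the next plan forces a replacement: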
  # aws_elasticache_replication_group.redis-replication-group must be replaced
+/- resource "aws_elasticache_replication_group" "redis-replication-group" {
      + apply_immediately              = (known after apply)
        at_rest_encryption_enabled     = true
      + auth_token                     = (sensitive value)
        auto_minor_version_upgrade     = true
        automatic_failover_enabled     = true
      + availability_zones             = [
          + "us-east-1a",
          + "us-east-1c",
          + "us-east-1d",
        ] # forces replacement
      + configuration_endpoint_address = (known after apply)
        engine                         = "redis"
      ~ engine_version                 = "5.0.6" -> (known after apply)
      ~ id                             = "rds-xxxx-a-s" -> (known after apply)
        maintenance_window             = "sun:05:00-sun:09:00"
      ~ member_clusters                = [
          - "rds-xxxx-a-s-001",
          - "rds-xxxx-a-s-002",
          - "rds-xxxx-a-s-003",
        ] -> (known after apply)
        node_type                      = "cache.t3.medium"
        number_cache_clusters          = 3
        parameter_group_name           = "default.redis5.0"
        port                           = 6379
      ~ primary_endpoint_address       = "master.rds-xxxx-a-s.xxxx.use1.cache.amazonaws.com" -> (known after apply)
        replication_group_description  = "rds-xxxx-a-s"
        replication_group_id           = "rds-xxxx-a-s"
        security_group_ids             = [
            "sg-xxxx",
        ]
      ~ security_group_names           = [] -> (known after apply)
        snapshot_retention_limit       = 7
        snapshot_window                = "01:00-05:00"
        subnet_group_name              = "xxxx-a-subnet-group-staging"
      + tags                           = {
          + "Application" = "redis-xxxx"
          + "ClusterId"   = "a"
          + "Environment" = "staging"
          + "Name"        = "redis-xxxx-a-staging"
        }
        transit_encryption_enabled     = true

      + cluster_mode {
          + num_node_groups         = (known after apply)
          + replicas_per_node_group = (known after apply)
        }

      - timeouts {}
    }
We're simply unable to use Terraform for AWS ElastiCache while this bug exists, unless someone can find a workaround.
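One possible mitigation, sketched here as an untested idea rather than a confirmed fix: since the replacement is driven solely by the availability_zones diff, that attribute could be added to the existing ignore_changes list. Note that this also hides genuine future changes to the zones.

resource "aws_elasticache_replication_group" "redis-replication-group" {
  # ... all existing arguments unchanged ...

  lifecycle {
    # "availability_zones" added as a workaround sketch: it suppresses the
    # post-import diff, at the cost of ignoring real AZ changes later.
    ignore_changes = [auth_token, number_cache_clusters, availability_zones]
  }
}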
I'm also experiencing this bug after importing an ElastiCache replication group.
@reedflinch, even though it is not best practice and should be done with caution, you can still edit your state file manually.
In your Terraform state file, find your aws_elasticache_replication_group object. You will probably see the availability_zones property set to null, like:

"availability_zones": null,

You can replace it with your availability zones list, for example:
"availability_zones": [
"eu-west-3b",
"eu-west-3c"
]
After that, terraform apply seems to avoid destroying and replacing the object. (A pull/edit/push variant of the same edit is sketched below.)
Terraform:
"terraform_version": "0.12.23"
Provider:
provider "aws" {
version = "2.61.0"
}
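For reference, a slightly safer way to make this manual edit than changing terraform.tfstate in place (my own sketch, not from this thread; the filename is arbitrary, and this variant also works with remote backends):

$ terraform state pull > state.json
# Edit state.json: set "availability_zones" on the replication group to the
# real zone list, and increment the top-level "serial" so the push is accepted.
$ terraform state push state.json
$ terraform plan   # should no longer propose replacement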