Not sure if this is intended by design, but using the AWS console you can add or remove cluster members without having to recreate the whole replication group and its clusters.
Terraform v0.7.1
It should just reduce or increase the number of cluster members to the specified value.
This may be a problem simply because automatic_failover_enabled = true is required for this to work.
I guess there is also an edge case when number_cache_clusters = 1: it depends on how Multi-AZ mode is set up, and automatic_failover_enabled should then be false, AFAIK.
Instead, the Redis replication group and clusters were deleted and created anew.
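The config below references a data.template_file.automatic_failover_enabled data source whose definition is not shown in the issue. Since automatic failover must be disabled for a single-node group, a hypothetical sketch of how it could be defined (Terraform 0.7 has no conditional expressions, so a lookup map is one common workaround; the variable names here are assumptions):

```hcl
# Hypothetical sketch -- the issue does not show how this data source is
# actually defined. A lookup map keyed by the cluster count stands in for
# a conditional, which Terraform 0.7 does not support.
variable "number_cache_clusters" {
  default = "4"
}

variable "failover_enabled_map" {
  type = "map"
  default = {
    "1" = "false" # automatic failover requires at least one replica
    "2" = "true"
    "3" = "true"
    "4" = "true"
  }
}

data "template_file" "automatic_failover_enabled" {
  template = "${lookup(var.failover_enabled_map, var.number_cache_clusters)}"
}
```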
```hcl
resource "aws_elasticache_replication_group" "redis_replication_group" {
  replication_group_id          = "${var.replication_group_id}"
  replication_group_description = "${var.replication_group_id} redis replication group"
  node_type                     = "${var.node_type}"
  number_cache_clusters         = 4
  port                          = "${var.port}"
  automatic_failover_enabled    = "${data.template_file.automatic_failover_enabled.rendered}"
  engine_version                = "${var.engine_version}"
  parameter_group_name          = "${var.parameter_group_name}"
  subnet_group_name             = "${aws_elasticache_subnet_group.subnet_group.name}"
  security_group_ids            = ["${module.securitygroup.id}"]
  maintenance_window            = "${var.maintenance_window}"
  apply_immediately             = "${var.apply_immediately}"
}
```
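What the console does can be approximated with the ElastiCache API directly: individual member clusters can be added to or removed from an existing replication group without touching the group itself. A hedged sketch using the AWS CLI (the cluster IDs are hypothetical examples, not taken from the issue):

```shell
# Add a new member to an existing replication group; the new cache
# cluster joins as a read replica without recreating the group.
aws elasticache create-cache-cluster \
    --cache-cluster-id devops-1-005 \
    --replication-group-id devops-1

# Remove a single member; the replication group itself is left intact.
aws elasticache delete-cache-cluster \
    --cache-cluster-id devops-1-005
```

This is the behavior the issue asks Terraform to mirror instead of destroying and recreating the whole replication group.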
1. terraform apply (will create the cluster).
2. Change number_cache_clusters to another value, like 2:

```hcl
resource "aws_elasticache_replication_group" "redis_replication_group" {
  replication_group_id          = "${var.replication_group_id}"
  replication_group_description = "${var.replication_group_id} redis replication group"
  node_type                     = "${var.node_type}"
  number_cache_clusters         = 2
  port                          = "${var.port}"
  automatic_failover_enabled    = "${data.template_file.automatic_failover_enabled.rendered}"
  engine_version                = "${var.engine_version}"
  parameter_group_name          = "${var.parameter_group_name}"
  subnet_group_name             = "${aws_elasticache_subnet_group.subnet_group.name}"
  security_group_ids            = ["${module.securitygroup.id}"]
  maintenance_window            = "${var.maintenance_window}"
  apply_immediately             = "${var.apply_immediately}"
}
```
terraform plan will tell us that the cluster will be destroyed, which actually occurs when executing terraform apply:

```
-/+ module.redis.aws_elasticache_replication_group.redis_replication_group
    apply_immediately:             "true" => "true"
    automatic_failover_enabled:    "true" => "true"
    engine:                        "redis" => "redis"
    engine_version:                "2.8.24" => "2.8.24"
    maintenance_window:            "tue:09:00-tue:10:30" => "tue:09:00-tue:10:30"
    node_type:                     "cache.m3.medium" => "cache.m3.medium"
    number_cache_clusters:         "4" => "2" (forces new resource)
    parameter_group_name:          "default.redis2.8" => "default.redis2.8"
    port:                          "6379" => "6379"
    replication_group_description: "devops-1 replication group" => "devops-1 redis replication group"
    replication_group_id:          "devops-1" => "devops-1"
    security_group_ids.#:          "1" => "1"
    security_group_ids.3159527089: "sg-2b74d751" => "sg-2b74d751"
    security_group_names.#:        "0" => "<computed>"
    snapshot_window:               "07:30-08:30" => "<computed>"
    subnet_group_name:             "zgi-us-vir-devops-1-redeem-sng" => "zgi-us-vir-devops-1-redeem-sng"
```
Running in VPC mode.
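For completeness, the config's subnet_group_name points at an aws_elasticache_subnet_group.subnet_group resource that is not shown in the issue. A hypothetical sketch of what it might look like for VPC mode (the variable names here are assumptions):

```hcl
# Hypothetical sketch -- the issue does not include this resource.
# In VPC mode the replication group's nodes are placed in the subnets
# listed here.
resource "aws_elasticache_subnet_group" "subnet_group" {
  name        = "${var.replication_group_id}-sng"
  description = "Subnet group for ${var.replication_group_id}"
  subnet_ids  = ["${var.private_subnet_ids}"] # assumed list variable
}
```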
Hi @martin-flaregames
this is what was added for the first set of functionality. I tried to make this explicit in the documentation:
I will continue to make this resource better going forward - we needed it working as a first pass while I evaluate all of the use cases
Paul
Whoops, my fault. Thanks for pointing this out. I did read that documentation but failed to notice the remark. You may close this ticket if you wish.
Edit: And thank you very much for this module. It is a life and time saver!
@martin-flaregames nope - your ticket is a correct request for enhancement :) We should keep it open to make sure the work gets added
Guys, this would be super-useful.
Any update on this?
When will it be done? I'm really waiting for this.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.