As of AWS SDK for Go release 1.8.42, DynamoDB is now supported by Application Auto Scaling.
This would be an addition to aws_appautoscaling_policy to support TargetTrackingScalingPolicyConfiguration.
SDK Link
http://docs.aws.amazon.com/sdk-for-go/api/service/applicationautoscaling/#PutScalingPolicyInput
Question
I started to investigate this in the provider, but noticed that aws_appautoscaling_policy exposes the keys for StepScalingPolicyConfiguration
at the top level. Is this correct? I would have expected the format to be:
resource "aws_appautoscaling_policy" "ecs_policy" {
  name               = "scale-down"
  resource_id        = "service/clusterName/serviceName"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Maximum"

    step_adjustment {
      metric_interval_upper_bound = 0
      scaling_adjustment          = -1
    }
  }

  ...
}
Does HCL not support nested objects as shown in this example? I can't recall ever seeing one.
If that is the case, would the target_tracking_scaling_policy_configuration
keys also sit at the top level, as follows:
resource "aws_appautoscaling_policy" "ecs_policy" {
  name               = "scale-down"
  resource_id        = "service/clusterName/serviceName"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
  depends_on         = ["aws_appautoscaling_target.ecs_target"]

  // step_scaling_policy_configuration
  adjustment_type         = "ChangeInCapacity"
  cooldown                = 60
  metric_aggregation_type = "Maximum"

  step_adjustment {
    metric_interval_upper_bound = 0
    scaling_adjustment          = -1
  }

  // target_tracking_scaling_policy_configuration
  customized_metric_specification = {
    dimensions  = []
    metric_name = "foo"
    namespace   = "dyn"
    statistic   = "Average | Minimum | Maximum | SampleCount | Sum"
    unit        = 1
  }

  predefined_metric_specification = {
    PredefinedMetricType = "DynamoDBReadCapacityUtilization | DynamoDBWriteCapacityUtilization"
    ResourceLabel        = "..."
  }

  scale_in_cooldown  = 10
  scale_out_cooldown = 10
  target_value       = 50.0
}
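For reference, support along these lines did eventually land in the AWS provider, and it uses a nested target_tracking_scaling_policy_configuration block rather than top-level keys. A sketch against later provider versions (the resource name, policy_type value, and the ECSServiceAverageCPUUtilization metric type here are illustrative assumptions, not part of the original proposal):

```hcl
resource "aws_appautoscaling_policy" "ecs_target_tracking" {
  name               = "scale-on-cpu"
  policy_type        = "TargetTrackingScaling"
  resource_id        = "service/clusterName/serviceName"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"

  # Nested block, answering the question above: HCL does support this.
  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }

    target_value       = 50.0
    scale_in_cooldown  = 10
    scale_out_cooldown = 10
  }
}
```

So the flat layout of the step-scaling schema was a provider design choice, not an HCL limitation.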
Hi there, any update on this PR? :)
Any progress on this? Auto scaling would be really nice
+1
+1
:+1:
Imho @stephencoe, I would use a different approach, as I submitted here. If we keep aws_appautoscaling_policy
and aws_appautoscaling_target,
there will be huge confusion in having the same resource manage two (or more) different types of resources (ECS services, DynamoDB tables). But I know: coding standards, aligning with the AWS SDK...
My short-term workaround is to use a local provisioner to call the CLI once the table has been created.
Below is the code for the workaround:
variable "aws_region" {
  default = "us-east-1"
}

resource "aws_dynamodb_table" "DynamoTableName" {
  ...

  provisioner "local-exec" {
    # Register scalable targets and attach target-tracking policies once the table exists.
    command = <<EOF
aws application-autoscaling register-scalable-target --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --min-capacity 1 --max-capacity 10 --role-arn arn:aws:iam::<ACCOUNT-ID>:role/service-role/<ROLE-NAME> --region ${var.aws_region}
aws application-autoscaling register-scalable-target --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --min-capacity 1 --max-capacity 10 --role-arn arn:aws:iam::<ACCOUNT-ID>:role/service-role/<ROLE-NAME> --region ${var.aws_region}
aws application-autoscaling put-scaling-policy --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --policy-name "Write-${aws_dynamodb_table.DynamoTableName.id}" --policy-type "TargetTrackingScaling" --target-tracking-scaling-policy-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBWriteCapacityUtilization"},"ScaleOutCooldown":60,"ScaleInCooldown":60,"TargetValue":50}' --region ${var.aws_region}
aws application-autoscaling put-scaling-policy --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --policy-name "Read-${aws_dynamodb_table.DynamoTableName.id}" --policy-type "TargetTrackingScaling" --target-tracking-scaling-policy-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBReadCapacityUtilization"},"ScaleOutCooldown":60,"ScaleInCooldown":60,"TargetValue":50}' --region ${var.aws_region}
EOF
  }

  provisioner "local-exec" {
    # Clean up on destroy: delete the policies before deregistering their targets.
    when    = "destroy"
    command = <<EOF
aws application-autoscaling delete-scaling-policy --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --policy-name "Write-${aws_dynamodb_table.DynamoTableName.id}" --region ${var.aws_region}
aws application-autoscaling delete-scaling-policy --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --policy-name "Read-${aws_dynamodb_table.DynamoTableName.id}" --region ${var.aws_region}
aws application-autoscaling deregister-scalable-target --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:WriteCapacityUnits" --region ${var.aws_region}
aws application-autoscaling deregister-scalable-target --service-namespace dynamodb --resource-id "table/${aws_dynamodb_table.DynamoTableName.id}" --scalable-dimension "dynamodb:table:ReadCapacityUnits" --region ${var.aws_region}
EOF
  }
}
UPDATE:
1) Without the local-exec on destroy, the CloudWatch alarms would be orphaned.
2) When creating a large number of DynamoDB tables with this method, you have to reduce the number of parallel operations to fewer than 3 (e.g. terraform apply -parallelism=2), otherwise creation of the CloudWatch alarms for the autoscaling policy gets throttled.
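For readers landing here later: once the provider gained DynamoDB support in Application Auto Scaling, the CLI calls above map to native resources roughly as follows. This is a sketch assuming a provider version with that support; the capacities and target value mirror the CLI flags above, and the resource names are made up for illustration:

```hcl
# Native equivalent of "register-scalable-target" for write capacity.
resource "aws_appautoscaling_target" "dynamodb_table_write" {
  service_namespace  = "dynamodb"
  resource_id        = "table/${aws_dynamodb_table.DynamoTableName.id}"
  scalable_dimension = "dynamodb:table:WriteCapacityUnits"
  min_capacity       = 1
  max_capacity       = 10
}

# Native equivalent of "put-scaling-policy" with a target-tracking configuration.
resource "aws_appautoscaling_policy" "dynamodb_table_write" {
  name               = "Write-${aws_dynamodb_table.DynamoTableName.id}"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = "${aws_appautoscaling_target.dynamodb_table_write.service_namespace}"
  resource_id        = "${aws_appautoscaling_target.dynamodb_table_write.resource_id}"
  scalable_dimension = "${aws_appautoscaling_target.dynamodb_table_write.scalable_dimension}"

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBWriteCapacityUtilization"
    }

    target_value       = 50
    scale_in_cooldown  = 60
    scale_out_cooldown = 60
  }
}
```

A matching pair for dynamodb:table:ReadCapacityUnits works the same way, and destroying these resources deregisters the target and removes the policy, so no destroy-time provisioner is needed.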
+1
I am confused.
If I have a DynamoDB table initially created with read/write throughput of 10, and later I would like to raise both read and write capacity to 1000, how can we do this in Terraform (without using a local provisioner to call the CLI)?
The way we are doing this right now is:
resource "aws_dynamodb_table" "table_lab_process" {
  name             = "lab-${var.environment}-process"
  read_capacity    = "${var.process_dynamo_rcu_low}"
  write_capacity   = "${var.process_dynamo_wcu_high}"
  hash_key         = "gid-brand"
  range_key        = "ts"
  stream_enabled   = true
  stream_view_type = "NEW_IMAGE"

  attribute {
    name = "gid-brand"
    type = "S"
  }

  attribute {
    name = "ts"
    type = "S"
  }

  lifecycle {
    ignore_changes = [
      "read_capacity",
      "write_capacity",
    ]
  }
}
This way, TF won't complain when the read/write capacity changes on AWS because of autoscale, but we can also change the variables and set it up again. Is that more clear now?
We have autoscaling disabled for DynamoDB tables.
Initially we create tables with some read/write throughput through TF, and then we want to use native TF code to update read/write capacity, without resorting to AWS CLI commands in TF. Any thoughts on updating read/write capacity with native TF code?
Appreciate any responses.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!