⇒ terraform -v
Terraform v0.12.20
I want to update a parameter value; my plan looks like this:
Terraform will perform the following actions:
  # aws_rds_cluster_parameter_group.default will be updated in-place
  ~ resource "aws_rds_cluster_parameter_group" "default" {
        arn         = "..."
        description = "Managed by Terraform"
        family      = "aurora-mysql5.7"
        id          = "..."
        name        = "..."

      - parameter {
          - apply_method = "immediate" -> null
          - name         = "max_connections" -> null
          - value        = "1000" -> null
        }
      + parameter {
          + apply_method = "immediate"
          + name         = "max_connections"
          + value        = "10000"
        }
    }
Plan: 0 to add, 1 to change, 0 to destroy.
but when I apply, it changes max_connections to 10000 and immediately afterwards resets it, restoring the AWS default (GREATEST({log(DBInstanceClassMemory/805306368)*45},{log(DBInstanceClassMemory/8187281408)*1000})).
When the apply is run with TF_LOG=debug, I see it performing the modify:
2020/01/31 18:21:01 [DEBUG] [aws-sdk-go] DEBUG: Request rds/ModifyDBClusterParameterGroup Details:
and then, immediately afterwards, looping until it can perform a reset:
2020/01/31 18:21:01 [DEBUG] [aws-sdk-go] DEBUG: Request rds/ResetDBClusterParameterGroup Details:
I also see it in the event log:
Parameter groups | Fri Jan 31 18:24:02 GMT-500 2020 | Updated parameter max_connections to GREATEST({log(DBInstanceClassMemory/805306368)*45},{log(DBInstanceClassMemory/8187281408)*1000}) with apply method immediate
Parameter groups | Fri Jan 31 18:22:17 GMT-500 2020 | Updated parameter max_connections to 10000 with apply method immediate
If I run a plan again, I get:
Terraform will perform the following actions:
  # aws_rds_cluster_parameter_group.default will be updated in-place
  ~ resource "aws_rds_cluster_parameter_group" "default" {
        arn         = "..."
        description = "Managed by Terraform"
        family      = "aurora-mysql5.7"
        id          = "..."
        name        = "..."

      + parameter {
          + apply_method = "immediate"
          + name         = "max_connections"
          + value        = "10000"
        }
    }
Plan: 0 to add, 1 to change, 0 to destroy.
And an apply will then work, as it only performs a ModifyDBClusterParameterGroup with no reset afterwards.
I expect max_connections to be changed to 10000.
Instead, it changes max_connections to 10000 and then resets it to the AWS default formula.
To reproduce: change a parameter value in the HCL script and run plan/apply.
My HCL script is very basic:
resource "aws_rds_cluster_parameter_group" "default" {
name = "..."
family = "aurora-mysql5.7"
parameter {
name = "max_connections"
value = "10000"
}
tags = var.tags
}
I've replicated this issue with a regular, non-clustered parameter group of type aws_db_parameter_group.
My plan before it was applied showed me what I expected:
Terraform will perform the following actions:
  # module.parameter_group_v11.aws_db_parameter_group.parameter_group will be updated in-place
  ~ resource "aws_db_parameter_group" "parameter_group" {
        ...
      + parameter {
          + apply_method = "immediate"
          + name         = "maintenance_work_mem"
          + value        = "125000"
        }
      - parameter {
          - apply_method = "immediate" -> null
          - name         = "maintenance_work_mem" -> null
          - value        = "500000" -> null
        }
        ...
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
module.parameter_group_v11.aws_db_parameter_group.parameter_group: Modifying... [id=...]
module.parameter_group_v11.aws_db_parameter_group.parameter_group: Modifications complete after 8s [id=...]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
When I turned on debugging and checked the log files, I noticed it was resetting the value after modifying.
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4: 2020/02/14 15:32:29 [DEBUG] Modify DB Parameter Group: {
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   DBParameterGroupName: "...",
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   Parameters: [{
...
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   },{
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:     ApplyMethod: "immediate",
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:     ParameterName: "maintenance_work_mem",
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:     ParameterValue: "125000"
2020-02-14T15:32:29.530+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   }]
...
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4: 2020/02/14 15:32:30 [DEBUG] Reset DB Parameter Group: {
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   DBParameterGroupName: "...",
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   Parameters: [{
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:     ApplyMethod: "immediate",
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:     ParameterName: "maintenance_work_mem",
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:     ParameterValue: "500000"
2020-02-14T15:32:30.755+1100 [DEBUG] plugin.terraform-provider-aws_v2.49.0_x4:   }],
When I do a plan after this occurs, I see the Amazon default value:
      + parameter {
          + apply_method = "immediate"
          + name         = "maintenance_work_mem"
          + value        = "125000"
        }
      - parameter {
          - apply_method = "immediate" -> null
          - name         = "maintenance_work_mem" -> null
          - value        = "greatest({dbinstanceclassmemory*1024/63963136},65536)" -> null
        }
Potentially related to this PR causing a reset on the parameter after it is modified?
This bug was introduced by #11540 and relates to the way parameters are represented in the AWS SDK.
At a high level, resourceAwsDbParameterGroupUpdate() currently looks like this:

    If any parameter is changing:
        Modify parameters that are being added or changed
        Reset parameters that are being removed
This logic would be valid if each parameter was identified by its name alone. However, Parameter is a struct containing (among other fields) ParameterName, ParameterValue, and ApplyMethod. This means a single real parameter can be present in both sets: those to be added and those to be removed. An example is the above diff:
      + parameter {
          + apply_method = "immediate"
          + name         = "maintenance_work_mem"
          + value        = "125000"
        }
      - parameter {
          - apply_method = "immediate" -> null
          - name         = "maintenance_work_mem" -> null
          - value        = "500000" -> null
        }
Clearly what we want to happen here is a change to the value of a single parameter, maintenance_work_mem. Internally, though, this is represented as adding a new parameter (maintenance_work_mem=125000) and removing an existing parameter (maintenance_work_mem=500000). Because resets happen after modifications, the reset always "wins". Subsequent plans will again represent the change as adding and removing parameter=value combinations, so multiple rounds of plan/apply will never converge.
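To make the failure mode concrete, here is a minimal Go sketch of the flawed diff, assuming an illustrative parameter type rather than the provider's actual AWS SDK types: because whole structs are compared, a value change produces one entry in the modify set and one in the reset set for the same name.

// Minimal sketch of the flawed diff (illustrative only; the real provider
// uses AWS SDK types). Parameters are compared as whole structs, so a
// value change yields one "add" and one "remove" for the same name.
package main

import "fmt"

type parameter struct {
	Name        string
	Value       string
	ApplyMethod string
}

func main() {
	current := []parameter{{"maintenance_work_mem", "500000", "immediate"}}
	desired := []parameter{{"maintenance_work_mem", "125000", "immediate"}}

	currentSet := make(map[parameter]bool)
	for _, p := range current {
		currentSet[p] = true
	}
	desiredSet := make(map[parameter]bool)
	for _, p := range desired {
		desiredSet[p] = true
	}

	// Step 1: modify everything in desired that isn't in current.
	for p := range desiredSet {
		if !currentSet[p] {
			fmt.Println("ModifyDBParameterGroup:", p.Name, "=", p.Value)
		}
	}
	// Step 2: reset everything in current that isn't in desired. The same
	// name lands here too (the structs differ only in Value), and because
	// this runs after the modify, the reset "wins".
	for p := range currentSet {
		if !desiredSet[p] {
			fmt.Println("ResetDBParameterGroup:", p.Name)
		}
	}
}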
I'm not sure of the best way to fix the issue. The problem seems to some extent inherent to the design of this resource.
Assuming we want to keep the functionality of resetting parameters that are being removed, one extremely simple fix would be to swap the order of the modify and reset operations so that modify always "wins". That's still not ideal as the operations would happen non-atomically, meaning every changing parameter would get the wrong value for a brief period.
A real fix would be to calculate the difference between the old and new parameter set based on parameter name, ignoring value and any other attributes. Only parameters that are being removed entirely should be reset.
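A minimal sketch of that name-based diff, reusing the illustrative parameter type from the sketch above (this is an assumption about the shape of such a fix, not the contents of any specific PR):

// Sketch of a name-based diff: modify parameters that are new or whose
// value/apply method changed, and reset only parameters whose names have
// disappeared from the desired set entirely.
func diffByName(current, desired []parameter) (modify, reset []parameter) {
	currentByName := make(map[string]parameter)
	for _, p := range current {
		currentByName[p.Name] = p
	}
	desiredNames := make(map[string]bool)
	for _, p := range desired {
		desiredNames[p.Name] = true
		if c, ok := currentByName[p.Name]; !ok || c != p {
			modify = append(modify, p)
		}
	}
	for _, p := range current {
		if !desiredNames[p.Name] {
			reset = append(reset, p)
		}
	}
	return modify, reset
}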
Thanks much for the triage, @lachlancooper. I put up #12112 with a potential fix for this issue.
We have a customer blocked by this issue. Is there a way we can escalate support for this issue and help get the related PR merged and released?
Is there anything we can do as an AWS partner, and as customers with AWS enterprise support, to move this along?
I'd also be interested to know how the community responds from an SLA standpoint.
Arrived here after inspecting a full TRACE and seeing the modify and reset of the value, in my case:
      - parameter {
          - apply_method = "immediate" -> null
          - name         = "long_query_time" -> null
        }
      + parameter {
          + apply_method = "immediate"
          + name         = "long_query_time"
          + value        = "1"
        }
And it's still happening with:
@camlow325 is there any way I could quickly test #12112? Any build? To confirm it working or not if needed.
Do we know the ETA for this? https://github.com/terraform-providers/terraform-provider-aws/pull/12112
As a workaround, I added a sha1 of the parameter values to the name of the parameter group. Together with create_before_destroy, changes to the parameter values result in a new parameter group and the destruction of the old one.
resource "aws_db_parameter_group" "mdbf_db" {
name = "${var.environment_name}-${sha1("${var.log_statement}-${var.slow_query_threshold_ms}-${var.log_retention_days}")}"
description = "Enable/disable postgres logging, including slow query logging (limit retention to 7 days)."
family = "${var.parameter_group_family}"
parameter {
name = "log_statement"
value = "${var.log_statement}"
apply_method = "immediate"
}
parameter {
name = "log_min_duration_statement"
value = "${var.slow_query_threshold_ms}"
apply_method = "immediate"
}
parameter {
name = "rds.log_retention_period"
value = "${var.log_retention_days}"
apply_method = "immediate"
}
lifecycle {
create_before_destroy = true #make sure the parameter group is first created
}
}
I think it would be good to have an increased priority on pushing this forward from the core team.
This regression effectively breaks the ability to make configuration changes to RDS databases unless you're willing and able to use the workaround that @KurtPattyn kindly suggested.
For many of us working in large complex production environments, it's not a simple matter of just tweaking the names. This workaround will cause the entire parameter group to be marked as changed on every terraform run. The additional workload for deployment teams to see what has actually changed across dozens of databases so they can evaluate the safety/risk of the runs will quickly become unmanageable in pipelined environments with a high deployment cadence.
My two cents.
Completely agree with the previous comment. Also for companies that maintain 30+ environments, it's very difficult to maintain a healthy and manageable deployment process.
The fix for this bug has been merged and will be released with v3.3.0 of the Terraform AWS Provider, likely out this Thursday.
This has been released in version 3.3.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!