_This issue was originally opened by @momania as hashicorp/terraform#15948. It was migrated here as a result of the provider split. The original body of the issue is below._
v0.9.11

[edit] Just upgraded to v0.10.2 to validate, but the problem still exists.
I'm using an aws_api_gateway_usage_plan which applies to multiple stages, defined with api_stages config blocks. So, just as in the example in the docs, I specify multiple:
resource "aws_api_gateway_usage_plan" "example_api_usage_plan" {
name = "my-usage-plan"
description = "my description"
api_stages {
api_id = "${aws_api_gateway_rest_api.example_api.id}"
stage = "${aws_api_gateway_deployment.example_deployment_dev.stage_name}"
}
api_stages {
api_id = "${aws_api_gateway_rest_api.example_api.id}"
stage = "${aws_api_gateway_deployment.example_deployment_prod.stage_name}"
}
quota_settings {
limit = 1000
offset = 50
period = "WEEK"
}
throttle_settings {
burst_limit = 5
rate_limit = 10
}
}
Starting from a clean environment, terraform will create the following resource:
+ aws_api_gateway_usage_plan.example_api_usage_plan
    api_stages.#:                             "2"
    api_stages.0.api_id:                      "${aws_api_gateway_rest_api.example_api.id}"
    api_stages.0.stage:                       "dev"
    api_stages.1.api_id:                      "${aws_api_gateway_rest_api.example_api.id}"
    api_stages.1.stage:                       "api"
    description:                              "my description"
    name:                                     "my-usage-plan"
    quota_settings.#:                         "1"
    quota_settings.2630697536.limit:          "1000"
    quota_settings.2630697536.offset:         "50"
    quota_settings.2630697536.period:         "WEEK"
    throttle_settings.#:                      "1"
    throttle_settings.1161842669.burst_limit: "5"
    throttle_settings.1161842669.rate_limit:  "10"
Once everything is applied, if I then run terraform plan, it suggests the following changes:
~ aws_api_gateway_usage_plan.example_api_usage_plan
    api_stages.0.stage:                       "api" => "dev"
    api_stages.1.stage:                       "dev" => "api"
    throttle_settings.1161842669.burst_limit: "" => "5"
    throttle_settings.1161842669.rate_limit:  "" => "10"
    throttle_settings.514851065.burst_limit:  "5" => "0"
    throttle_settings.514851065.rate_limit:   "0" => "0"
If I apply this and run terraform plan again, it suggests the following changes:
~ aws_api_gateway_usage_plan.example_api_usage_plan
    api_stages.0.stage: "api" => "dev"
    api_stages.1.stage: "dev" => "api"
What I actually expect is that after the initial apply, and without any changes to the config, terraform will not suggest any changes. After the first run it leaves the throttle_settings alone, but the api_stages keep re-ordering.
We are also observing a similar issue. We only have one stage, but when applying it we see output similar to:
throttle_settings.1161842669.burst_limit: "" => "5"
throttle_settings.1161842669.rate_limit: "" => "10"
throttle_settings.514851065.burst_limit: "5" => "0"
throttle_settings.514851065.rate_limit: "0" => "0"
The question is: if we are trying to set the limits to "5" and "10" in:

throttle_settings.1161842669.burst_limit: "" => "5"
throttle_settings.1161842669.rate_limit:  "" => "10"

why does terraform try to reset them to 0 in the last two lines?

throttle_settings.514851065.burst_limit:  "5" => "0"
throttle_settings.514851065.rate_limit:   "0" => "0"

And how does it then set the values correctly in the second apply run?
Hi all,
Any news regarding the fix for the usage plan ordering?
It keeps updating the order every time I apply it.
Thanks
I hit the same issue: I have to run terraform twice every time we release new changes, even though those changes are not related to this aws_api_gateway_usage_plan.
Any updates in the pipeline this year?
@anonymint what I ended up doing to 'fix' this problem was removing the usage plan stages from terraform (but keeping the resource so terraform can still create the usage plan), and in every API resource (aws_api_gateway_deployment) I have a local-exec like so:
provisioner "local-exec" {
command = "aws apigateway update-usage-plan --usage-plan-id ${aws_api_gateway_usage_plan.externalpartner.id} --patch-operations op=add,path=/apiStages,value=${aws_api_gateway_rest_api.profile-api.id}:${aws_api_gateway_deployment.profile-api.stage_name}"
}
This ensures that every time the API changes, the usage plan is associated again.
Not sure if it fixes your problem, but it's a possible workaround, with almost zero downtime for this usage plan.
@ricardosilveiraolx oh, that's a legit idea. I ended up creating a null_resource that applies the script above, triggered by the aws_api_gateway_deployment id; so far so good for me.
resource "null_resource" "aws_api_gateway_usage_plan" {
triggers = {
api_gateway_id = "${aws_api_gateway_deployment.deployment.id}"
}
provisioner "local-exec" {
command = "aws apigateway update-usage-plan --usage-plan-id ${aws_api_gateway_usage_plan.limit_plan.id} --patch-operations op=add,path=/apiStages,value=${aws_api_gateway_rest_api.api.id}:${aws_api_gateway_deployment.deployment.stage_name}"
}
}
This ^ keeps me going for now, but the downside of this approach is that the machine running terraform needs the aws CLI installed and the correct aws profile passed during execution.
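If passing the right profile is the sticking point, the local-exec provisioner also accepts an environment map, so the profile can be pinned per command instead of relying on the caller's shell. A minimal sketch, assuming a hypothetical named profile "deploy" is configured on the machine running terraform:

provisioner "local-exec" {
  command = "aws apigateway update-usage-plan --usage-plan-id ${aws_api_gateway_usage_plan.limit_plan.id} --patch-operations op=add,path=/apiStages,value=${aws_api_gateway_rest_api.api.id}:${aws_api_gateway_deployment.deployment.stage_name}"

  # AWS_PROFILE is read by the AWS CLI; "deploy" is a hypothetical profile name.
  environment = {
    AWS_PROFILE = "deploy"
  }
}

This doesn't remove the dependency on the aws binary itself, but it at least makes the credential selection explicit in the config.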
I am getting hit by this bug also. A partial workaround is to use depends_on to ensure that the APIs are fully created (including the implicit creation of the API stage through the aws_api_gateway_deployment resource) before the usage plan is provisioned. We deploy APIs through a module, and I was able to reproduce the bug with this:
resource "aws_api_gateway_usage_plan" "bugged" {
name = "bugged"
api_stages {
api_id = "${module.api1.id}"
stage = "test"
}
api_stages {
api_id = "${module.api2.id}"
stage = "test"
}
}
By adding the stanza depends_on = ["module.api1", "module.api2"] to the above resource, the bug does not occur. _However_, this only helps for the initial apply; adding the dependencies after the fact does not resolve the issue.
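For concreteness, here is the shape of that workaround applied to the snippet above (same module names; as noted, it only helps when the plan is created from scratch):

resource "aws_api_gateway_usage_plan" "bugged" {
  name = "bugged"

  # Wait for both API modules (and their implicitly created stages)
  # before provisioning the usage plan.
  depends_on = ["module.api1", "module.api2"]

  api_stages {
    api_id = "${module.api1.id}"
    stage  = "test"
  }

  api_stages {
    api_id = "${module.api2.id}"
    stage  = "test"
  }
}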
In my context, I have 2 APIs (v1 & v2), each with one stage, associated with multiple usage plans.
It seems the issue is at the API Gateway service level. I managed to replicate the behaviour with the AWS CLI, the Go SDK, and Terraform. On a usage plan query, the api stages list is ordered alphabetically by api id, no matter the association order.
The solution in my case was to make the associations in an order based on api id:
locals {
  api_stages_map = {
    "${module.apigw_v1.api_id}" = {
      api_id     = module.apigw_v1.api_id
      stage_name = module.apigw_v1.stage_name
    },
    "${module.apigw_v2.api_id}" = {
      api_id     = module.apigw_v2.api_id
      stage_name = module.apigw_v2.stage_name
    }
  }

  api_stages_list = [for k in sort(keys(local.api_stages_map)) : local.api_stages_map[k]]
}
resource "aws_api_gateway_usage_plan" "bronze" {
...
dynamic "api_stages" {
for_each = local.api_stages_list
content {
api_id = api_stages.value["api_id"]
stage = api_stages.value["stage_name"]
}
}
}
I haven't tested it, but it is possible that when there is one api with multiple stages, the api stages list received from the usage plan query is ordered not only by api id but also by stage name.
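If that turns out to be the case, the same trick should extend to a composite sort key. A minimal, untested sketch reusing the hypothetical module outputs from the example above; it assumes the api ids all have the same length, so a plain lexicographic sort() orders by api id first and stage name second:

locals {
  # Key each entry by "api_id:stage_name" so sort() yields
  # api-id-major, stage-name-minor ordering (assumes fixed-length api ids).
  api_stages_map = {
    "${module.apigw_v1.api_id}:${module.apigw_v1.stage_name}" = {
      api_id     = module.apigw_v1.api_id
      stage_name = module.apigw_v1.stage_name
    },
    "${module.apigw_v2.api_id}:${module.apigw_v2.stage_name}" = {
      api_id     = module.apigw_v2.api_id
      stage_name = module.apigw_v2.stage_name
    }
  }

  api_stages_list = [for k in sort(keys(local.api_stages_map)) : local.api_stages_map[k]]
}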
This has been released in version 3.13.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!