_This issue was originally opened by @joshdk as hashicorp/terraform#13928. It was migrated here as part of the provider split. The original body of the issue is below._
resource "aws_api_gateway_rest_api" "api" {
name = "Test API"
}
resource "aws_api_gateway_method" "api_method" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "api_integration" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
http_method = "${aws_api_gateway_method.api_method.http_method}"
type = "MOCK"
}
resource "aws_api_gateway_deployment" "api_deployment" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
stage_name = "development"
depends_on = [
"aws_api_gateway_integration.api_integration",
]
variables = {
deploy_timestamp = "${timestamp()}"
}
}
resource "aws_api_gateway_usage_plan" "api_usage" {
name = "My usage plan"
description = "Unlimited usage plan"
api_stages {
api_id = "${aws_api_gateway_rest_api.api.id}"
stage = "${aws_api_gateway_deployment.api_deployment.stage_name}"
}
}
https://gist.github.com/joshdk/5002e96f954e89b4d720af9c4e799cdb
Expected behavior: when the aws_api_gateway_deployment resource is updated, the aws_api_gateway_usage_plan resource should continue to be associated with the original api/stage.
Actual behavior: after the initial run of `terraform apply`, when the aws_api_gateway_deployment resource is updated, the aws_api_gateway_usage_plan resource is no longer associated with any api/stage. When the aws_api_gateway_deployment resource is updated again, the aws_api_gateway_usage_plan resource is once again associated with the original api/stage. This two-state cycle will continue indefinitely as long as you keep running `terraform apply`.
😄 Observed behavior is almost identical to hashicorp/terraform#9376, but utilizes a newer resource type.
+1 👍
Does anybody know any workaround except calling the aws cli directly?
Use a wrapper script, or manually:
```
terraform apply
terraform taint aws_api_gateway_usage_plan.main
terraform apply
```
Is there a fix that is somewhat more elegant than what is stated above?
The problem with the wrapper script is that between the first apply and the last apply, clients will get errors, as their API keys won't be linked to usage plans for the deployed stage. I often get concurrent-update errors from AWS when doing a lot of usage plan updates, so the time between successful applies can be substantial.
The issue is that the _aws_api_gateway_deployment_ resource is configured to force a new resource on _stage_description_ or _variables_ updates, which will delete and re-create both the stage and the deployment. Updating just _description_ doesn't force a new resource, but only executes an _update-deployment_ call to AWS, which doesn't deploy the API; it just sets the _description_ on a past deployment.
What could work is if a new field is introduced that, on change, would execute a _create-deployment_ to AWS (which only re-deploys if the stage already exists). This can then be used to trigger deployments by using a hash or a timestamp, etc. Not sure what such a field could be called, though?
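For illustration, a rough sketch of how such a field might be used; the name `redeployment_trigger` is invented here, not an existing provider argument:
```
resource "aws_api_gateway_deployment" "api_deployment" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  stage_name  = "development"

  # Hypothetical argument: a change to this value would issue a
  # create-deployment call (re-deploying the existing stage) instead
  # of destroying and re-creating the deployment resource.
  redeployment_trigger = "${timestamp()}"
}
```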
As described by @pjmyburg, what Terraform does is "delete" the stage and deploy a new version.
This causes the aws_api_gateway_usage_plan to become disconnected.
Using variables is not a good idea because it creates a short downtime for the stage.
It should only call create-deployment.
```
variables = {
  deploy_timestamp = "${timestamp()}"
}
```
Variables have the flag "ForceNew": https://github.com/terraform-providers/terraform-provider-aws/blob/348917ca6004bdcc3a3db333399c5a27772d8ccb/aws/resource_aws_api_gateway_deployment.go#L49
In my project I am using null_resource combined with api_gateway_deployment.
```
resource "aws_api_gateway_deployment" "api_deployment" {
rest_api_id = "${rest_api_id}"
stage_name = "development"
}
resource "null_resource" "api_deployment" {
depends_on = [
]
triggers {
uuid = "${uuid()}"
}
provisioner "local-exec" {
command = <
EOF
}
}
```
It works well, and I have not had any problems with stage settings being disconnected from the stage.
In general, it would be nice if aws_api_gateway_deployment only deleted the stage when the "stage" name changes. The rest should be a combination of Patch and CreateDeployment operations.
My workaround is to re-create the usage plan with each deployment creation:
resource "aws_api_gateway_usage_plan" "api_usage" {
name = "default-usage-plan-${aws_api_gateway_deployment.api_deployment.id}"
api_stages {
api_id = "${aws_api_gateway_rest_api.api.id}"
stage = "${aws_api_gateway_deployment.api_deployment.stage_name}"
}
}
Edit: no, still fails intermittently.
This seems to be a non-issue if we use the "stage" resource:
resource "aws_api_gateway_deployment" "default" {
depends_on = [
"aws_api_gateway_integration.any_integration"
]
rest_api_id = aws_api_gateway_rest_api.public_crud.id
stage_description = "Deployed at ${timestamp()}" # this attribute forces redeployment
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_stage" "default" {
stage_name = "default"
rest_api_id = aws_api_gateway_rest_api.public_crud.id
deployment_id = aws_api_gateway_deployment.default.id
client_certificate_id = var.client_certificate_id
}
resource "aws_api_gateway_usage_plan" "default" {
name = "default ${timestamp()}"
api_stages {
api_id = aws_api_gateway_rest_api.public_crud.id
stage = aws_api_gateway_stage.default.stage_name
}
}
Note that the "deployment" resource has no "stage_name" attribute.
We are experiencing this problem as well. In fact, we ran into the api_gateway bug and solved it (ha) by using timestamp() as a stage variable. We made the usage plan dependent on the api_gateway resource, but the dependency does not seem to impact the usage plan; roughly what we tried is sketched below.
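A minimal sketch of that attempt, reusing the resource names from the original report; depends_on only orders operations, so it does not re-attach the plan when the stage is destroyed and re-created:
```
resource "aws_api_gateway_usage_plan" "api_usage" {
  name = "My usage plan"

  api_stages {
    api_id = "${aws_api_gateway_rest_api.api.id}"
    stage  = "${aws_api_gateway_deployment.api_deployment.stage_name}"
  }

  # Ordering only: Terraform sees no diff in the usage plan when the
  # deployment is replaced, so the association that AWS silently drops
  # is never restored.
  depends_on = ["aws_api_gateway_deployment.api_deployment"]
}
```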
So....
Using the stage resource is not an option for us, as Swagger files are used to define the stages.
I will attempt the null resource method.
Either way, this bug still exists after 2 years :(
> Does anybody know any workaround except calling the aws cli directly?
@vitali-ausianik we've managed to add the following attribute to the plan resource:
```
description = filemd5("${path.module}/api_gateway.tf")
```
This creates a hash based on the file, so an update of this resource is forced whenever the file changes; we use the same strategy to trigger a deployment for the gateway. It is very similar to the timestamp approach, but more elegant in my opinion, given that it only forces a refresh when the file is modified rather than on every apply.
See https://www.terraform.io/docs/configuration/functions/filemd5.html
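A sketch of how that pattern might look applied to both resources; the file path and resource names here are illustrative, carried over from earlier in this thread:
```
resource "aws_api_gateway_deployment" "api_deployment" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  stage_name  = "development"

  # Re-deploy only when the API definition file actually changes.
  stage_description = filemd5("${path.module}/api_gateway.tf")
}

resource "aws_api_gateway_usage_plan" "api_usage" {
  name = "My usage plan"

  # Changing this hash forces an update of the usage plan in the
  # same run that re-deploys the stage.
  description = filemd5("${path.module}/api_gateway.tf")

  api_stages {
    api_id = aws_api_gateway_rest_api.api.id
    stage  = aws_api_gateway_deployment.api_deployment.stage_name
  }
}
```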