terraform-provider-aws: Not possible to upgrade to Terraform 0.12 when using the aws_lb_listener_certificate resource

Created on 23 May 2019 · 8 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

0.12.0

Affected Resource(s)

  • aws_lb_listener_certificate

Terraform Configuration Files

resource "aws_acm_certificate" "example" {
  domain_name       = "${var.uri_prefix}.${var.domain}"
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "example_dns_validation" {
  name    = aws_acm_certificate.example.domain_validation_options.0.resource_record_name
  type    = aws_acm_certificate.example.domain_validation_options.0.resource_record_type
  zone_id = data.aws_route53_zone.example.zone_id
  records = [aws_acm_certificate.example.domain_validation_options.0.resource_record_value]
  ttl     = 300
}

resource "aws_acm_certificate_validation" "example" {
  certificate_arn         = aws_acm_certificate.example.arn
  validation_record_fqdns = [aws_route53_record.example_dns_validation.fqdn]
}

resource "aws_lb_listener_certificate" "example" {
  listener_arn    = data.aws_lb_listener.example.arn
  certificate_arn = aws_acm_certificate.example.arn
  depends_on      = [aws_acm_certificate_validation.example]
}

Debug Output

Panic Output

Expected Behavior

aws_lb_listener_certificate should have been included in the state file.

Actual Behavior

aws_lb_listener_certificate is not included in the state file, and terraform apply fails with the following error message:

Error: Provider produced inconsistent result after apply

When applying changes to aws_lb_listener_certificate.example, provider
"aws" produced an unexpected new value for was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Steps to Reproduce

  1. Set up the Terraform configuration listed above together with an ALB (using Terraform >= 0.12.0)
  2. Run terraform apply

Important Factoids

References

  • #7761
service/elbv2 terraform-0.12

Most helpful comment

We have the same problem with _aws_iam_role_policy_attachment_.
It does attach the policy to the role (verified in aws console) but tf fails.

When applying changes to
module.ssm_patch_management.aws_iam_role_policy_attachment.attach_ssm_policy_to_role3,
provider "aws" produced an unexpected new value for was present, but now
absent.

Note that we actually attach 5 policies. It only complains on one.

All 8 comments

We have the same problem with _aws_iam_role_policy_attachment_.
It does attach the policy to the role (verified in aws console) but tf fails.

When applying changes to
module.ssm_patch_management.aws_iam_role_policy_attachment.attach_ssm_policy_to_role3,
provider "aws" produced an unexpected new value for was present, but now
absent.

Note that we actually attach 5 policies. It only complains on one.


I am currently having the same problem - is there any fix or workaround for this?

@neelam-007 In our case it was because we were referencing a deprecated AWS-provided policy. Check whether the policies you are attaching are AWS-provided and deprecated. If so, the policy attachment behaves very oddly.

A temporary workaround is to comment out the aws_lb_listener_certificate resource. As it's not in the state file, Terraform won't destroy it.

That only works if the resource has already been created, and it adds manual labour to the deploy. When creating a new resource after converting to 0.12, you get this error from the start (at which point you could comment the resource out and commit again).
Also, in our case we deploy the infrastructure to our dev environment and, when our automated tests pass, we go on to deploy it to the next environment, and so on.
Since all deploys use the same Terraform configuration (switching between Terraform workspaces), commenting the resource out would mean it never gets created in the second environment.
To counter this I would have to use a count 1:0 trick (see the sketch below), adding one more environment to the 0 side for each environment it has already been created in, making it very tedious to deploy a new service.

At this point it would make more sense to stop using the resource and make a CLI call to create it instead.
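
A minimal sketch of the count 1:0 trick mentioned above, assuming a per-workspace map variable; the variable name and workspace keys here are made up:

variable "create_listener_certificate" {
  type = map(bool)
  default = {
    dev  = true
    prod = false
  }
}

resource "aws_lb_listener_certificate" "example" {
  # Only create the attachment in workspaces where the toggle is true
  # (defaulting to true for workspaces not listed in the map).
  count           = lookup(var.create_listener_certificate, terraform.workspace, true) ? 1 : 0
  listener_arn    = data.aws_lb_listener.example.arn
  certificate_arn = aws_acm_certificate.example.arn

  depends_on = [aws_acm_certificate_validation.example]
}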

I had the same error with aws_iam_role_policy_attachment when attaching an Amazon-managed policy to a role. The problem was that the ARN was wrong (developed in Ireland, applied in a China region, where the partition is aws-cn, so the arn:aws:... ARN becomes invalid).

I believe this problem is mostly that the error message is obscure and unrelated. I tried applying the erroneous policy via the aws-cli to see the error it would return and apparently it returns nothing:

velko.ivanov@SOFMBL001 MINGW64 /d/prg/terraform-global/env_alpha_cn/access_points (env_alpha_cn|MERGING)
$ aws iam attach-role-policy --role-name b12tf-alpha-cn-ssh-gate-role --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore --region cn-northwest-1 --profile china

velko.ivanov@SOFMBL001 MINGW64 /d/prg/terraform-global/env_alpha_cn/access_points (env_alpha_cn|MERGING)
$ aws iam attach-role-policy --role-name b12tf-alpha-cn-ssh-gate-role --policy-arn arn:aws-cn:iam::aws:policy/AmazonSSMManagedInstanceCore --region cn-northwest-1 --profile china

velko.ivanov@SOFMBL001 MINGW64 /d/prg/terraform-global/env_alpha_cn/access_points (env_alpha_cn|MERGING)

It behaves the same when the ARN is the right one, too, so I believe the AWS provider's post-apply checks get thrown off somehow.

Same experience as @vivanov-dp, except applying an arn:aws:... managed AWS policy in GovCloud which should have been arn:aws-us-gov:....

Using the correct partition fixed the issue and did not display this error.

AWS provider v2.58.0
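
Building on the two comments above, a minimal sketch of deriving the partition from the aws_partition data source instead of hard-coding arn:aws:..., so the same configuration resolves to arn:aws-cn:... or arn:aws-us-gov:... as appropriate; the role reference is illustrative:

data "aws_partition" "current" {}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  # aws_iam_role.example is a placeholder for whatever role the managed
  # policy should be attached to.
  role       = aws_iam_role.example.name
  policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
}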

Finally had time to look into this. It turns out the error is not per se about persisting to state: the read function deletes the resource from state if the profile running Terraform doesn't have the elasticloadbalancing:DescribeListenerCertificates permission. There is no error message except in debug mode; the resource is just silently removed.
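
Based on that finding, a minimal sketch of granting the missing permission to whatever principal runs Terraform; the policy and role names are hypothetical:

resource "aws_iam_role_policy" "describe_listener_certificates" {
  # aws_iam_role.terraform is a placeholder for the role that runs Terraform.
  name = "describe-listener-certificates"
  role = aws_iam_role.terraform.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["elasticloadbalancing:DescribeListenerCertificates"]
        Resource = "*"
      }
    ]
  })
}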
