Terraform-provider-aws: terraform refresh fails for aws_launch_configuration with InvalidAMIID.NotFound

Created on 13 Jun 2017 · 26 Comments · Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @matt-deboer as hashicorp/terraform#13433. It was migrated here as part of the provider split. The original body of the issue is below._


Terraform Version

0.9.2

Affected Resource(s)

  • aws_launch_configuration

Terraform Configuration Files [example]

```hcl
...
# get the AMI ID from the latest Packer build
data "aws_ami" "widget" {
  filter {
    name   = "name"
    values = ["widget-template"]
  }
  owners = ["self"]
}

resource "aws_launch_configuration" "widget" {
  ...
  image_id = "${data.aws_ami.widget.id}"
  ...
}
```

Error output:

```
Error refreshing state: 1 error(s) occurred:

* module.aws_agent.aws_launch_configuration.widget: aws_launch_configuration.widget: InvalidAMIID.NotFound: The image id '[ami-xxxxxxxx]' does not exist
    status code: 400, request id: a892c167-7942-4781-8af8-8f62dc57437a
```

Expected Behavior

terraform refresh should be able to succeed, even when the AMI associated with the current aws_launch_configuration has been deleted.
terraform plan should show the resources (and affected dependencies) as needing change/rebuild.

Actual Behavior

terraform refresh fails with the error shown above.

Steps to Reproduce

  1. Construct a Terraform configuration using an aws_launch_configuration which pulls its image_id from a data source, as in the example.
  2. Run terraform apply to create the resources.
  3. Delete the AMI (replacing it with a new/updated version).
  4. Run terraform refresh on the stack.

Important Factoids

Our typical process involves the following steps:

  1. Build AMI with packer, and upload to AWS
  2. Provision ASGs using an aws_ami data source to reference the latest version of the AMI built in packer
  3. More recently, we've started cleaning up old AMIs that are no longer needed, including some of the AMIs produced in step 1, after we have uploaded a newer version of that AMI.
    -- we did confirm that there is exactly one version of the particular AMI present, but none of the previous versions.
  4. We're able to work around this by using terraform state rm on the affected aws_launch_configuration instances (since they'll be re-created anyway)
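For reference, the "latest AMI" lookup in step 2 is typically written with most_recent = true on the aws_ami data source. A sketch only — the name pattern widget-template-* is an assumed naming convention, not from the original report:

```hcl
# Sketch: select the newest self-owned AMI matching a name pattern.
# "widget-template-*" is an illustrative naming convention.
data "aws_ami" "widget" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["widget-template-*"]
  }
}
```

With most_recent set, rebuilding the AMI in Packer makes the data source resolve to the new image on the next plan — which is exactly the workflow that leaves the old AMI deletable and triggers this refresh error.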
Labels: bug, service/ec2

Most helpful comment

I ran into this issue; my workaround was to delete the state for the impacted launch configuration:

terraform state rm module.asg.aws_launch_configuration.widget

When I ran terraform apply, it created a new launch configuration, and I had to manually delete the old one.

All 26 comments

It would be nice to fix this; it is a pain to fix once encountered, especially with a large number of launch configurations. Happens as of 0.10.6.

This is still happening in v0.11.1.

This is also still happening in v0.11.6.

Same for v0.11.7

This is still happening in Terraform 0.11.6, AWS provider 1.14.

@argusua @saiya Are you still facing the issue? I couldn't reproduce it. If so, please provide the steps in detail.

I used a clean VPC and tried creating/modifying/deleting aws_launch_configuration resources and AMIs, but still failed to reproduce the issue...

Hi, @saravanan30erd
I can't reproduce now with:
Terraform v0.11.7

  • provider.aws v1.24.0
  • provider.template v1.0.0

@bflad Looks like the issue is fixed, please verify and close the issue.

This is still an issue for me with:

  • provider.aws v1.24.0
  • Terraform v0.11.7.

I get:

```
Error: Error refreshing state: 1 error(s) occurred:

* module.asg.aws_launch_configuration.this: 1 error(s) occurred:

* module.asg.aws_launch_configuration.this: aws_launch_configuration.this: InvalidAMIID.NotFound: The image id '[ami-xxxxxxxx]' does not exist
    status code: 400, request id: aea9c0e9-5274-46ea-ac21-703052566614
```

I still get this with the latest Terraform and AWS provider (according to brew upgrade and terraform init -upgrade):

Terraform v0.11.7
+ provider.aws v1.27.0
+ provider.template v1.0.0

The issue is most probably in this part of the code, in resourceAwsLaunchConfigurationRead:
https://github.com/terraform-providers/terraform-provider-aws/blob/fd9a3f78c2a83c4821b6cf143ee19cc32002c796/aws/resource_aws_launch_configuration.go#L557-L559

This is where it tries to read the block devices of the AMI during resource refresh; that call fails when the AMI no longer exists.

Any ideas on how this can be fixed? Should we just drop the launch configuration from state with d.SetId("") when we get the specific InvalidAMIID.NotFound error during refresh?

Still the case with

Terraform v0.11.8
+ provider.aws v1.33.0
+ provider.terraform v0.1.0

PS. This is a regression. It worked in the past: I updated my AMIs, changed my Terraform configuration, and then deleted the old AMI (without running TF first).

This still seems to be an issue:

```
Error: Error refreshing state: 2 error(s) occurred:

* module.WURFLjs-ap-southeast-1.aws_launch_configuration.WURFLjs-go-green: 1 error(s) occurred:

* module.WURFLjs-ap-southeast-1.aws_launch_configuration.WURFLjs-go-green: aws_launch_configuration.WURFLjs-go-green: InvalidAMIID.NotFound: The image id '[ami-xxxxx]' does not exist
    status code: 400, request id: ae8a4a35-6ace-4732-92e1-478a4b03c575

* module.WURFLjs-us-east-2.aws_launch_configuration.WURFLjs-go-green: 1 error(s) occurred:

* module.WURFLjs-us-east-2.aws_launch_configuration.WURFLjs-go-green: aws_launch_configuration.WURFLjs-go-green: InvalidAMIID.NotFound: The image id '[ami-xxxxx]' does not exist
    status code: 400, request id: 7d39284f-50b1-4361-9a9b-2d1b0cf7a0ad
```

Terraform v0.11.8

  • provider.aws v1.38.0

This issue is still occurring for terraform v0.11.10, aws provider version 1.4 and template version 1.0

Still a bug:

Terraform v0.11.10
+ provider.aws v1.50.0

Anyone have a workaround for this? I tried tainting the resource in the state file, but received an error that the resource cannot be tainted.

Looks like this is still an issue; fingers crossed it's addressed before the issue's two-year anniversary.

Tried deleting it out of state to no avail; looks like an API call is the culprit. Going to try a few other ideas.

I ran into this issue; my workaround was to delete the state for the impacted launch configuration:

terraform state rm module.asg.aws_launch_configuration.widget

When I ran terraform apply, it created a new launch configuration, and I had to manually delete the old one.

The same issue happened to me with Terraform v0.11.14 + provider.aws v2.22.0. Unfortunately I'm unable to reproduce it with a plain Terraform snippet that can be posted here. It appears to be happening under particular circumstances, which I don't know at the moment.

Also experienced this issue

Terraform v0.11.14
+ provider.aws v2.32.0

Hello guys,

Same behavior with:

$ terraform --version
Terraform v0.12.12

Still occurring with:

Terraform v0.12.13
+ provider.aws v2.29.0

Still still occurring with:

Terraform v0.12.24
+ provider.aws v2.56.0
+ provider.template v2.1.2

Seeing same issue still with:

Terraform v0.12.24
+ provider.aws v2.61.0

Does anyone have updates on this issue?

I also found the same issue with
Terraform v0.12.25

But I worked around it with a trick: I created a local that contains the AMI ID of the image built by Packer, like this:

```hcl
locals {
  ami_ID = data.aws_ami.<AMI_resource_name>.id
}
```

Define the locals block in the root module or file, just before using the AWS launch template and after the aws_ami data block.

Then, in the aws_launch_template, set image_id to the local, like this:

```hcl
resource "aws_launch_template" "ALT" {
  name     = "NEW-ALT"
  image_id = local.ami_ID
  ............
}
```

And this works just great.
