Packer: Timeout waiting for long running Amazon EBS AMI copies on v1.2.5

Created on 25 Jul 2018  ·  18 Comments  ·  Source: hashicorp/packer

This is probably more a lack of documentation around the retry attempts and polling delay than a functional issue. Also, the error should be more verbose by default. The output below was captured with PACKER_LOG=1 set.

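For reference, that logging was enabled on the command line; a minimal invocation looks roughly like the following (the template filename is illustrative):

# Enable Packer's debug logging (written to stderr unless PACKER_LOG_PATH is set)
PACKER_LOG=1 packer build template.json
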
[14:22:45]  ==> amazon-ebs: Waiting for AMI to become ready...
[14:22:45]  2018/07/25 04:22:45 packer.exe: 2018/07/25 04:22:45 No AWS timeout and polling overrides have been set. Packer will defalt to waiter-specific delays and timeouts. If you would like to customize the length of time between retries and max number of retries you may do so by setting the environment variables AWS_POLL_DELAY_SECONDS and AWS_MAX_ATTEMPTS to your desired values.
[14:33:19]  2018/07/25 04:33:19 packer.exe: 2018/07/25 04:33:19 Error waiting for AMI: ResourceNotReady: exceeded wait attempts
[14:33:19]  ==> amazon-ebs: Error waiting for AMI. Reason: <nil>
[14:33:19]  2018/07/25 04:33:19 ui error: ==> amazon-ebs: Error waiting for AMI. Reason: <nil>
[14:33:19]  2018/07/25 04:33:19 ui: ==> amazon-ebs: Terminating the source AWS instance...
[14:33:19]  ==> amazon-ebs: Terminating the source AWS instance...
[14:33:19]  2018/07/25 04:33:19 packer.exe: 2018/07/25 04:33:19 No AWS timeout and polling overrides have been set. Packer will defalt to waiter-specific delays and timeouts. If you would like to customize the length of time between retries and max number of retries you may do so by setting the environment variables AWS_POLL_DELAY_SECONDS and AWS_MAX_ATTEMPTS to your desired values.
[14:33:36]  2018/07/25 04:33:36 ui: ==> amazon-ebs: Cleaning up any extra volumes...
[14:33:36]  ==> amazon-ebs: Cleaning up any extra volumes...
[14:33:36]  2018/07/25 04:33:36 ui: ==> amazon-ebs: Deleting temporary security group...
[14:33:36]  ==> amazon-ebs: Deleting temporary security group...
[14:33:37]  ==> amazon-ebs: Deleting temporary keypair...
[14:33:37]  2018/07/25 04:33:37 ui: ==> amazon-ebs: Deleting temporary keypair...
[14:33:37]  2018/07/25 04:33:37 [INFO] (telemetry) ending amazon-ebs
[14:33:37]  Build 'amazon-ebs' errored: Error waiting for AMI. Reason: <nil>
[14:33:37]  2018/07/25 04:33:37 ui error: Build 'amazon-ebs' errored: Error waiting for AMI. Reason: <nil>
[14:33:37]  2018/07/25 04:33:37 Builds completed. Waiting on interrupt barrier...
[14:33:37]  
[14:33:37]  2018/07/25 04:33:37 machine readable: error-count []string{"1"}
[14:33:37]  ==> Some builds didn't complete successfully and had errors:
[14:33:37]  2018/07/25 04:33:37 ui error:
[14:33:37]  --> amazon-ebs: Error waiting for AMI. Reason: <nil>
[14:33:37]  ==> Some builds didn't complete successfully and had errors:
[14:33:37]  
[14:33:37]  2018/07/25 04:33:37 machine readable: amazon-ebs,error []string{"Error waiting for AMI. Reason: <nil>"}
[14:33:37]  ==> Builds finished but no artifacts were created.
[14:33:37]  2018/07/25 04:33:37 ui error: --> amazon-ebs: Error waiting for AMI. Reason: <nil>
Labels: bug, builder/amazon, duplicate, regression

Most helpful comment

Yeah, setting AWS_MAX_ATTEMPTS and AWS_POLL_DELAY_SECONDS was the resolution in the environment I'm working in.

The copy would take ~20 minutes total, but I set the values of each env var to 60 and 60 respectively (so 1 hour of polling at 1 minute intervals), which worked well for me.

All 18 comments

I have the same issue

I'm facing the same issue copying the generated AMIs to different AWS regions, using the same Packer version (1.2.5).

2018/07/25 12:43:12 ui: ==> amazon-ebs: Deregistering the AMI because cancellation or error...
==> amazon-ebs: * Error waiting for AMI (ami-c6cf96be) in region (us-west-2): ResourceNotReady: exceeded wait attempts
==> amazon-ebs: Deregistering the AMI because cancellation or error...
2018/07/25 12:43:12 ui: ==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Terminating the source AWS instance...
2018/07/25 12:43:13 packer: 2018/07/25 12:43:13 No AWS timeout and polling overrides have been set. Packer will defalt to waiter-specific delays and timeouts. If you would like to customize the length of time between retries and max number of retries you may do so by setting the environment variables AWS_POLL_DELAY_SECONDS and AWS_MAX_ATTEMPTS to your desired values.
==> amazon-ebs: Cleaning up any extra volumes...
2018/07/25 12:43:28 ui: ==> amazon-ebs: Cleaning up any extra volumes...
2018/07/25 12:43:28 ui: ==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: No volumes to clean up, skipping
2018/07/25 12:43:28 ui: ==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary security group...
2018/07/25 12:43:28 [INFO] (telemetry) ending amazon-ebs
2018/07/25 12:43:28 ui error: Build 'amazon-ebs' errored: 1 error(s) occurred:

* Error waiting for AMI (ami-c6cf96be) in region (us-west-2): ResourceNotReady: exceeded wait attempts
2018/07/25 12:43:28 Builds completed. Waiting on interrupt barrier...
2018/07/25 12:43:28 machine readable: error-count []string{"1"}
2018/07/25 12:43:28 ui error:
==> Some builds didn't complete successfully and had errors:
2018/07/25 12:43:28 machine readable: amazon-ebs,error []string{"1 error(s) occurred:\n\n* Error waiting for AMI (ami-c6cf96be) in region (us-west-2): ResourceNotReady: exceeded wait attempts"}
2018/07/25 12:43:28 ui error: --> amazon-ebs: 1 error(s) occurred:

* Error waiting for AMI (ami-c6cf96be) in region (us-west-2): ResourceNotReady: exceeded wait attempts
2018/07/25 12:43:28 ui:
==> Builds finished but no artifacts were created.
2018/07/25 12:43:28 [INFO] (telemetry) Finalizing.
Build 'amazon-ebs' errored: 1 error(s) occurred:

* Error waiting for AMI (ami-c6cf96be) in region (us-west-2): ResourceNotReady: exceeded wait attempts

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: 1 error(s) occurred:

* Error waiting for AMI (ami-c6cf96be) in region (us-west-2): ResourceNotReady: exceeded wait attempts

==> Builds finished but no artifacts were created.
2018/07/25 12:43:29 waiting for all plugin processes to complete...
2018/07/25 12:43:29 /usr/local/bin/packer: plugin process exited

I think we'll try to bump the defaults, but please try increasing AWS_MAX_ATTEMPTS as a workaround.

This is probably due to the same underlying issues as #6526, so I'm going to mark as a duplicate. Not closing because this seems to affect enough people that I want to have an issue open for them to find.

Please let us know if setting the env variable AWS_MAX_ATTEMPTS helps, and what you set it to. It'll help us figure out the best default value.

Yeah, setting AWS_MAX_ATTEMPTS and AWS_POLL_DELAY_SECONDS was the resolution in the environment I'm working in.

The copy would take ~20 minutes total, but I set the values of each env var to 60 and 60 respectively (so 1 hour of polling at 1 minute intervals), which worked well for me.
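A minimal sketch of that workaround, in case it helps anyone else (the template filename is illustrative; the values are the ones I used):

# Poll every 60 seconds, for up to 60 attempts (~1 hour) before giving up
export AWS_POLL_DELAY_SECONDS=60
export AWS_MAX_ATTEMPTS=60
packer build template.json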

👍 Have this issue copying AMI from us-west-2 to eu-central-1. This issue cropped up when I updated to 1.2.5. Did not occur in 1.2.4.

So it sounds like for most people the waiter whose defaults are too low is the AMI Copy one. I can manually modify the defaults on that waiter, or we can try to give feedback to AWS to bump the defaults in their SDK.

brew switch packer 1.2.4 fixed the problem for me

I am seeing this issue as well - I will try the AWS_MAX_ATTEMPTS and AWS_POLL_DELAY_SECONDS settings and report back.

any updates?

@chenghuang-mdsol Not really; I'm still waiting for more user input about what values of AWS_MAX_ATTEMPTS and AWS_POLL_DELAY_SECONDS are preventing this issue. I still intend to address this by the next release, but I'm letting this collect user input in the meantime.

Okay, I figured out why this wait was affecting so many of you; check out the linked PR for more details. I'm attaching a Linux build of #6601, which should have the timeout set higher: 30 minutes instead of 10. Let me know if this is still too low.
packer.zip

Can any of you comment as to whether #6601 solved your problem?

I'm not currently working in the environment where I was experiencing this issue.

I'll try and spin up a test AMI using the existing build (to replicate the issue), and then using this build as soon as I get the chance.

EDIT: Testing now. I'll let you know how it goes.

I have to use Docker in the AWS build environment where I'm having this issue, so I'm pulling down the latest official version each time. Is there a ready-made Docker image of preview builds by any chance?
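
(Side note: the environment-variable workaround can still be passed into a container. A rough sketch, assuming the official hashicorp/packer image with the packer binary as its entrypoint; the tag, mount path, and template name are illustrative:)

# Pass the waiter overrides into a containerized packer build
docker run \
  -e AWS_POLL_DELAY_SECONDS=60 \
  -e AWS_MAX_ATTEMPTS=60 \
  -v "$(pwd)":/workspace -w /workspace \
  hashicorp/packer:1.2.5 build template.json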

@SwampDragons I can confirm that I replicated the issue again in Packer v1.2.5, and it appears to be resolved in the build which you posted above. Seems like #6601 does fix the problem for me.

Thanks.

@embusalacchi nope, sorry.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
