On v1.3.1, we're seeing the error below at the very end of AMI creation. It happens on roughly 20% of our AMI builds. We've set AWS_POLL_DELAY_SECONDS=15. For reference, we've never seen this error on v1.2.5.
Is there a setting we should adjust, or is this something that was introduced in the latest version?
==> amazon-ebs: Waiting for AMI copy to become ready...
==> amazon-ebs: Error waiting for AMI Copy: ResourceNotReady: exceeded wait attempts
==> amazon-ebs: Deregistering the AMI because cancellation or error...
==> amazon-ebs: Cancelling the spot request...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
Build 'amazon-ebs' errored: Error waiting for AMI Copy: ResourceNotReady: exceeded wait attempts
Template:
https://gist.github.com/rtopiwala-oportun/f8571943648f908c224fa8babaecc2ed
After test runs of 50+ builds, I can safely say that removing AWS_POLL_DELAY_SECONDS=15 from the env vars in our script has resolved the error. Hopefully this helps someone with a similar issue.
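If removing the variable isn't an option, pairing it with an explicit attempt count so the total wait budget still covers a slow AMI copy should have the same effect. A rough sketch using the documented polling env vars (the values are illustrative, not tested recommendations, and the template name is hypothetical):

# Keep the 15s poll interval but raise the attempt count:
# 15s x 300 attempts = 75 minutes of total waiting on the AMI copy.
export AWS_POLL_DELAY_SECONDS=15
export AWS_MAX_ATTEMPTS=300
packer build our-template.json   # hypothetical template name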
Thanks for reporting this. I'm going to reopen because I think it means something is wacky with our defaulting.
@SwampDragons With amazon-import I needed to increase the timeouts to complete even a basic import.
@rickard-von-essen do you have a suggestion for a better wait default?
I'll do some testing now.
Just some numbers: importing a minimal (bento) CentOS 7 image takes 27m1s. Of this, only 1m43s is the upload to S3; the rest is spent waiting on the AWS AMI import. I think we need to wait at least 1h in the "Waiting for task import-ami-... to complete (may take a while)" step.
1h seems fair. I'll adjust the defaults.
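Until the new defaults ship, something along these lines should stretch the waiter to roughly an hour for the import step (again, assumed values, using the same AWS_POLL_DELAY_SECONDS / AWS_MAX_ATTEMPTS variables):

# Assumed values: 30s polls x 120 attempts = ~60 minutes of waiting
# on the import-ami task before Packer gives up.
export AWS_POLL_DELAY_SECONDS=30
export AWS_MAX_ATTEMPTS=120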
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.