Packer: ResourceNotReady: failed waiting for successful resource state

Created on 6 Aug 2019  ·  11 Comments  ·  Source: hashicorp/packer

Hello,
I have tried Packer versions 1.3, 1.4, 1.4.1, 1.4.2, and the current nightly build, but I am still running into the same error I reported before.

==> ami: Copying/Encrypting AMI (ami-) to other regions...
    ami: Copying to: us-east-1
    ami: Waiting for all copies to complete...
==> ami: 1 error(s) occurred:
==> ami:
==> ami: * Error waiting for AMI (ami-) in region (us-east-1): ResourceNotReady: failed waiting for successful resource state
I set these environment variables prior to running the packer build:

  • export AWS_POLL_DELAY_SECONDS=30
  • export AWS_MAX_ATTEMPTS=3000
  • export AWS_TIMEOUT_SECONDS=3000
  • export TMPDIR=/home/$USER/tmp

I also ensured that my user has kms:* permissions and that I am using the right KMS key ID, not the ARN.
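For reference, the full invocation looks roughly like this; the template file name encrypted-ami.json is just a placeholder, and PACKER_LOG=1 is only there to capture debug output:

    # AWS waiter tuning read by Packer's amazon builders
    export AWS_POLL_DELAY_SECONDS=30
    export AWS_MAX_ATTEMPTS=3000
    export AWS_TIMEOUT_SECONDS=3000
    export TMPDIR=/home/$USER/tmp

    # Run the build with debug logging (template name is a placeholder)
    PACKER_LOG=1 packer build encrypted-ami.json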

Any help would be much appreciated.

bug builder/amazon

All 11 comments

Does the AMI ever show up in us-east-1? Or is there any error message in the copy job?


The AMI does show up in us-east-1, but when the copy fails it is registered in the AWS console with a status of failed and without the expected name. The state reason is recorded as "Copy image failed with an internal error." Is this a known issue?
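For what it's worth, the same failure reason can be read back from the CLI once the copy has failed; the AMI ID below is a placeholder:

    # Show the state and failure reason of the copied AMI in the destination region
    aws ec2 describe-images \
        --region us-east-1 \
        --image-ids ami-0123456789abcdef0 \
        --query 'Images[].[State,StateReason.Message]' \
        --output text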

I was running into the same problem, and it ended up being due to a combination of 1) needing to use the key ARN, since the copy was going across accounts, and 2) the kms:CreateGrant permission being required.
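For illustration, a key-policy statement along these lines covers what an encrypted cross-account copy typically needs; the account ID, role name, and statement ID are placeholders, so adjust them to your setup:

    {
      "Sid": "AllowPackerEncryptedAmiCopy",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/packer-build" },
      "Action": [
        "kms:DescribeKey",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant"
      ],
      "Resource": "*"
    }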

We started seeing this problem today on a configuration that hasn’t been touched for months, even though we had been using it every couple of days.

@vvucetic that sounds a lot like your kms key just expired.

But I’m not copying an encrypted AMI; I don’t think a KMS key is in use here at all.
I’ll try increasing the timeout tomorrow just to rule that out.

So, today it went fine. Copying to 3 regions took 10 minutes, while yesterday it was running out of attempts after 30 minutes. I'm closing my case as an AWS-side performance issue. Thanks for the help!

I think in order to keep this issue open and investigate further we need an example of a Packer template that you're having trouble with; we need the simplest possible template that reproduces the issue, so that I can narrow it down to specific kinds of region copies, or copies across users, or whatever. Obviously you can remove any credentials and I'll generate my own to try to replicate. If you can provide that I'll try to take a closer look at this issue.
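For anyone putting such a reproduction together, a stripped-down template of the shape in question would look something like this; every AMI ID, key ID, and name below is a placeholder:

    {
      "builders": [
        {
          "type": "amazon-ebs",
          "region": "us-west-2",
          "source_ami": "ami-0123456789abcdef0",
          "instance_type": "t3.micro",
          "ssh_username": "ec2-user",
          "ami_name": "encrypted-copy-test-{{timestamp}}",
          "ami_regions": ["us-east-1"],
          "encrypt_boot": true,
          "kms_key_id": "1234abcd-12ab-34cd-56ef-1234567890ab",
          "region_kms_key_ids": {
            "us-east-1": "abcd1234-34cd-56ef-12ab-567890ab1234"
          }
        }
      ]
    }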

In my case the problem went away the next day, so I assume it was due to performance issues on the AWS side that day.

👍 Thanks @vvucetic. I'll leave this open for another week or so to give the OP a chance to update the ticket, then I'll close it and assume they, too, hit a transient server-side error.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
