Aws-cli: aws ec2 wait conversion-task-completed times out

Created on 17 Apr 2015 · 19 comments · Source: aws/aws-cli

I'm trying to migrate to the unified CLI, and to use waiters instead of polling loops, but the conversion-task-completed waiter consistently times out in us-east-1 for my 12 GB disk.

It reports:

    Waiter ConversionTaskCompleted failed: Max attempts exceeded

but the task does complete successfully. I can write a loop that calls the waiter, but I'd prefer to increase the number of attempts or reduce their frequency. These values appear to come from botocore/data/aws/ec2/2014-09-01.waiters.json, which contains the following:

"ConversionTaskCompleted": { "delay": 15, "operation":
"DescribeConversionTasks", "maxAttempts": 40,

I'd like these to be exposed as parameters in the aws cli command, or perhaps for the timeout to be derived from the size of the disk being converted.

feature-request

Most helpful comment

We received this 'failure' last night during some maintenance work.
    17:58:41 aws rds wait db-snapshot-completed --db-snapshot-identifier mynewsnapshot --db-instance-identifier mydbinstance
    18:08:29 RDS: Waiter DBSnapshotCompleted failed: Max attempts exceeded

The DB we were doing the snapshot on was only about 10GB in size.

Any chance this 4-year-old feature request will be implemented soon?

All 19 comments

@arvan-pritchard

Being able to modify the delays and max attempts is something that we want to expose. We just need to put in the work to expose the parameters. As for your timeout, how many invocations does it take before your wait command succeeds (i.e. doesn't time out)? In the meantime, we can bump up the delay and max attempts for that particular waiter. We just need to know how far off the wait is currently.

Sorry, I've not measured it. While I was trying to use a single invocation of the waiter it failed 3 times, but the task was near or fully complete by the time I noticed and checked the conversion task status manually.

Our previous code using the old APIs allowed 1 hour for this step, and I have now coded a loop that repeats the waiter for up to an hour; that works.
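For reference, a minimal sketch of that kind of wrapper is below; it is not the original script, and the conversion task ID and the one-hour budget are placeholders:

    #!/usr/bin/env bash
    # Keep re-invoking the waiter until it succeeds or an overall deadline passes.
    # Each individual "aws ec2 wait" call gives up after 40 attempts x 15 s
    # (about 10 minutes), so retrying it in a loop extends the effective timeout.
    task_id="import-i-xxxxxxxx"            # placeholder conversion task ID
    deadline=$(( $(date +%s) + 3600 ))     # allow up to one hour in total

    until aws ec2 wait conversion-task-completed --conversion-task-ids "$task_id"; do
        if (( $(date +%s) >= deadline )); then
            echo "Conversion task $task_id did not complete within an hour" >&2
            exit 1
        fi
        echo "Waiter gave up, retrying..."
    done
    echo "Conversion task $task_id completed"

Note that the waiter also exits non-zero if the task itself fails, so a real script should distinguish that case from a plain timeout rather than retrying until the deadline.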

While developing the script I used the Frankfurt region and never saw the waiter time out, so
I'd guess that it probably completes on the second wait invocation.

No worries. We will look into increasing the delay time or max attempts and enabling the adjustment of these parameters as well.

I'm currently encountering this timeout issue on the similar aws ec2 wait image-available operation. Is this still unresolved?

@wjordan we get around it by retrying the wait in a loop.

As mentioned in one of the comments above, similar behavior is observed using image-available:

aws ec2 wait image-available --image-ids ${image-id} --filter "Name=state,Values=available"

This times out after 20 minutes or so.

+1 on this. I've been having issues with aws ec2 wait volume-available and aws ec2 wait snapshot-completed timing out as well for large volumes (75 GB+), which makes this feature completely useless.

We ultimately solved this by switching to CloudFormation, via troposphere :)

Until it's fixed upstream, I solved it by explicitly checking the volume state and snapshot progress myself. Feel free to copy/paste from here.
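The linked code isn't reproduced here, but the approach boils down to polling describe-snapshots (and, analogously, describe-volumes) yourself instead of relying on the waiter's fixed budget. A rough sketch, with the snapshot ID and poll interval as placeholders:

    #!/usr/bin/env bash
    # Poll the snapshot state and progress directly until it completes or fails,
    # with no fixed attempt limit. Snapshot ID and sleep interval are placeholders.
    snapshot_id="snap-xxxxxxxx"

    while true; do
        read -r state progress < <(aws ec2 describe-snapshots --snapshot-ids "$snapshot_id" \
            --query 'Snapshots[0].[State,Progress]' --output text)
        echo "snapshot $snapshot_id: $state ($progress)"
        [ "$state" = "completed" ] && break
        if [ "$state" = "error" ]; then
            echo "snapshot $snapshot_id failed" >&2
            exit 1
        fi
        sleep 30
    done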

Good Morning!

We're closing this issue here on GitHub, as part of our migration to UserVoice for feature requests involving the AWS CLI.

This will let us get the most important features to you, by making it easier to search for and show support for the features you care the most about, without diluting the conversation with bug reports.

As a quick UserVoice primer (if not already familiar): after an idea is posted, people can vote on the ideas, and the product team will be responding directly to the most popular suggestions.

We've imported existing feature requests from GitHub - search for this issue there!

And don't worry, this issue will still exist on GitHub for posterity's sake. As it's a text-only import of the original post into UserVoice, we'll still be keeping in mind the comments and discussion that already exist here on the GitHub issue.

GitHub will remain the channel for reporting bugs.

Once again, this issue can now be found by searching for the title on: https://aws.uservoice.com/forums/598381-aws-command-line-interface

-The AWS SDKs & Tools Team

Based on community feedback, we have decided to return feature requests to GitHub issues.

Did a fix come for this issue?
I still get: Waiter SnapshotCompleted failed: Max attempts exceeded
when trying to wait for a snapshot creation to complete. My snapshot is 70 GB, so it does take a while to finish.

Second on this. I have volumes up in the TB range. Am seeing:

    Waiter SnapshotCompleted failed: Max attempts exceeded

when attempting to take a snapshot. Not sure how to proceed from here.

Same here for RDS snapshots. ;(

same here.

Same here for CloudFront invalidation.

We received this 'failure' last night during some maintenance work.
    17:58:41 aws rds wait db-snapshot-completed --db-snapshot-identifier mynewsnapshot --db-instance-identifier mydbinstance
    18:08:29 RDS: Waiter DBSnapshotCompleted failed: Max attempts exceeded

The DB we were doing the snapshot on was only about 10GB in size.

Any chance this 4-year-old feature request will be implemented soon?
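Until the waiter's delay and max attempts are configurable, the same kind of workaround applies to RDS: poll the snapshot status directly rather than relying on the waiter's fixed budget. A rough sketch, reusing the snapshot identifier from the comment above and a placeholder poll interval:

    #!/usr/bin/env bash
    # Poll the RDS snapshot status until it becomes "available", with no fixed
    # attempt limit. Identifier and sleep interval are placeholders.
    snapshot_id="mynewsnapshot"

    until [ "$(aws rds describe-db-snapshots --db-snapshot-identifier "$snapshot_id" \
                --query 'DBSnapshots[0].Status' --output text)" = "available" ]; do
        echo "snapshot $snapshot_id is not available yet, sleeping..."
        sleep 30
    done
    echo "snapshot $snapshot_id is available"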

Same here for ecs wait services-stable with the default deregistration delay.

Hi there, is there any update on this issue? It would be great to have the number of attempts for aws ecs wait services-stable be configurable. Thanks
