Since boto/botocore#1260, users can configure the max retry attempts for any client call through the botocore.config.Config object using the retries={'max_attempts': 10} config option. I'd like this configuration option to be available through aws-cli, e.g., via a --max-attempts 10 command-line argument and/or a configuration parameter in ~/.aws/config.
@wjordan - Thank you for reaching out. I have marked this as a feature request pending further review and additional 👍.
My deployment system breaks for the single reason that it keeps hitting Throttling errors on the CLI. We definitely need further configurability of this.
Ideally, we should also be able to use exponential backoff/custom retry as is available in the Node SDK.
I am also facing the throttling issue while updating route53 record sets through the AWS CLI. This feature would definitely help.
My use case: I have several jobs that run in parallel and often trip API throttling limits. These operations are not time-sensitive and can tolerate large backoff time; it is more important that they are successful. For these types of jobs, being able to easily configure a custom backoff base time or specify max retries would be very helpful.
This feature (or any feature that allows configuring retry in CLI) would save my team a lot of work. Without it, we have to implement some custom backoff/retry or similar.
Our use case is jobs that use the CLI; we don't care how slow the jobs are, but we can't have them failing due to throttling.
This would also be useful for local development when testing error handling. It's possible to set it in a boto-specific config file, but then that applies to all calls one would make:
https://boto.readthedocs.io/en/2.6.0/boto_config_tut.html
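For reference, a minimal sketch of what that boto-specific setting looks like (this is boto 2 syntax from the linked docs, written here via a shell heredoc; whether the AWS CLI honors it at all is exactly the open question raised in the next comment):

# Hypothetical: write a boto 2-style config file with a higher retry count.
# num_retries is documented for boto 2; botocore/aws-cli may ignore this file entirely.
cat > ~/.boto <<'EOF'
[Boto]
num_retries = 10
EOF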
@waliferus Have you tried using the boto config? I can't find any references indicating that it would actually work together with aws-cli.
@patrickjahns I ended up getting something that worked by disgustingly patching the retry count in the config files of boto (_config.json or something in the source code), so I guess if they have added some user configuration there, that may indeed be a good lead (hopefully you can set this in a non-ugly way).
@Ten0 could you share what you patched and how? Lack of this feature is killing me :-|
I was hoping the lack of it for so long would eventually get AWS to publish a proper option for this but nevermind. :(
I have no idea how we're supposed to be able to make a simple deployment pipeline that works consistently using the CLI without this feature...
In /usr/local/lib/python3.5/dist-packages/botocore/data/_retry.json (the path may differ depending on your Python version and operating system), around line 90 there's something called max_attempts that is set to 5, which I set to 50.
"retry": {
"__default__": {
"max_attempts": 50,
"delay": {
"type": "exponential",
"base": "rand",
"growth_factor": 2
},
Of course this is a patch that has to be re-applied regularly, because any update to botocore erases it. In our deployment system, we use a patch command in the Dockerfile of the system that runs the deployment, right after the command that installs the awscli.
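A minimal sketch of such a post-install step (hypothetical; the file path is resolved via Python rather than hard-coded, and the sed pattern assumes the default value is still 5 — the next comment shows a jq-based variant of the same idea):

# Hypothetical Dockerfile RUN step (or shell script) applied right after installing awscli.
# Locate botocore's bundled _retry.json instead of hard-coding the site-packages path.
RETRY_JSON=$(python3 -c "import botocore, os; print(os.path.join(os.path.dirname(botocore.__file__), 'data', '_retry.json'))")
# Bump the default max_attempts from 5 to 50 (assumes the file still reads "max_attempts": 5).
sed -i 's/"max_attempts": 5/"max_attempts": 50/' "$RETRY_JSON"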
This is what I had to do for this. I'm gonna burn in hell. But thanks @Ten0 for the idea, which actually works.
TEMP_FILE=$(mktemp)
BOTO_FILE_YUCK=$(find /usr/local/lib/python*/site-packages/botocore/data/_retry.json || exit 1)
jq '.retry.__default__.max_attempts = 50' ${BOTO_FILE_YUCK} > ${TEMP_FILE} && mv ${TEMP_FILE} ${BOTO_FILE_YUCK}
ATTEMPTS_LIMIT=$(cat ${BOTO_FILE_YUCK} | jq '.retry.__default__.max_attempts')
echo "aws/cli ATTEMPTS_LIMIT: ${ATTEMPTS_LIMIT}"
This is now configurable through the CLI config vars (https://docs.aws.amazon.com/cli/latest/topic/config-vars.html): AWS_MAX_ATTEMPTS and AWS_RETRY_MODE.
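For example, a minimal sketch using the environment variables (the values and the example command are illustrative; the same settings can also go in ~/.aws/config as max_attempts and retry_mode under a profile):

# Illustrative values: retry up to 10 attempts using the standard retry mode.
export AWS_MAX_ATTEMPTS=10
export AWS_RETRY_MODE=standard
# Any CLI call now uses these retry settings, e.g.:
aws route53 list-hosted-zones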
Hi @wjordan this is now supported, as @sbe-arg notes. Thanks!
Thanks for the update! For cross-reference purposes, it looks like this was implemented earlier this year as part of the newly added retry feature in boto/botocore#1972, documented for the AWS CLI in #4959, and published in versions >= 1.18.0 of the AWS CLI and >= 1.15.0 of botocore.
Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.