Boto3: Feature request: Disable automatic retries for lambda invoke

Created on 26 May 2017 · 10 Comments · Source: boto/boto3

TL;DR

When calling client.invoke for lambda functions, I want to disable automatic retries upon timeout.

Steps to reproduce:

import boto3
import json
client = boto3.client('lambda')
response = client.invoke(
    InvocationType='RequestResponse',
    FunctionName='my_lambda',
    LogType='Tail',
    Payload=json.dumps({})
)

Where my_lambda is configured to timeout after 100 seconds, but the code itself requires more time than that.

Desired behaviour

For my particular case, I want client.invoke to return in no more than 100 seconds. If the function requires more than that, client.invoke should raise a botocore.vendored.requests.exceptions.ReadTimeout exception after 100 seconds (or maybe 101 seconds if you factor in overhead).

Observed behaviour

client.invoke raises the exception botocore.vendored.requests.exceptions.ReadTimeout after 626 seconds. The CloudWatch logs show that the lambda function was invoked 8 times, even though client.invoke was only called once.

(Yes, I know that (8-1) * 100 > 626. I'm confused by that too)

Justification

For this particular case, I know with certainty that if the lambda timed out the first time, it will time out the next 5 times also. So there's no point retrying. I understand that normally automatic retries may be handy. So I propose that an option is added to client.invoke, which allows you to disable automatic retries.

feature-request

All 10 comments

You can modify your timeout using a config object, passing in whatever timeout values make sense for your particular lambda function (for example, as you said, a few seconds longer than the function's own timeout to account for overhead).

The retries you are seeing are not synced with your lambda function's timeout, so it is getting re-invoked. Does that meet your use case, or do you still want this feature request?
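The suggestion above can be sketched like this (a minimal client-configuration fragment; the 100-second function timeout and the `us-east-1` region are assumptions taken from the original report, not part of the comment):

```python
import boto3
import botocore.config

LAMBDA_TIMEOUT = 100  # assumption: the function's configured timeout, in seconds

# Give the HTTP read a few seconds of headroom beyond the function timeout,
# so a slow-but-successful invocation isn't cut off by the client.
cfg = botocore.config.Config(
    connect_timeout=10,
    read_timeout=LAMBDA_TIMEOUT + 5,
)
client = boto3.client('lambda', region_name='us-east-1', config=cfg)
# client.invoke(...) as in the original report
```

Note that this only stops the client from hanging up early; it does not by itself prevent botocore from retrying the call.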

I just tried doing that:

config = botocore.config.Config(
    read_timeout=timeout + 5,
    connect_timeout=timeout + 5,  # note: the keyword is connect_timeout, not connection_timeout
)
...
client = boto3.client('lambda', config=config)
...

Where timeout is the number of seconds I configured my lambda to time out after. (Configured through CloudFormation, confirmed manually through the web console.) For now, I've set the contents of my lambda to be

import time

while 1:
    time.sleep(10)

I still observe the same behavior.

Looking at the timestamps in the CloudWatch logs, the time between retries is approximately 60 seconds.

This is my workaround for this issue:

config = botocore.config.Config(connect_timeout=300, read_timeout=300)
client = boto3.client('lambda', region_name='us-east-1', config=config)
....
....
# Set the Lambda client's max retry attempts to 0 (reaches into private botocore internals)
client.meta.events._unique_id_handlers['retry-config-lambda']['handler']._checker.__dict__['_max_attempts'] = 0
....
....
response = client.invoke(
                    FunctionName='',
                    InvocationType='',
                    LogType='',
                    Payload=''
                )

I will mark this as a feature request, since I think it's pretty justified based on the lambda use case.

I can confirm that the workaround by @tnpxu works for me.

The workaround by @tnpxu worked for me too.

Man! @tnpxu Thank you! Works like a charm!

Looks like this was fixed with a cleaner solution in boto/botocore#1260:

import boto3
import botocore.config

cfg = botocore.config.Config(retries={'max_attempts': 0})
client = boto3.client('lambda', config=cfg)

We tried @ryantuck's solution when calling a lambda function asynchronously, and it still retried. The response metadata returned by the lambda client showed 'RetryAttempts': 0, but it had no impact.

{'ResponseMetadata': {'RequestId': 'XXXXXX', 'HTTPStatusCode': 202, 'HTTPHeaders': {'date': 'Wed, 26 Jun 2019 09:56:08 GMT', 'content-length': '0', 'connection': 'keep-alive', 'x-amzn-requestid': 'XXXXXX', 'x-amzn-remapped-content-length': '0', 'x-amzn-trace-id': 'root=XXXXX;sampled=0'}, 'RetryAttempts': 0}, 'StatusCode': 202, 'Payload': <botocore.response.StreamingBody object at 0x112904a20>}

The lambda function retried 2 times, for a total of 3 runs, as specified here: https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html

So this is probably not a boto3 problem (the client sends the desired request), but maybe a lack of functionality in the Lambda API itself?

@ryantuck @stealthycoin What if we want to configure the maximum age of the event as well as the max attempts?
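For asynchronous (Event) invocations, the service-side retry behaviour can now be set per function with the EventInvokeConfig API, which covers both knobs asked about above. A hedged sketch, assuming a boto3/botocore version recent enough to include this operation (the function name and region are placeholders):

```python
import boto3

client = boto3.client('lambda', region_name='us-east-1')  # placeholder region

# Configure service-side async behaviour: no retries, and drop queued
# events older than 60 seconds. Per the API, MaximumRetryAttempts accepts
# 0-2 and MaximumEventAgeInSeconds accepts 60-21600.
client.put_function_event_invoke_config(
    FunctionName='my_lambda',  # placeholder function name
    MaximumRetryAttempts=0,
    MaximumEventAgeInSeconds=60,
)
```

Unlike the botocore `retries` config, which only affects client-side retries of the HTTP call, this setting tells the Lambda service itself not to re-run the function after an async failure.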
