Aws-lambda-dotnet: Lambda memory consumption issues

Created on 15 Oct 2018 · 7 comments · Source: aws/aws-lambda-dotnet

Recently we have seen constantly growing memory usage in certain functions, resulting in "Process exited before completing request" and a new Lambda instance.

We first noticed this on 2018-09-11 at 12:43:06. Shortly before that we had made a deployment changing the runtime from dotnetcore2.0 to 2.1. No other significant changes were made to the code that could cause a memory leak and it was running without errors before that deployment.

I've reproduced the error with the following code:

using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;
using Newtonsoft.Json;

public class DynamoDBTrigger
{
    private IAmazonDynamoDB _ddbClient;

    public DynamoDBTrigger()
    {
        _ddbClient = new AmazonDynamoDBClient();
    }

    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public void Process(DynamoDBEvent ev, ILambdaContext context)
    {
        foreach (var record in ev.Records)
        {
            var item = Convert(record.Dynamodb.NewImage);
            context.Logger.Log(JsonConvert.SerializeObject(item));                
        }
    }

    private Item Convert(Dictionary<string, AttributeValue> attributeMap)
    {
        using (var context = new DynamoDBContext(_ddbClient))
        {
            var doc = Document.FromAttributeMap(attributeMap);
            return context.FromDocument<Item>(doc, new DynamoDBOperationConfig { OverrideTableName = Environment.GetEnvironmentVariable("Table") });
        }
    }
}

I'm writing one row per second to the triggering table, and I'm seeing a memory increase of about 1MB per 3 invocations, leading to it sitting at 128MB for a while before logging this:

START RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b Version: $LATEST
END RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b
REPORT RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b  Duration: 2293.32 ms    Billed Duration: 2300 ms Memory Size: 128 MB    Max Memory Used: 128 MB 
RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b Process exited before completing request

I have tried a 256MB configuration and the same thing happens; the increase appears linear.
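
To narrow down where that growth lives, one option (a minimal sketch, not part of the original repro) is to log managed-heap statistics on every invocation; if GC.GetTotalMemory tracks the reported Max Memory Used, the growth is on the managed heap rather than in native allocations:

public void Process(DynamoDBEvent ev, ILambdaContext context)
{
    foreach (var record in ev.Records)
    {
        var item = Convert(record.Dynamodb.NewImage);
        context.Logger.Log(JsonConvert.SerializeObject(item));
    }

    // GC.GetTotalMemory(false) reports the managed heap size without forcing a collection;
    // the collection counts show how often each generation has been collected so far.
    context.Logger.Log($"ManagedHeap={GC.GetTotalMemory(false)} " +
        $"Gen0={GC.CollectionCount(0)} Gen1={GC.CollectionCount(1)} Gen2={GC.CollectionCount(2)}");
}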

I have observed a similar thing with an API Gateway Lambda proxy that uses the ApiGatewayProxyFunction entry point. In that case the memory increases linearly until it reaches the limit, where it stays for a while before ASP.NET seems to want to clear up some memory:
[image: memory usage and duration over time]
Highlighting three observations:
1) High duration on the invocation before and after garbage collection
2) Max Memory Used dropped from 128MB to 68MB
3) Same log stream used, so the Lambda instance is not discarded
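
For reference, the proxy entry point referred to above is typically wired up like this (a sketch of the standard Amazon.Lambda.AspNetCoreServer pattern; Startup stands in for the application's own startup class):

using Microsoft.AspNetCore.Hosting;

public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    // API Gateway requests are translated into ASP.NET Core requests and dispatched
    // to the same pipeline the application would use behind Kestrel.
    protected override void Init(IWebHostBuilder builder)
    {
        builder.UseStartup<Startup>();
    }
}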

As far as I can tell, we haven't changed anything or pulled in any new packages. Could the release of .NET Core 2.1.4 around the same time last week have caused this change in behaviour?

bug closed-for-staleness module/lambda-client-lib response-requested


All 7 comments

I have also noticed a repeatable pattern: when I publish a C# Lambda, run it, and look at its memory consumption, it reports a value, say 74MB. When I simply republish it, with no changes and no recompilation, it then runs at a lower memory consumption, say 66MB.

I am also seeing projects that were previously fine now exceeding the 128MB limit, with the process terminating prematurely.

128MB can barely start the process, let alone run it effectively. Having recently tested cold start performance for a relatively simple ASP.NET Core Lambda proxy implementation at every available memory limit, I can say with good confidence that the commenters before me are thrashing the GC throughout the request lifecycle and causing their own memory issues.

They will observe low single-digit-second cold starts and sub-to-low-millisecond response times if they increase their memory limit to a level suitable for their application's memory needs, one that minimizes start and execution times without increasing the cost per invocation.
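
If GC behaviour is the suspect, a quick check (a sketch, not from this thread) is to log which GC flavour the function actually runs under, since heap growth and collection frequency differ noticeably between workstation and server GC:

using System.Runtime;
using Amazon.Lambda.Core;

public static class GcDiagnostics
{
    // Logs whether the runtime is using the server GC and the current latency mode.
    public static void Log(ILambdaContext context)
    {
        context.Logger.Log($"ServerGC={GCSettings.IsServerGC} LatencyMode={GCSettings.LatencyMode}");
    }
}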

I have a Lambda with 3GB of memory, but I see the same issue. Initially my process uses about 1.5GB, but it gradually keeps increasing. Sometimes the same input consumes wildly different amounts of memory (leading to terminations), and there is nothing random in my code that would cause the memory consumption to vary.

@twopointzero 128MB was ample for the Lambda in my example before upgrading it to .NET Core 2.0.

I agree that 128MB isn't enough for ASP.NET Lambda proxies, but this function doesn't make use of Microsoft.AspNetCore.App.

This has something to do with the container staying warm even after the function completes. Instead of returning "", nil, just panic, and that would solve the problem. I know it's a hack, but that's how it is at the moment.
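
The comment above is phrased in Go terms; a rough C# analogue of the same hack (a sketch only, and a blunt one, since Environment.FailFast kills the process immediately) would be:

using System;

public static class Recycle
{
    // Crashing the process on purpose makes Lambda discard this execution environment,
    // so the next invocation starts from a fresh instance. A workaround, not a fix.
    public static void Force(string reason)
    {
        Environment.FailFast($"Forcing Lambda instance recycle: {reason}");
    }
}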

Is anyone still experiencing this issue on current versions of lambda?

This issue has not received a response in 2 weeks. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.
