Azure-functions-host: 64bit v3 runtime fails with out of memory exceptions where 32bit runtime does not

Created on 31 Dec 2019 · 25 comments · Source: Azure/azure-functions-host

We have had a ~2 (.NET Core 2.2) functions app running for some time on the 64 bit runtime. After upgrading to ~3, functions frequently fail with out of memory exceptions when using the 64bit runtime. The 32bit runtime does not fail.

The exceptions seem to always occur in json deserialization.

Investigative information

  • Timestamp: 12/31/2019, 9:50:18 AM (Local time)
  • Function App version: 3.0
  • Function App name:
  • Function name(s):
  • Region: UK South
  • Invocation ID:
INFORMATION 9:50:07.056 AM
Executing 'api-Payments' (Reason='This function was programmatically called via the host APIs.', Id=7e6c2c73-89ac-4881-b4f3-b6de55f23eb7)
  • Applications Insight logging:
// All telemetry for Operation ID: 7cc2acc54214dc439fd9ac8d0c774da2
union *
| where timestamp > datetime("2019-12-30T17:50:18.720Z") and timestamp < datetime("2020-01-01T17:50:18.720Z")
| where operation_Id == "7cc2acc54214dc439fd9ac8d0c774da2"

Repro steps

Switch to 64bit runtime and exercise the API.

Expected behavior

No exceptions

Actual behavior

Out of memory exceptions raised

Known workarounds

Use 32bit runtime

Related information

  • FSharp language
  • Source available privately
  • HttpTrigger
  • Suspect the issue is with the customized json.net deserialization.

Most helpful comment

We have found an infrastructure issue that is causing the OutOfMemoryException; it is affecting Functions apps. We have a repro of the issue and are actively working to find the root cause.

All 25 comments

To clarify: the v2 deployment of our functions was using the 64bit runtime without issue.

Some additional invocationIds:
Id=88f5cd78-ab47-4021-888d-cba58dcb258a
Id=cab93b85-7f35-4bb4-8f85-9692a4b70986
Id=3ecb48f2-76c1-4013-806d-3d3a41ebc8a2
Id=8bcc4f55-02d0-4dd1-8891-ac9e541d3819

@jbeeko
Thank you for providing detailed information on the issue. I have a few more questions.

  1. Are you using any bindings other than HttpTrigger?
  2. Can you elaborate on what you mean by "Suspect the issue is with the customized json.net deserialization"? Do you suspect customized JSON deserialization within the Functions runtime, or is there custom Json.Net deserialization in your function code?

@soninaren

  1. We also use TimerTriggers. I did not see any exceptions in functions that were timer triggered, but they don't run that often and I did not let the app run long in a broken state.
  2. Our code still uses Json.Net rather than the newly introduced System.Text.Json. By "customized" I mean that we use Json.Net with a collection of our own JsonSerializerSettings. Looking at the stack traces it seems the out of memory conditions occur during serialization.

I assume you have access to the stack traces? If not let me know and I can get them to you.

I am having the same issue, I think.

I have an HTTP trigger function that reads data from Azure Table Storage. This function has been working in production for almost a year without any issues. After deploying the upgraded (v3) Functions app, I'm observing many occurrences of OutOfMemoryException during the invocation of the HTTP trigger function that reads data from Azure Table Storage. This results in a 500 response. Occasionally the function does complete without an OutOfMemoryException being thrown, but I can't figure out the difference between the successes and the failures.

Good to know that I can switch the app to 32bit. I might give that a shot.

I switched the platform to 32bit and I redeployed my v3 Functions app. I can confirm that everything works fine and there have been no OutOfMemoryExceptions.

@tomfaltesek I did not even redeploy the app. Just changing the setting and restarting will do it.

@tomfaltesek when you look at the stack trace, is the exception in json deserialization by chance?

@tomfaltesek the stack may be a bit misleading in some of those situations. Do you have an app still in that state? Or an app that we can use to repro the issue? If so, @soninaren can attempt to get a memory dump of that app for investigation.

@jbeeko - I had to redeploy the v3 app because I had previously rolled back to the build of my app targeting v2.

Yes -- Some of the stack traces reference different JSON deserialization methods as the "Failed Method":

  • Newtonsoft.Json.Linq.JProperty..ctor
  • Newtonsoft.Json.JsonTextReader.ParseReadString
  • Newtonsoft.Json.JsonTextReader+<ParsePropertyAsync>d__31.MoveNext

@fabiocav - I don't currently have an app in a bad state. I can set up another 64bit functions app in Azure and deploy my v3 build to it. Then I'll share it with you so you can observe. It will have to wait until Thursday, though.

I am going to try and set up a repro for this in the meantime. @jbeeko and @tomfaltesek let me know if / when you have a working repro we can investigate

@soninaren Realistically I'm not going to be able to do that easily. However, where I would start is by creating a function that serializes/deserializes to and from a DB using Json.Net. I don't think the DB matters: we are using Cosmos, Tom is using Table Storage. I'll note that Tom mentioned he had other HTTPTrigger functions that were fine; it is only the one writing to storage that is failing.

@fabiocav , @soninaren I see the same issue with one of the V3 func apps. I can share the function app name in email if you need.

I'll note that Tom mentioned he had other HTTPTrigger functions that were fine, it is only the one writing to storage that is failing.

Just to clarify, my HTTP trigger function that fails with the OutOfMemoryException is responsible for reading (querying) TableStorage, not writing. I have other functions that write to TableStorage without issue.

Another Update

I have a couple of timer trigger jobs, scheduled every 12 hours, that invoke the same method (which queries Table Storage) as my failing HTTP trigger function. Like my HTTP trigger function, they throw OutOfMemoryExceptions on the 64bit platform; on 32bit they run without issue.

So, this issue doesn't seem to be specific to just the HTTP trigger functions. At the very least, it also exists with timer trigger functions.

How much data are you loading as a result of those invocations and how many concurrent invocations do you typically have? It's possible this is a valid OOM situation and switching over to 64 bit tips it over, as it would increase memory use. Do you also have a large number of functions in the same Function App? Is this something that was working in 2.0?

How much data are you loading as a result of those invocations?

The query returns 4800 records that are approximately 1.8KB per record. So I'd estimate that the size is somewhere between 8.5MB and 9.5MB. I'm not aware of a quick technique to give you the actual size. Let me know if you have thoughts.
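(Editor's note: a quick back-of-envelope check of the estimate above, using the 4800-record and ~1.8 KB-per-record figures from this comment; this is a rough sketch, not an exact measurement of the payload.)

```python
records = 4800
avg_record_kb = 1.8  # approximate size per record, per the comment above

total_kb = records * avg_record_kb
total_mb = total_kb / 1024  # using 1 MB = 1024 KB

print(round(total_mb, 1))  # → 8.4, consistent with the 8.5-9.5 MB estimate
```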

How many concurrent invocations do you typically have?

The endpoint is typically hit several times in an hour. Multiple concurrent requests are rare, but possible. I will note that when a query actually succeeds without the OOM exception, the function caches the results of the query for two hours. All subsequent requests to this HTTP trigger function are then successful (and fast) for two hours. The cached data is a much smaller, filtered subset of the query response.

It's possible this is a valid OOM situation and switching over to 64 bit tips it over as it would increase memory use.

Yes, maybe. Although, when this exact same app was targeting v2 and .NET Core 2.2 on a 64bit platform, the OOM exceptions did not occur. I even verified by rolling back to a previous build (targeting v2). This stopped the OOM exceptions.

Do you also have a large number of functions in the same Function App?

I have five HTTP trigger functions and four timer trigger functions in this app.

Is this something that was working in 2.0?

Yes. For almost a year.

I have the same issue as well:

  • There are a bunch of function apps that were operating normally for over a year on the v2 runtime and .NET Core 2.1 x64
  • Recently I switched them to the v3 runtime + .NET Core 3.1 x64 and some of them started to fail with OutOfMemory exceptions. The affected functions have different triggers and different load, so for now I don't see any common pattern that may cause such behavior
  • I still face this issue even with FUNCTIONS_V2_COMPATIBILITY_MODE = true in the App Settings
  • With "switching to 32 bit platform" workaround everything works ok

@fabiocav please let me know if you still need some extra info for the investigation.

@fabiocav

How much data are you loading as a result of those invocations and how many concurrent invocations do you typically have? It's possible this is a valid OOM situation and switching over to 64 bit tips it over as it would increase memory use. Do you also have a large number of functions in the same Function App? Is this something that was working in 2.0?

The 32 bit runtime is loading over 35,000 records. The 64bit runtime is failing with under 1,000. So I don't think it is a case of "64 bit tips it over as it would increase memory use."

We have about 20 functions in the same app, but none of them are very busy. The aggregate rate is about 2 requests/second.

And yes, this has been working for over a year, first on 1.0, then 2.0. Also, as mentioned, switching the runtime to 64bit results in immediate failures.

For me, the scenario is a lot simpler and always threw OutOfMemoryException. I have a single TimerTrigger function which downloads a ~55MB file, compresses it and writes it to a blob. This has run successfully for over a year. I upgraded it to Azure Functions ~3 and selected 64-bit. When it fails, it is usually at about the 20MB mark, on the line await response.Content.ReadAsStreamAsync().

Complete code below; no databases or complex processing, just a single HTTP download. The problem goes away completely on 32-bit.

var response = await client.GetAsync(uri);
if (response.IsSuccessStatusCode)
{
    await using var uploadContent = new MemoryStream();
    await using var compressStream = new GZipStream(uploadContent, CompressionLevel.Optimal);

    // OutOfMemoryException is usually thrown here, at around the 20MB mark
    var downloadContent = await response.Content.ReadAsStreamAsync();
    downloadContent.CopyTo(compressStream);
    uploadContent.Position = 0;

    var blob = container.GetBlockBlobReference(date);
    await blob.UploadFromStreamAsync(uploadContent);
}

A simplified version that just reads into a memory stream shows the same problem. The input file was never more than 60MB. On 64-bit it always throws OutOfMemoryException; on 32-bit it works every time.

var response = await Client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead);
await using var content = new MemoryStream(await response.Content.ReadAsByteArrayAsync());
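(Editor's note: independent of the runtime bug discussed here, the download-compress-upload pattern above can keep peak memory low by streaming in chunks rather than buffering the whole payload in a MemoryStream. A minimal Python sketch of that chunked copy follows; the in-memory file objects stand in for the HTTP response and the blob output stream, and are assumptions rather than the commenter's actual code.)

```python
import gzip
import io
import shutil

def compress_stream(source, sink, chunk_size=64 * 1024):
    """Copy `source` into `sink` gzip-compressed, chunk by chunk,
    so peak memory stays near chunk_size rather than the payload size."""
    with gzip.GzipFile(fileobj=sink, mode="wb") as gz:
        shutil.copyfileobj(source, gz, length=chunk_size)

# Demo with in-memory stand-ins for the HTTP response and the blob stream.
source = io.BytesIO(b"payload " * 1000)
sink = io.BytesIO()
compress_stream(source, sink)

# Round-trip to confirm the compressed output is complete and correct.
assert gzip.decompress(sink.getvalue()) == b"payload " * 1000
```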

We have found an infrastructure issue that is causing the OutOfMemoryException; it is affecting Functions apps. We have a repro of the issue and are actively working to find the root cause.

A fix for this is currently rolling out. We'll update and close the issue once deployment is completed.

Rollout for the fix is complete now. Closing the issue.
