We're deploying 15 large JavaScript functions (around 20 MB) in a single Function App. They all use HTTP triggers. Once the cold start is over, these functions execute in under 10 seconds, and Azure Monitor shows a max memory working set of under 400 MB.
When we send a bunch of queries to the functions, the cold start is painfully slow (over 1 minute). During and after the cold start, most of our calls start timing out, and I saw traces in App Insights with the following message:
"FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory"
Runtime: v3
Node: v12
Hosting plan: Consumption
All other function app settings are left at their defaults.
Any ideas why this happens and how it can be fixed? We first thought we might be hitting a Consumption plan limit, but we appear to be well under the 1.5 GB memory-per-host threshold. Node's memory limits should also be higher than the 400 MB max memory usage we've seen. This never happened locally during our tests.
Would deploying a single function per Function App possibly improve the situation by ensuring we'd run a single function per host?
Is this something that started happening recently (maybe after a deployment of new functions or a change in the config of the function app)?
If you can share the function app name, or the region and a timestamp, we can investigate the slow cold start. I assume the app is running on the Windows Consumption SKU.
Independent of this, the recommendation is to limit the number of functions in each function app to fewer than 5. Having a large number of functions in the same app affects cold start in multiple ways, including the time it takes to download the content. In addition, there are limits on how fast an app can scale out (and other throttles). Putting 15 functions in the same app means those limits are shared by all the functions, making scale-out slower than it would have been with a smaller number of functions.
cc @mhoeger
This needs more investigation.
@mhoeger - Can you please take a look and move this to the nodejs repo if you think the issue is specific to the node worker?
Hi, this Function App was created very recently, so no specific change comes to mind. That being said, it seems worse than in previous tests we had done using a single function instead of the 15.
Yes, it's running on the Windows Consumption SKU. The region is Canada Central, the function app name is "zcace-d1-app-ifirm-tax-functions-app", and I just reproduced it a few minutes ago, around 5/27/2020, 3:41:00 PM EST.
That's good to know, we'll experiment with multiple Function Apps in the meantime.
Thanks a lot!
@FrankGagnon - for slow cold start, I would also recommend making sure that you have "run from package" set: https://docs.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package
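For reference, a minimal version of that is just the app setting below (the value can also be a URL pointing to a package in blob storage; see the doc above for the details):

```
WEBSITE_RUN_FROM_PACKAGE = 1
```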
From looking on our end, we do see that your memory usage is high. You can see the same dashboards I'm seeing by following these steps: go to the Azure portal, navigate to your function app, select "Diagnose and solve problems" from the left, and search for "Memory Analysis Function App".
While you're running into high memory, your Kudu site (zcace-d1-app-ifirm-tax-functions-app.scm.azurewebsites.net) should also give you an idea of what's going on in a given process. Also, if you want to profile your node process and need to pass it command-line arguments, you can do so with the app setting languageWorkers:node:arguments (some more details here).
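As an illustration only, passing the standard Node/V8 profiling flag through that setting would look something like this (the resulting isolate log files can then be picked up from the site's file system):

```
languageWorkers:node:arguments = --prof
```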
Thanks for the feedback, we'll test the run from package!
Also thanks for the diag tab, I actually never noticed it. Memory did reach a pretty high level, around 1.3 GB. Is the JavaScript heap out of memory issue normal at that kind of memory usage?
So we can hook a tool like the Chrome Developer Tools for Node.js up to our function that's deployed in Azure?
In Azure, I don't remember what the status of remote debugging is :\ cc @pragnagopa in case she has the latest. This might not be possible today.
You could take heap snapshots locally or by adding code (something like heapdump).
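As a rough sketch of the "adding code" route (assuming Node 12, you can use the built-in v8 module instead of the heapdump package; the output path and response text here are just illustrative):

```javascript
// Illustrative HTTP-triggered function (Functions v3 Node.js model) that writes
// a V8 heap snapshot on demand using Node's built-in v8 module.
const v8 = require('v8');
const os = require('os');
const path = require('path');

module.exports = async function (context, req) {
    // Write the snapshot to the temp folder; it can be downloaded via the Kudu site.
    const file = path.join(os.tmpdir(), `heap-${Date.now()}.heapsnapshot`);
    const written = v8.writeHeapSnapshot(file);
    context.res = { body: `Heap snapshot written to ${written}` };
};
```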
1.3 GB is fairly high - your app runs 32-bit Node by default, btw, in case you can find anything documented on current versions (I've seen folks most commonly talk about memory limits for 64-bit Node; alternatively, you can use the v8 module in Node to check). It's also worth noting that the memory footprint won't be purely from the Node process, as the Functions runtime process (which fetches events, does logging, etc.) also runs next to it. In either case though, +1 to Bala's suggestion to break up your function app into smaller units, and maybe do some memory profiling of your app!
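If it helps, here's a minimal sketch of what I mean by using the v8 module - logging the heap limit the worker actually has, so you can see whether the 32-bit process is capping it (names and log wording are mine):

```javascript
// Illustrative snippet: log the V8 heap limit and current usage from inside a
// function, to confirm what limit the (32-bit) worker process is running with.
const v8 = require('v8');

module.exports = async function (context, req) {
    const stats = v8.getHeapStatistics();
    const toMB = (bytes) => Math.round(bytes / 1024 / 1024);
    context.log(`heap_size_limit: ${toMB(stats.heap_size_limit)} MB, ` +
                `used_heap_size: ${toMB(stats.used_heap_size)} MB`);
    context.res = { body: 'ok' };
};
```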
> In Azure, I don't remember what the status of remote debugging is :\ cc @pragnagopa in case she has the latest. This might not be possible today.
This is accurate. As of now we do not have support for remote debugging for node functions.
OK, it does seem like switching the platform to 64-bit fixes the JavaScript heap out of memory issues. Running a single function per Function App reduces the memory footprint of each Function App, so it also seems like we avoid the memory issues that way.
Would you also recommend limiting the number of functions in a Premium Function App? I know the minimum number of instances will always be up, but I'm wondering if the number of functions also impacts scaling?
Thanks for your response Pragna :)
@FrankGagnon - If you're deploying to a Premium plan, one thing you might want to do is increase the Node.js memory limit so that it better reflects the memory actually available (details here). I would try this first. Breaking up your function app into multiple smaller apps is typically best for non-Premium plans, where you aren't paying for machines with more memory.
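For example, assuming the same languageWorkers:node:arguments setting mentioned earlier, raising the limit would look something like the setting below (the value is in MB, is purely illustrative, and should stay below what your plan's instances actually have):

```
languageWorkers:node:arguments = --max-old-space-size=3072
```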
I know you can deploy multiple function apps to the same Premium plan, but I'm not sure how exactly the scaling works in this case (whether each new scale unit contains all function apps or just the one that needed to scale) - adding @glennamanns, who would know best!
For Premium Functions, apps in the same App Service Plan scale independently of one another, based on each app's needs.
However, we do our best to let Function apps in the same App Service Plan share VM resources, if possible, to reduce cost for customers. The number of apps we can pack onto each VM depends on the footprint of each app and the size of the VM.
@glennamanns Does the total size of the function files in a Function App have a big impact on the time it takes to provision a new copy of the app on the same VM? Or on another VM? In other words, is the suggestion from @balag0 to use a max of 5 functions per app also valid for Premium Functions?
@mhoeger Yes I think we can close the issue.
Thank you all very much, your feedback is super useful!