We should expose this since the billing is based on that. Since App Insights is the future, that would be the right place for users to see this, alongside the rest of the data about each execution (duration, logs, ...).
Some of this data is already exposed; see:
http://stackoverflow.com/questions/41399678/azure-functions-memory-consumption-unit-usage/41403722#41403722
I think the main ask that came in (internal alias thread) is to get the memory info for a specific function invocation, rather than aggregation over time (though that is good to have as well).
Great timing on this thread.
I was actually looking for a way to pull out the usage to calculate correct cost to pass on to a client.
2017-04-25T17:02:06.131 Function started (Id=12345678-1234-1234-1234-123456789012)
2017-04-25T17:02:06.989 Auth Successful, Hello world! from user 12345
2017-04-25T17:02:07.005 Function completed (Success, Id=12345678-1234-1234-1234-123456789012, Duration=9877ms)
Would it be possible to add an argument to the Function completed line that does something like this: (Success, Id=12345678-1234-1234-1234-123456789012, Duration=9877ms, Memory=862MB)?
@martell I agree that would be nice. Unfortunately we do not have an accurate measure of per-function memory usage within the runtime component that is emitting these logs.
For your scenario I suggest you create a function app per client and then use the monitor API discussed here to get function execution units per function app and use those as the basis of your cost calculation.
Closing as answered.
@paulbatum Hmm, I don't think this is very satisfying. In AWS Lambda the log contains that information:
RequestId: 1df6eacf-xxxx Duration: 15000.37 ms Billed Duration: 15000 ms Memory Size: 512 MB Max Memory Used: 71 MB
This is really nice transparency for every single request, so I can reproduce how my bill was generated. They don't even need to include the "Max Memory Used", because their pricing model is based on the duration only. So there is a price list for every second of execution time for the different memory sizes I can choose for my Function, so it is sufficient for me to take that price per second of execution for the memory size of the Function and I have my price.
Your pricing model is way more flexible as you don't have this fixed memory size I need to assign to a Function as AWS has. So I cannot calculate a price of a single function invocation at all with the information you provide...
Sure I can use something you somehow already aggregated, but do you see the problem regarding billing transparency?
Makes sense, I'll leave this open as a feature request for per-execution billing information to be exposed.
I would agree this is highly needed. Right now I'm using Metrics to see the cost of a single execution. I've got a function that simply downloads the HTML of a page and sends it back as its response. According to VS 2017 Diagnostic Tools, running this function locally uses all of 81 MB of memory (I'm assuming that's mostly overhead, as the snapshot shows only about 200 KB of data memory usage). However, executing the function one time on Azure shows 1 execution of 760.06K memory usage, which, if I understand the math correctly, is approximately 760 ms of 1 GB memory usage. My function returned in 723 milliseconds. So I can only assume that going from running locally to running on Azure, my function's memory usage somehow increased by roughly 1200%. It would be great to confirm this in the logs somehow.
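For reference, the arithmetic behind that estimate can be sketched in a few lines. This is a minimal sketch, assuming the portal's "Function Execution Units" metric is reported in MB-milliseconds (as the Stack Overflow answer linked earlier in this thread describes); the price constant below is illustrative, not an official figure:

```python
def execution_units_to_gb_seconds(units_mb_ms: float) -> float:
    """Convert execution units (assumed MB-ms) to GB-seconds
    (1 GB = 1024 MB, 1 s = 1000 ms)."""
    return units_mb_ms / 1024 / 1000


def estimate_cost(units_mb_ms: float, price_per_gb_s: float) -> float:
    """Apply a per-GB-second price to the converted units."""
    return execution_units_to_gb_seconds(units_mb_ms) * price_per_gb_s


# The single execution above reported ~760.06K units:
gb_seconds = execution_units_to_gb_seconds(760_060)
print(f"{gb_seconds:.4f} GB-s")  # ~0.7422 GB-s, i.e. roughly 0.74 s at 1 GB
```

Under that assumption the numbers line up with the "approximately 760 ms of 1 GB" reading above, modulo the 1000-vs-1024 MB-per-GB convention.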
@paulbatum do you have any update to this?
No updates at this time. We have received occasional feedback on the desirability of per execution billing info but not enough to prioritize this over other work.
+1 This would be useful
+1
Definitely +1 need to be able to break down costs for clients, and better predict what costs will be when functions scale up through use.
@paulbatum since your last comment, three people upvoted this issue and a similar issue was closed as duplicate. What do you think about the priority of this issue?
@Fabsi110 It is not a high priority for us right now. Our efforts are focused on issues that block adoption of functions (running cross platform / perf / reliability / more language support) and the data we have so far indicates this feature is more of a nice to have than a blocker.
@paulbatum I understand your argumentation and I think all those features are very important. Unfortunately, I think this feature is not just nice-to-have. Nobody can verify your bills right now and this is not appropriate. When you want money from the people, they should be able to understand and verify the cost. Your competitors (AWS and Google) are a lot more transparent and you should follow them.
yep - this is a major blocker and a feature that has caused me to have to use AWS instead of Azure.
Easy to dismiss if you don't understand the use case; as an MSer, you don't have to pay for the service.
But, all of us who have to pay for it REALLY need to know how much it costs to run a function.
@JeffHughes @Fabsi110 thanks for providing your feedback, I'll make sure this issue gets some visibility with product owners.
I completely agree. It is VERY difficult to get any kind of buy-in when you're asked how much it is likely to cost to run XYZ and you have to respond "no idea, let's just do it anyway and we'll know after a month."
You just can't quantify "a function", so quantifying a project that's likely to contain several is exponentially harder.
The things you've listed are absolutely important @paulbatum and will be blocking some people adopting functions but I'm pretty sure the next thing on their list of "can we use this" will be cost.
Just to chime in as well, I completely agree with @Fabsi110. It's about transparency as much as letting us be able to budget from some test runs, without jumping through a lot of hoops to try and work out locally what it might cost in reality.
If the memory usage per function is not available, does this suggest that Azure Function's billing implementation is based on accumulated memory usage for a function over time?
In other words, there is a meter tracking accumulated memory usage at the function level (what the graphs show), but not at the call-level?

Thank you!
Very surprised that I have to sit here and try to make an argument that it's helpful for customers to be able to predict their costs, and that it's not just a nice-to-have. This is a vital feature, and without it, it's impossible for me to justify Azure Functions to anyone inside the business. In the meantime I'm going to AWS, where I can tell my boss how much it will save us to move a bunch of our code to this new serverless architecture.
I understand why this feature is often requested and am not debating its usefulness. I've directly pointed my team at this issue to make sure that the level of interest behind this feature request is understood.
However I personally think that claiming that its absence means you cannot predict your costs is misleading. Here's why:
Is it as simple as running a function and having the UX directly tell you how much it cost? No. See my first comment above!
Any feedback on why the approaches I outlined are problematic is of course welcome.
That's ridiculous!
That assumes only PURE FUNCTIONS with exactly the same input and interaction w EVERY system, EVERY TIME!
Even w pure functions (or something close), the variability of running functions wouldn't be within 3 std deviations which would render the estimates useless.
I don't understand how being able to query the system for the cost of an individual function execution helps with that problem. If anything, it backs up my statement that collecting the billing metrics based on a load/performance test is a better approach.
+1. We are a SaaS platform and we need this information to bill our clients according to their functions usage.
Any changes in Functions 2.0?
No changes.
Any plans?
In my humble opinion, Azure billing has always been a problem. A lot of services lack a clear and easy way to verify the costs placed in the monthly bill. Azure Functions is one of these. Please, be more transparent and point out memory usage on every request.
@paulbatum Any changes with the status of this? Would still be very useful
cc @ahmedelnably
One option we have been discussing is enabling the export of per execution billing data to Azure Monitor logs. You could then analyze the data using Log Analytics or take advantage of the extensibility features of Azure Monitor to pump this data to another system. This design is likely to be easier for us to implement than some of the other alternatives we've considered.
One thing to keep in mind is that this would not give you a real-time view of execution cost. There would be at least a few minutes of delay between a function finishing execution and the cost data becoming available in the logs.
If we took this approach, would this address your needs? Please let us know. Thanks!
certainly better than nothing
I don't think waiting for REAL-TIME is worth delaying implementation of NEAR-REAL-TIME.
we just need to know what actions are costing as soon as possible
Agreed, being able to run some test executions and then get the associated cost with a few minutes of delay would still allow for some cost projections to be attached to operations and make it far easier to get estimations and buy in from customers.
Sounds like you're onto a winner @paulbatum
I'm wanting to start a SAAS but being able to give REAL-TIME costing per execution is vital. If REAL-TIME can't be accomplished soon then NEAR-REAL-TIME will have to do but please can the main goal be REAL-TIME.
A use case for real time:
In a CosmosDB context, we have sometimes had unexpected spikes in RU usage due to their backend changes. Our automated test suite now validates that this hasn't happened again. This is super easy because they report how many RUs were used on each call.
In other words, financial cost becomes just another part of the API contract and can be automatically tested.
It would be great to support on Azure Functions as well!
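To make the "cost as part of the API contract" idea concrete, a test suite can compare each call's reported charge against a recorded baseline. A minimal sketch with hypothetical names and thresholds; it assumes the platform returns a per-call charge, as Cosmos DB does with RUs:

```python
def cost_regressed(observed_charge: float, baseline_charge: float,
                   tolerance: float = 0.25) -> bool:
    """True when a call's reported charge exceeds the recorded baseline
    by more than the allowed tolerance (25% by default)."""
    return observed_charge > baseline_charge * (1 + tolerance)


# Example: a backend change pushes a call from a 5.0 RU baseline to 7.5 RUs.
assert cost_regressed(7.5, 5.0)       # flagged: 50% over baseline
assert not cost_regressed(5.5, 5.0)   # within tolerance: passes
```

The same check would work for Azure Functions if a per-execution GB-second figure were surfaced to the caller or the logs.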
+1 that near-real-time is a great start.
In my case, I just want to be able to determine whether my function, which processes some big files, is going to hit the 1.5 GB memory limit at some point due to scaling. That's not even about the billing, just about whether my whole concept can work or not.
Also, not everybody has the team or the resources to undergo the full QA process described multiple times in this thread, especially just to get a rough estimate instead of an actual value.
Really surprised by @Fabsi110's comment that this has been available for so long in AWS Lambda while it is still not even being considered in Azure Functions.
We are examining why we got a random memory spike like the one described here:
https://github.com/Azure/azure-functions-durable-extension/issues/886
Azure support keeps asking for a memory dump, which I don't have because the spike happened randomly. If you could expose memory usage per function, we could check which function (if any) is causing the memory spike.
@andrew-vdb So I think at this point, the title of this issue is a little misleading. Most of the folks commenting on this issue are looking for per-execution billing information. This is slightly different from exposing memory usage for each function. The reason it's different is that multiple functions are executed within the same process, so the memory usage is shared across the functions. In order to provide accurate per-function memory usage info, we would need a different design where each function executes within its own process. We have no current plans to implement such a design.
Please don't use this issue as a generic dumping ground for billing/memory investigations. Let's keep this on topic.
Regarding the above, please file a new issue here and mention me:
https://github.com/Azure/azure-functions/issues
Any progress on this issue?
Is there any update?
it's the middle of 2020, and we're still looking for a way to do this.
Yeah, it's not like we have thousands of different ways to monitor code or systems; they have to invent a way to do it, and it takes time, even years and thousands of euros, to do it.
I think it's as per @paulbatum https://github.com/Azure/azure-functions-host/issues/1451#issuecomment-534666096: "Memory usage of a function execution" has nothing to do with Azure Functions billing.
You might find out somehow with profiling but it will not help predict the bill.
I hope I am not misinterpreting what you are saying, but memory usage and time are the two metrics used for pricing, no?
So let's say I build a Function to do texture packing.
One request could be to pack 2 textures, another could be for 100 textures, they are going to take vastly different times to complete.
With things the way they are now, what Azure support has suggested is to set up a Function App for each client.
OK, that could be automated (with more work on my side). But what if I also want to allow the public to access my service? I can't set up a function app for every person who uses the service.
I would want to have one Function App and then charge users based on what they, as individual users, have consumed. This is basically impossible at the moment, unless I have missed something somewhere.
@01010100010100100100100101000111 that's my understanding - it's basically impossible.
For Consumption Plan it seems, the time component of billing is serverless, but the memory component is not. It's the memory used by the server (server process), which might be running many functions, or just one.
The other plans resolve this confusion but unfortunately in the wrong direction - you are simply billed for servers.
@01010100010100100100100101000111 for the scenario you're describing, the outlook is rough. You are correct that as of today, there is no way to figure out how to accurately pass on the cost if you run workloads for different customers in the same function app. But the bad news I have for you is that even if we implemented the feature being discussed in this issue, you would still run into problems if you tried to use it this way. Let me elaborate.
The design of Azure Functions is based around running multiple functions concurrently within the same process. Memory is of course shared across all executions within that process. This is the same execution model as most modern web stacks (ASP.NET, Node.js, etc.): you have a process that is managing multiple concurrent, overlapped requests. If execution A performs an action that allocates 256 MB of memory on the heap, then the memory usage of the entire process goes up, potentially impacting the memory usage component of all executing functions in that process. The calculation takes concurrency into account so you're not paying for the same memory increase multiple times, but it can still impact the execution cost of other functions in the same process.
Trying to state it more simply with an example: let's say you have two customers with an approximately equal workload in terms of computation required, but customer A's workload is twice as demanding in terms of memory compared to customer B. Initially you put these customers on two different function apps. The monthly GB-second bill for function app A is twice that for function app B (because it uses twice as much memory). Next, imagine that you combine these two workloads into a single function app and try to use the per-execution billing metrics that we emit (the feature this issue is discussing) to calculate the bill. It is highly likely that you would find that your calculated bill for customer A would go down, and the bill for customer B would go up. How much depends on the usage patterns (are customer A and customer B using your service at the same time? Or do they run daily jobs at different hours of the day?).
All of this is the result of us choosing a memory efficient model for functions (multiple executions share memory). I really think you need to stick with multiple function apps if you want to rely on our metrics to figure out the cost to pass on.
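The shift described above can be illustrated with toy numbers. This is purely illustrative; real billing uses sampled process memory with rounding and concurrency weighting, not this naive even split:

```python
DURATION_S = 100          # both workloads run for the same length of time
A_GB, B_GB = 0.5, 0.25    # customer A uses twice the memory of customer B

# Separate function apps: each process is billed to its own customer.
a_alone = A_GB * DURATION_S   # 50.0 GB-s
b_alone = B_GB * DURATION_S   # 25.0 GB-s

# One shared app with A and B running concurrently: the process holds both
# working sets, and a naive per-execution split divides the total evenly.
shared_gb = A_GB + B_GB
a_shared = (shared_gb / 2) * DURATION_S   # 37.5 GB-s: A's calculated share drops
b_shared = (shared_gb / 2) * DURATION_S   # 37.5 GB-s: B's calculated share rises
```

The totals match (75 GB-s either way), but the attribution between customers changes once memory is shared, which is the core of the objection above.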
@ishepherd I disagree somewhat with your comment about the memory component not being serverless. It's a subjective point, so take it with a grain of salt, but I think a model where you have to choose how much memory you allocate up front is less serverless than a model where you are billed dynamically based on your memory usage (the Azure Functions model). You are correct, of course, that there is no memory isolation per execution.
@paulbatum Thank you for the detailed response, I was wondering about how memory was consumed - what you say makes sense.
So ultimately, I need an automated way to create and deploy a function app for each customer? So say I have 10,000 customers: from your point of view, having 10,000 function apps would be OK? I just want to check, as it sounds like it may be tricky to manage that number of function apps. Or should I be looking at the whole setup differently (for what I previously described)?
@01010100010100100100100101000111 Yeah, if you are going to rely on our billing metrics to determine what to charge your customers then you would need automation that creates a function app per customer. Once you're talking about 10K function apps, you would probably want to distribute those across multiple subscriptions. The other way to approach the problem is to come up with your own metrics to determine what to charge the customer, rather than relying on our metrics. In your texture example, you could be emitting metrics based on how many textures are uploaded, and of what size, and have your billing system consume that. Anyway depending on your scenario, you may find that this approach is easier than implementing all the necessary automation to manage hundreds or thousands of function apps.
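A sketch of that alternative: the function records its own per-customer usage units, and the billing system consumes those instead of Azure's meters. The class name, method names, and the pricing formula here are all hypothetical, invented for the texture-packing example above:

```python
from collections import defaultdict


class UsageMeter:
    """Accumulates app-defined billing units per customer."""

    def __init__(self) -> None:
        self._units: dict = defaultdict(float)

    def record_pack(self, customer_id: str, texture_count: int,
                    total_bytes: int) -> None:
        # Hypothetical pricing unit: one unit per texture plus one per MB.
        self._units[customer_id] += texture_count + total_bytes / 1_000_000

    def units_for(self, customer_id: str) -> float:
        return self._units[customer_id]


meter = UsageMeter()
meter.record_pack("acme", texture_count=100, total_bytes=50_000_000)
meter.record_pack("acme", texture_count=2, total_bytes=1_000_000)
print(meter.units_for("acme"))  # 153.0
```

In a real deployment the function would emit these units to a queue or metrics pipeline rather than hold them in memory, but the principle is the same: charge on units you control and can explain, independent of how the platform bills you.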