This is a suggestion, not a bug, so I'm ignoring the template.
Discussed with @mattchenderson, now documenting here.
It used to be that the attributes in .NET functions were optional: one could define a function.json file and that would get the functions started. This doesn't seem to work anymore: the last time I tried to create Activity functions with just JSON files and no attributes, the functions weren't discovered by the runtime. The docs say that I can't edit function.json manually, so I assume this is by design.
I came to this need because I find myself defining many functions that are very similar to each other: they have the same binding types but require different values in the bindings, e.g. two queue handlers that differ only in queue name.
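For illustration, a pair of functions like these, identical except for the queue name (queue names and OrderLogic are made up):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderFunctions
{
    // Two functions that differ only in queue name and the method they call;
    // OrderLogic is a placeholder for the real business logic.
    [FunctionName("HandleOrderCreated")]
    public static Task HandleCreated(
        [QueueTrigger("order-created")] string message, ILogger log)
        => OrderLogic.HandleCreatedAsync(message, log);

    [FunctionName("HandleOrderCancelled")]
    public static Task HandleCancelled(
        [QueueTrigger("order-cancelled")] string message, ILogger log)
        => OrderLogic.HandleCancelledAsync(message, log);
}
```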
So, instead of copying the functions over and over again, I'd love to automate this process.
I guess a better meta-description is that I'm trying to use my own DSLs that compile into functions. Right now, I have to define the DSLs and then write dumb functions that call into them.
Functions used to be static methods in static classes; now they can be instance methods. The next step could be factory methods: define an Init method (or something similar) in the Startup configuration that runs a factory returning a List&lt;Function&gt;, where Function defines what's now in function.json.
I see some complications with that, especially if the factories are not deterministic. So, instead of at runtime, the factory method could be invoked at compile time; it could even be the source for auto-generated function.json files.
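A rough sketch of what such a factory could return; every type here is invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// FunctionDefinition stands in for the metadata that lives in function.json today.
public record FunctionDefinition(
    string Name, string TriggerType, string QueueName, Action<string> Handler);

public static class FunctionFactory
{
    // Imagined entry point that a startup hook (or a compile-time step) could
    // call to discover functions instead of scanning for attributes.
    public static List<FunctionDefinition> Init() =>
        new[] { "order-created", "order-cancelled" }
            .Select(queue => new FunctionDefinition(
                Name: $"Handle-{queue}",
                TriggerType: "queueTrigger",
                QueueName: queue,
                Handler: message => Console.WriteLine($"{queue}: {message}")))
            .ToList();
}
```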
I'm looking for a world where third-party libraries could compile themselves into function apps. In addition to my examples above, the Function Monkey framework kind of does that today, but it has to do some nasty templating and compile-time Roslyn acrobatics to achieve it. I could see things like ASP.NET Core or Saturn compiling themselves into functions.
Agreed!
Upon reading the original title, my initial reaction was 'There are better things to complain about'.
But then again, truth be told, all the points are valid, and I've always dismissed the attribute decoration approach because it makes it difficult to reuse the code in other compute hosting environments.
Specifically, I've found a couple of ways to re-use the code so far.
It would be amazing to use the attribute decoration method if it were reusable in other scenarios; otherwise, it's a lock-in model. Workable, but rather constrained.
Great suggestion, Mikhail!
I agree and tend to think of this in two coupled parts: the attribute model itself and discoverability.
Firstly, the attributes: they are rigid, relying on constants (which limits flexibility, as Mikhail outlined above), and I would further suggest they are downright ugly.
Secondly, discoverability: unlike, say, ASP.NET Core, there isn't really any runtime discoverability for Functions, which means we can't make use of patterns like those outlined by Mikhail and are reduced to more primitive, less helpful code constructs, or to doing "tricks" with Roslyn or code template engines. And because attributes are based on constants, there is extremely limited compile-time discoverability. It's a very rigid model.
Overarching all this - we seem to be in a strange world with Functions and _what_ they are as constructs in .NET. Are they a function? Are they an instance of a class? Do they require JSON? How does that JSON get there? Are they something defined by convention? Or through something more schematic?
In part I believe these issues emerge from a lack of separation of concerns: declaration has been muddled with implementation via the attributes. (Clearly, more widely, the issues are also related to the need to support multiple runtimes, but it's the expression within .NET we're discussing here.)
Removing the attributes presents an opportunity to resolve this: implementations could be required to conform to a more clearly defined declaration, decoupled from the trigger/event semantics. If the triggers and events were then defined externally to the implementation, it would allow us to get in the middle with factories and other expressive and productive patterns.
I can see how a fully dynamic approach could be difficult to achieve in the short term, but something in the middle, akin to ASP.NET Core's startup process, could work and still provide a lot of flexibility (enabling a many-to-one scenario, for example: these n queues route to this one implementation).
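As a sketch of that middle ground (every type here is invented; it only illustrates the shape of the idea):

```csharp
// Invented interfaces, loosely modelled on ASP.NET Core's startup pattern.
public interface IFunctionAppBuilder { void MapQueue(string queueName, string to); }
public interface IFunctionsAppStartup { void Configure(IFunctionAppBuilder app); }

public class QueueRoutingStartup : IFunctionsAppStartup
{
    public void Configure(IFunctionAppBuilder app)
    {
        // Many-to-one: these n queues route to this one implementation.
        foreach (var queue in new[] { "orders-eu", "orders-us", "orders-apac" })
            app.MapQueue(queue, to: "ProcessOrder");
    }
}
```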
Obviously this isn't a million miles away from what I've done with Function Monkey, although Function Monkey does this during the compile phase by translating definitions into a .NET assembly using Roslyn.
In any case, moving away from attributes would immediately give us a clearer way of defining functions and would separate implementation from trigger binding, while also being a step towards a more flexible approach to creating functions.
(Note: I acknowledge I've been very opinionated in Function Monkey and essentially said "you will use a commanding mediator abstraction" which, although it's a nice pattern for event-based models, might not be appropriate for something more abstract and foundational. But it's easy to see how you could generalise that back further to an _IFunction_.)
Amazing: just yesterday we were discussing this with my colleague, complaining about the tight attribute dependencies (especially the one with the function name in it) and all the potential workarounds like the code generation mentioned above. Then, what a surprise it was to get up in the morning, check the Twitter updates, and see Mikhail's tweet about this! (I can never be amazed enough at how collective human ideas are born...)
Anyway, on to the topic: I absolutely agree this suggestion is crucial for Azure Functions to become a really 'enterprise-development-ready' technology, with intuitive scaling of development efforts: not just deploying ad-hoc functions for some standalone jobs, but covering business domains with many functions while avoiding writing and maintaining boilerplate code. In particular, we are now stuck with exactly what Mikhail described when running event handlers on top of Azure Functions. We have a pretty much standard 'contract' for distributing events through Service Bus, and all our functions end up looking very similar setup-wise, with only a small piece of 'business logic' that differs. But currently we can hardly find an easy way to write only that piece of real logic without getting distracted by copying/updating the same function skeleton for each new use case.
It's nice to see this issue getting attention; I am sure we'll end up with a big step forward. Thanks, Mikhail, for initiating this, and everyone for getting involved.
@ironpercival - don't want to divert this thread, but the issues you are grappling with are why I built the Function Monkey framework that Mikhail referred to above. Might be worth taking a look. If you've got any questions, catch me over on Twitter; as I say, I don't want to divert this thread :)
I am using Java Azure Functions myself, but changing this back may have a disadvantage. If I remember correctly, initially the runtime treated every non-static public method as an Azure Function that should be exposed, or at least it gave weird errors. I am talking about the initial versions of Azure Functions for Java here, circa August-September 2017.
What we have now is a few Java classes with attributes for queue and timer triggers; they contain no implementation themselves but call into libraries that know nothing about Azure Functions. That code can be tested with normal Java unit tests.
What I'm wondering is: how should this work out for JavaScript, Java, and whatever other languages are supported?
@nicenemo If we had a single mechanism for all languages, it would be function.json files or similar. However, the experience is already unique for each runtime, so I don't feel the urge to discuss other languages right now.
There is some level of support for executing .NET based functions without attributes. This happens every time you write a C# function in the portal - you have a csx file containing C# code with no annotations and the binding config lives in the function.json file. What is actually happening in this scenario is there is logic that takes the function.json and generates a .NET type annotated with the attributes that correspond to the contents of function.json. This same thing is happening for all the other languages too. The reason it works this way is that the WebJobs SDK is the core execution model for functions, and the SDK only knows how to execute using .NET types.
Funny story: there was a point in time, for Functions V1, where writing a C# function with attributes in Visual Studio would generate a function.json with all the metadata. Then at execution time, the same logic that applies to the other languages would apply here, and we would dynamically build a .NET type with attributes based on that function.json. That's right, it looked like this:
.NET type with attributes --> function.json --> .NET type with attributes
It turns out there were a bunch of bugs. The WebJobs programming model is quite rich, and there were a number of cases where we would fail to fully translate a given attribute to function.json. We looked at these bugs and realized they were all .NET specific, i.e. everything expressible through function.json worked fine, but there were concepts expressible in WebJobs attributes that were not fully expressible in function.json.
We asked ourselves: "Why are we doing this crazy round-tripping? Why not just load the user type with the attributes as-is?" When we made this change, it fixed all of these bugs. We updated our tooling to work this way by default. Now the function.json file for fully annotated .NET functions just contains metadata that helps our scale controller do its thing. If you look at a function.json file generated by Visual Studio, it has this line:
"configurationSource": "attributes"
C# functions authored in the portal don't have this setting. If they had this line, it would be set to config instead of attributes. To keep ourselves sane, we have checks that make sure that every function in a given function app has the same configuration source i.e. you can't mix and match.
Feel free to experiment with "configurationSource": "config". You might find it unblocks some interesting scenarios for you. But just be aware that this is not our currently recommended way to author .NET functions. There is a reason we don't present this as an option in the local tooling experiences.
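For reference, a config-driven function.json for a queue-triggered .NET function looks roughly like this (a sketch: the paths, names, and queue are illustrative):

```json
{
  "configurationSource": "config",
  "scriptFile": "../bin/MyFunctionApp.dll",
  "entryPoint": "MyFunctionApp.Functions.Run",
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "input-items",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```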
@paulbatum Thank you for the detailed view of why it is as it is.
So, effectively, we'd need to reproduce your .NET -> JSON -> .NET path, but for a limited set of bindings relevant to each scenario.
Do you see any other path forward to enable some of the scenarios that I & others mentioned?
Right now, I think it's that path or full-on code-gen. I can't think of many other options.
Long term, things change a bit if we get to the point of running .NET workloads in a separate process, i.e. we make them consistent with the other languages. When we do that, we would not be loading the .NET types defined in your code into the WebJobs framework running inside the host process. Presumably, very few people would pick this option if it were less fully featured (e.g. if it did not support binding to a CloudBlockBlob). So this would force us to figure out how to model the richer features that are made available through WebJobs bindings and support them through our gRPC layer. All of that work would lay the foundation for a world that is driven by function.json.
Problem is, I don't know when we'll prioritize this work. It's certainly not at the top of our list right now.
It is extremely painful to require attributes, in particular to describe the names of the queues. Normally, my approach would be to have a type-safe enum containing the names of the queues that the project uses, and then refer to these instances in code.
Requiring attribute decoration to bind the function to the name of the queue makes this approach impossible (because attributes require constants), which in turn makes testing harder. It's very disappointing overall, and a very poor introduction to Azure Functions :-(
Part of the reason we need things like queue names to be constant is that we generate a file that has these queue names in it, called function.json, at build time. This file is included in the deployment package and our scaling infrastructure reads the contents of this file to determine what queues to monitor. If we allowed you to define these queue names with a non-constant, we would have a harder problem to solve in terms of making sure that our scaling infrastructure stayed in sync every time that value changed.
That said, I am not sure I understand why you need to use an enum for this. I think enums would have problems when the queue name is not a valid enum name, for example if the queue is called "input-items". Constants handle this fine, and you still get the type safety you were looking for, e.g.:
public static class QueueNames
{
    public const string InputQueue = "input-items";
    public const string OutputQueue = "output-items";
}

public static class Function2
{
    [FunctionName("Function2")]
    public static void Run([QueueTrigger(QueueNames.InputQueue)] string myQueueItem, ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    }
}
I think this issue will stay parked for a while.
But as a quick improvement, you could create an attribute that can be applied to classes, which would "mark" all public methods as functions named after the method. I've found it useful to create a separate Endpoints/Triggers class and keep the actual business logic elsewhere. But every time, I have to repeat [FunctionName(nameof(ActualMethodName))], which is totally redundant and could be generated.
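Roughly what I mean; the [FunctionClass] attribute is imagined (nothing like it exists in the SDK today) and OrderLogic is a placeholder:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Hypothetical: marks every public method in the class as a function
// whose name defaults to the method name.
[FunctionClass]
public static class Endpoints
{
    // Would be discovered as a function named "ProcessOrder",
    // with no [FunctionName] repetition.
    public static Task ProcessOrder(
        [QueueTrigger("orders")] string message, ILogger log)
        => OrderLogic.HandleOrderAsync(message, log);
}
```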
Hi,

The issue is that constants are baked into referencing assemblies at compile time, whereas read-only values are pulled from the referenced assembly.

I also don't ever use enums. I was referring to the type-safe enum pattern, where it would be possible to create an "enum" of Queue objects that are accessible in code, and to use those definitions for the methods as well. Your current solution makes client code hacky, which isn't great.
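For concreteness, the pattern in question looks something like this (the Queue class is illustrative), and it shows why constant-only attribute arguments rule it out:

```csharp
// The type-safe enum pattern: a closed set of named instances.
public sealed class Queue
{
    public static readonly Queue Input  = new Queue("input-items");
    public static readonly Queue Output = new Queue("output-items");

    private Queue(string name) => Name = name;
    public string Name { get; }
}

// What this enables in ordinary code cannot be used in attributes:
// attribute arguments must be compile-time constants, and
// Queue.Input.Name is not one.
//
//   [QueueTrigger(Queue.Input.Name)]  // error CS0182: an attribute argument
//                                     // must be a constant expression
```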
Allowing the queue bindings to be deferred until runtime would certainly be a significant improvement over the hacky mess here.
Thanks.
You don't need to include the actual names of the queues in your code. We support a syntax where you reference an environment variable e.g.
[FunctionName("QueueTrigger")]
public static void Run(
[QueueTrigger("%input-queue-name%")]string myQueueItem,
ILogger log)
{
log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}
Here, an app setting named "input-queue-name" contains the name of the queue you want to use.
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-expressions-patterns#binding-expressions---app-settings
Hi,
It would be really cool if you could just remove the compile-time dependency and allow a runtime one instead. The current approach just pushes the maintenance issues onto the consumer.
Unfortunately this is not straightforward, because Functions needs to be able to monitor your event sources without your code running at all (otherwise we can't scale to zero).
Even more, the FunctionName attribute could be optional in general (even without my class-attribute idea). Functions could be detected based on their arguments, because at least one ...Trigger argument is required, and that is detectable at code-generation time; the name of the function could then default to the name of the method.
Hi,

It sounds like what you need is a piece of executable code that generates the functions spec against a builder-style API. Then you could run that code to generate the function specification independently of executing the program, but it would still allow consumers to share compile-time or runtime information across the builder and the function code.
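A self-contained toy of what I mean; every type here is invented for illustration, and a real implementation would emit the function.json files the scale controller reads:

```csharp
using System;
using System.Collections.Generic;

// Invented: stands in for the metadata that would end up in function.json.
public record FunctionSpec(string Name, string TriggerType, string TriggerValue);

public class FunctionAppBuilder
{
    private readonly List<FunctionSpec> _specs = new();

    public FunctionAppBuilder AddQueueFunction(string name, string queueName)
    {
        _specs.Add(new FunctionSpec(name, "queueTrigger", queueName));
        return this;
    }

    public FunctionAppBuilder AddTimerFunction(string name, string schedule)
    {
        _specs.Add(new FunctionSpec(name, "timerTrigger", schedule));
        return this;
    }

    // A real version would write function.json files here, so the event
    // sources stay statically known for scaling purposes.
    public void WriteSpecs() =>
        _specs.ForEach(s => Console.WriteLine($"{s.Name}: {s.TriggerType} -> {s.TriggerValue}"));
}

public static class SpecGenerator
{
    public static void Main() =>
        new FunctionAppBuilder()
            .AddQueueFunction("ProcessOrders", "input-items")
            .AddTimerFunction("NightlyCleanup", "0 0 3 * * *")
            .WriteSpecs();
}
```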
Thanks,
Adam
I'm not sure if my scenario is a perfect match for this, but what I would like to have is the possibility of re-using a function. From what I can understand, having a FunctionName attribute in a NuGet package shared across services does not work.
We are trying to build an event-sourced microservices architecture, and therefore pretty much all services (function apps) need the same CosmosDB-triggered event handler. The Azure Function method itself would not contain any business logic, just the wiring that maps from the CosmosDB Document type to the real event object and sends the event to the appropriate event handler defined elsewhere (using DI).
This could be done (at least in C#) by having some way of registering functions (or stating assemblies to scan) in Startup.
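To make the duplication concrete, here is roughly the wiring each service repeats today; the database/collection names, DomainEvent, and IEventDispatcher are placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json;

public class EventHandlerFunction
{
    private readonly IEventDispatcher _dispatcher; // placeholder, injected via DI

    public EventHandlerFunction(IEventDispatcher dispatcher) => _dispatcher = dispatcher;

    [FunctionName("HandleEvents")]
    public async Task Run(
        [CosmosDBTrigger(
            databaseName: "events",
            collectionName: "stream",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> documents)
    {
        foreach (var doc in documents)
        {
            // Map the raw Document to the real event type, then dispatch.
            var evt = JsonConvert.DeserializeObject<DomainEvent>(doc.ToString());
            await _dispatcher.DispatchAsync(evt);
        }
    }
}
```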
To be honest, after thinking about this more, the biggest problem I have is that the attributes actually dequeue a message from the queue.

In my case, there is a substantial (~1 minute) setup cost to get ready for processing messages, after which I can process multiple messages per second. So my code needs to dequeue more than one message per function run, and it's extremely irritating to have to mix the dequeuing that is automatically done by the attribute with the dequeuing done in code. If the function could be launched when there are messages on the queue, without dequeuing a message, that at least would be an improvement (although it still doesn't allow for a non-constant definition of the queue, which would be ideal).
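Roughly the mix being described, as a sketch: the trigger dequeues one message to wake the function, and the rest are drained by hand with the Azure.Storage.Queues client. ExpensiveProcessor is a placeholder for the costly setup:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BatchWorker
{
    [FunctionName("BatchWorker")]
    public static async Task Run(
        [QueueTrigger("input-items")] string firstMessage, ILogger log)
    {
        // ~1 minute of setup (ExpensiveProcessor is a placeholder).
        var processor = await ExpensiveProcessor.CreateAsync();
        await processor.HandleAsync(firstMessage); // dequeued by the trigger

        // Drain further messages in code, bypassing the trigger entirely.
        var queue = new QueueClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "input-items");

        QueueMessage[] batch;
        while ((batch = (await queue.ReceiveMessagesAsync(maxMessages: 32)).Value).Length > 0)
        {
            foreach (var msg in batch)
            {
                await processor.HandleAsync(msg.MessageText);
                await queue.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
            }
        }
    }
}
```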
@Adam4224 it sounds like you could really benefit from the FunctionsStartup functionality provided in the NuGet package Microsoft.Azure.Functions.Extensions. It allows your function app to do setup once per host, meaning the first message in the queue would still incur the setup cost, but all other messages (until the host is shut down) will be processed without it.
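For reference, the wiring looks like this; IExpensiveResource is a placeholder for whatever your costly setup produces:

```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Runs once per host instance, before any trigger fires.
            // Register the expensive setup as a singleton so every
            // invocation reuses it.
            builder.Services.AddSingleton<IExpensiveResource>(
                _ => ExpensiveResource.Create());
        }
    }
}
```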
Hi,
I read the docs. I was wondering: what are the time and memory limits on execution of the startup code? Does the time count towards the overall time limit for function execution?
Thanks,
Adam
I'm not sure, but I would expect that the same limits apply to startup as to everything else; however, since startup is called before the first trigger executes, it should not count towards the timeout limit of each function. Try using startup in your code and run it locally: you will notice that startup is called right away, whether or not there are any messages in the queue.