Per this customer issue:
This is a known limitation and, with the current design, not something that can be easily addressed. I'm adding a comment in the SO question as the root cause is not directly related to redirects.
Is there another way to work around the issues binding redirects are usually used for? For example, I want to use the WebJobs NuGet package, which requires 7.2.1 or higher of WindowsAzure.Storage. However, 8.1.1 is out, which updates a bunch of stuff, including adding support for block blob uploads. Then, when I declare a function as taking in an IQueryable<DynamicTableEntity>, I'm stuck - if I reference the 8.1.1 version to get the functionality I want, I get an error at function bind time that DynamicTableEntity doesn't implement ITableEntity.
Or is there another way to achieve this besides binding redirect support?
Is this worth looking at, @jorupp?
The development experience using this approach has been so much better for me.
@Blackbaud-MitchellThomas - it has the same basic issue - no binding redirects. In fact, I hadn't run into the issue myself until I switched to that approach yesterday and updated all my nuget packages to the latest (since I had the package GUI to tell me about it). I got binding errors because the latest Microsoft.Azure.WebJobs needed an older version of WindowsAzure.Storage, and didn't recognize the 8.1.1 types as matching the 7.2.1 types it had loaded (since there was no binding redirect to force them to both load the 8.1.1 types).
That's good to know that it's still a potential gap. I have a project running in that format, and it cleared up my flavor of these issues. But my problem was that it was erroring looking for a given .dll and couldn't find it, so I was able to figure out that I needed to supply it alongside the csx file.
Blocked on this work: https://github.com/Azure/azure-webjobs-sdk-script/issues/1319
Could there be a way to work around this temporarily by providing the data on what version of an assembly to use in another form (i.e. a JSON file) and handling the AppDomain.AssemblyResolve event? Just thinking out loud here.
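For illustration, a minimal sketch of that idea, assuming a hypothetical redirects.json that maps assembly simple names to target versions (the file name, its shape, and the use of Newtonsoft.Json to parse it are all assumptions, not anything the runtime supports today):

// redirects.json (hypothetical): { "log4net": "2.0.8.0", "Newtonsoft.Json": "9.0.1.0" }
// Requires: using System; using System.Collections.Generic; using System.IO;
//           using System.Reflection; using Newtonsoft.Json;
public static void RegisterRedirectsFromJson(string configPath)
{
    var map = JsonConvert.DeserializeObject<Dictionary<string, string>>(
        File.ReadAllText(configPath));

    AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
    {
        var requested = new AssemblyName(args.Name);
        string version;
        if (!map.TryGetValue(requested.Name, out version) ||
            (requested.Version != null && requested.Version.ToString() == version))
        {
            // Not one of ours (or already the target version) - let normal resolution continue.
            return null;
        }

        // Re-issue the load request for the version named in the JSON file.
        requested.Version = new Version(version);
        return Assembly.Load(requested);
    };
}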
Is there any possible workaround for this? We have just tried to migrate from WebJobs to Functions, but our application depends on Microsoft.Owin.Security 3.1.0.0, while internally ASP.NET Identity depends on Microsoft.Owin.Security 2.1.0.0. We cannot migrate to Functions as we would like to until this is supported.
This is a major issue. How can Microsoft state that Azure Functions are ready for use in production environments with such a limitation?
@jorupp that is one possibility we have considered (we already perform a fair amount of custom resolution). The work planned in #1319 will address this in a better and more consistent (not Azure Functions specific) way.
In the meantime, an approach that will usually address this need is to place the assembly you wish to use in a bin folder inside the function folder. The runtime will use this location as a fallback when probing for assemblies.
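For illustration, assuming a csx function named MyFunction and a private copy of the storage SDK (the names here are hypothetical), the layout would look roughly like this:

wwwroot/
  MyFunction/
    function.json
    run.csx
    bin/
      Microsoft.WindowsAzure.Storage.dll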
That doesn't work. If a strong-named assembly is already loaded into the AppDomain by Azure Functions, it will always use that one before loading a new assembly, and will then throw a manifest mismatch error for any strong-named assembly.
I think this is just a fundamental design problem in how Azure Functions were thought out. As a suggestion, the best thing I can think of is for Azure Functions to handle ALL assembly resolve events and have NO assemblies in any internal bin folders, loading all assemblies from byte arrays to ensure a "no load context" for all assemblies. Assembly resolve can then handle multiple versions of the same assembly loaded into a single AppDomain.
Basically, prevent any loading of any assemblies by the .NET runtime itself, as assembly resolution in .NET is a total mess anyway.
@jnevins-gcm that (as mentioned) doesn't work in all scenarios, but is a viable fallback for some. Much of what you suggest above is actually how things are handled already: private assemblies are loaded from byte arrays, without a load context, and side-by-side loading is supported. The Azure Functions assembly resolver establishes a function-scoped context and bypasses the .NET loader in those scenarios, and the bin folder mentioned above is not the application bin folder, but the function-scoped one (where assemblies are loaded using the method described above).
The details of how this works are a bit more complex, and there are scenarios where we must load assemblies differently, but if you are using private assemblies only, the workaround above works as described.
We are working on the longer term solution for this that will provide behavior consistent with a "regular" .NET application, but this is still a bit out.
I saw that work item. It seems like a very very bad idea. You're basically describing building a new remoting protocol.
The workaround doesn't work as described though, unless you're loading all the dlls ahead of time in the bin dirs irrespective of name, in which case one could name each conflicting dll according to its version. Is that the case?
*implementation not protocol
@jnevins-gcm I'll try to put together some documentation better describing this process and also the scenarios I'm referring to (I've been meaning to do that for a while).
For the out-of-proc issue, there are a lot of details missing there as well, and we're trying to update it so everything is out in the open and we can get more feedback, but the scope is significantly larger than just trying to run .NET in isolation.
I'll work with @christopheranderson to have more details on that issue so we can better discuss the approach.
I'm trying to reference a NetStandard library in my function and I'm getting:
mscorlib: Exception has been thrown by the target of an invocation. IoT.EuriSmartOfficeFunc: Could not load file or assembly 'System.Runtime, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies
This is probably due to this issue, because actually version 4.1.1.0 is deployed. Is there any way at the moment to get this assembly redirected?
@nickverschueren what version of the runtime are you running against? (Here's how to check)
If you're running on anything below 1.0.10917, please restart your Function App and try again (if you find you're running on 10917, please also retry as it was just released)
After having played with a few small functions, I started porting one of our existing applications that has a component that's a Windows service that runs Quartz cron jobs over to Azure Functions as a precompiled assembly. I ran into an issue where one assembly is referencing log4net 2.0.8 but Microsoft.ApplicationInsights.Log4NetAppender is referencing log4net 2.0.5.
Since I can't do a binding redirect, I guess my options are to recompile the AI appender with 2.0.8, recompile the reference to downgrade to 2.0.5, or just give up and revert to WebJobs?
Recompiling in this particular case isn't a huge problem, but the lack of binding redirect support kind of makes Azure Functions a ticking time bomb when it comes to maintenance and gives me some serious pause. Is there any kind of timeline for when binding redirects will be supported?
+1
Without binding redirects, Azure Functions with precompiled assemblies is basically impossible to use, as soon as you want to use an external NuGet package
After some playing around, I've solved this by using ILRepack/ILMerge, to merge all the problematic dependencies into the main assembly. This seems to have solved the binding redirect issues
NetStandard barely works at all in net461 runtime with or without Azure Functions...so best to not mix two problems and just dual compile your netstandard libraries for 461 as well.
That's a great idea about ILMerging! I had tried this but unfortunately not all dlls are conducive to being ILMerged into your own (use reflection internally etc). Plus you'll want to ILMerge WITH internalize to prevent surface area overlap problems.
I managed to work around my log4net problem by handling the binding redirect programmatically at the start of the function. I learned something new today! The idea is from http://blog.slaks.net/2013-12-25/redirecting-assembly-loads-at-runtime/ and could be adapted to read from XML file, etc. Here I just hardcoded my log4net version to see if it would work, and sure enough it did:
// Requires: using System; using System.Globalization; using System.Reflection;
private static void ConfigureBindingRedirects()
{
    // Hardcoded for log4net here; could be driven from an XML/JSON file instead.
    RedirectAssembly("log4net", new Version("2.0.8.0"), "669e0ddf0bb1aa2a");
}

private static void RedirectAssembly(
    string shortName,
    Version targetVersion,
    string publicKeyToken)
{
    ResolveEventHandler handler = null;
    handler = (sender, args) =>
    {
        var requestedAssembly = new AssemblyName(args.Name);
        if (requestedAssembly.Name != shortName)
        {
            return null; // not the assembly we're redirecting
        }

        // Rewrite the request to point at the version we actually ship.
        var targetPublicKeyToken = new AssemblyName("x, PublicKeyToken=" + publicKeyToken)
            .GetPublicKeyToken();
        requestedAssembly.Version = targetVersion;
        requestedAssembly.SetPublicKeyToken(targetPublicKeyToken);
        requestedAssembly.CultureInfo = CultureInfo.InvariantCulture;

        // Unhook to avoid re-entering this handler while loading the redirect target.
        AppDomain.CurrentDomain.AssemblyResolve -= handler;
        return Assembly.Load(requestedAssembly);
    };
    AppDomain.CurrentDomain.AssemblyResolve += handler;
}
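For this to take effect, the handler has to be registered before the CLR first tries to resolve the missing version; a minimal sketch of wiring it up from a static constructor (the class, trigger, and queue names here are hypothetical):

// Requires: using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Host;
public static class LoggingFunctions
{
    // Runs once per AppDomain, before the first invocation touches log4net.
    static LoggingFunctions()
    {
        ConfigureBindingRedirects();
    }

    [FunctionName("ProcessQueueMessage")]
    public static void Run([QueueTrigger("work-items")] string message, TraceWriter log)
    {
        log.Info($"Processing: {message}");
    }
}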
Now this won't help you if you need redirect something the host has already loaded, like Microsoft.WindowsAzure.Storage. Since the Azure Functions team "owns" these I think it's reasonable to believe they will have a compatibility story for these assemblies going forward.
But the above option seems to be workable for when you're bringing in a legacy DLL that has dependencies that probably won't ever be recompiled.
OMG this is overly complex!
Won't work for strong named assemblies though right?
In my case, log4net is strong named and it works there, but I don't think I have any control over assemblies that the host has already loaded, like Newtonsoft.Json. In my case, looking through my apps that I want to port to Azure, the binding redirects used in these worker roles are typically a few "usual suspects", most often just log4net.
I get the assembly binding redirect, but since you're loading a different version than the referencing assembly expects, I guess I'm confused why you don't get:
A. "The located assembly's manifest definition does not match the assembly reference" for OTHER dependencies that log4net itself has (i.e. a reference to yet another different version)
B. Two dlls for log4net loaded into the AppDomain, ending up with strange behavior for static fields referenced by different code
Can you explain please?
Thanks.
There's only one log4net DLL, version 2.0.8, which since I referenced it in the function itself, will already be loaded by the time I even get a chance to hook into AssemblyResolve. But when I call XmlConfigurator.Configure() and log4net starts looping through my appenders, the assembly resolver notices that Microsoft.ApplicationInsights.Log4Net wants 2.0.5 and can't find it on its own, so it will call my AssemblyResolve event where I hand it 2.0.8 instead (since 2.0.8 is already loaded, Assembly.Load() just returns the existing assembly ... I could also probably just loop through AppDomain.CurrentDomain.GetAssemblies() instead). MSDN says "The event handler can return a different version of the assembly than the version that was requested" so that part seems to be by design (https://msdn.microsoft.com/en-us/library/ff527268.aspx), you are "on your own" for loading an assembly that you think will work. That's about where my knowledge on this ends, though: my logging wasn't working before and now it is.
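A minimal sketch of that GetAssemblies() alternative, for reference (note it is broader than the targeted handler above, since it matches on simple name only):

// Requires: using System; using System.Linq; using System.Reflection;
private static void RedirectToLoadedAssemblies()
{
    AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
    {
        var requested = new AssemblyName(args.Name);

        // Hand back whichever copy of the assembly is already in the AppDomain, if any;
        // returning null lets the normal (failing) resolution continue.
        return AppDomain.CurrentDomain.GetAssemblies()
            .FirstOrDefault(a => a.GetName().Name == requested.Name);
    };
}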
Interesting. I've done this before, but I'm just surprised that you don't get "The located assembly's manifest definition does not match the assembly reference" because of the referenced assembly mismatch. It must be because the initial assembly is loaded into a "no load context".
Great idea!!
Obviously still doesn't work for dlls loaded WITH a load context (like dlls referenced by Azure Functions)
@npiasecki is correct, this will work for strong named assemblies as well (as previously mentioned, the runtime actually does some of that internally as well). @jnevins-gcm for reference, pre-compiled assemblies are loaded in the load-from context. Regular function assemblies are loaded without a context.
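For reference, a rough sketch of the three loading styles being discussed here (the path below is hypothetical):

// Requires: using System.IO; using System.Reflection;

// Default (load) context: subject to normal binding policy; one version per identity.
var a1 = Assembly.Load("log4net, Version=2.0.8.0, Culture=neutral, PublicKeyToken=669e0ddf0bb1aa2a");

// Load-from context: resolved from an explicit path.
var a2 = Assembly.LoadFrom(@"D:\home\site\wwwroot\MyFunction\bin\log4net.dll");

// No context (load from a byte array): permits side-by-side copies of the same identity.
var a3 = Assembly.Load(File.ReadAllBytes(@"D:\home\site\wwwroot\MyFunction\bin\log4net.dll"));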
@flagbug that is a good approach! As previously stated, we're working on documenting some of the options and details about the resolution/load process, but I'd be curious to hear more about your specific scenario.
I bumped into this today in a different way while porting over a different system. I hadn't noticed the Application Insights updated to 2.4.0.0 and my logging wasn't working, but when I downgraded to 2.3.0.0 it started working again.
I understand that the team has plans afoot to address this, either by introducing a new major version of the runtime when the "blessed" assemblies are updated or by separating the host entirely from the function and using some kind of IPC, but in the meantime, am I right that I should be consulting this file as the list of assemblies that have been loaded by the host, and that I shouldn't go newer than the versions listed there?
@npiasecki that works, but you might be slightly better off looking at our web.config here, as the binding redirect ranges in that file ultimately determine whether an assembly you introduce will be redirected to our internal version or not.
I hit this also today with Newtonsoft. I'm trying to run some of the management API stuff in the QueueTrigger function and it's trying to find Newtonsoft 6.0.0.0 at compile-time so I can't even binding redirect it in code. Is there any way to tell the .net core compiler to do the binding redirects for compile-time?
Any updates on assembly redirect?
It's still a while away, likely 6+ months. We're focused on proving the out-of-proc model can perform well with functional parity to the in-proc model by porting our JavaScript support over to it. We haven't started moving C#/F# yet, and these languages are even more challenging because the programming model is richer (you can bind to more types, etc.).
There's a possibility that in our .NET Core port we can make our existing binding redirects more aggressive, as this would be an opportunity to make some breaking changes. This might help in scenarios where you are trying to use a slightly newer version of a given dependency, but it's not the same thing as letting you specify your own binding redirects. We should know whether this change is feasible within the next month or two.
There's no way you can just do this via the simple, surefire implementation? I think most people would disagree with the direction and timeframe you're proposing.
@jnevins-gcm what implementation are you referring to? This thread has a discussion about some workarounds that can help in a subset of cases (e.g. if the assembly is not loaded by the host, such as log4net). Is that what you're referring to?
As you said, the workarounds don't support different versions of assemblies already loaded into the host's default load context. I can think of two fairly simple implementations (the second would be my preference):
1. Allow the function app package to include an app.config file or appSettings to add custom binding redirects. The pro of this is that it's simple. The con is that you allow the client to use versions of dlls that could potentially break the function runtime itself.
2. Don't load ANY dlls except WebJobs.Script into the default load context of the function host AppDomain. In other words, EVERY dll, including all dlls required by the function runtime, should be loaded into the LoadFrom context. Pro is that this gives full flexibility over the assemblies loaded. Con is that it's slightly more overhead to implement. The good news is that most dlls loaded by the function runtime already behave this way because of FunctionMetadataResolver.
I've been watching this thread for a few weeks in the hope of a resolution, and it's disappointing to hear that we're still 6+ months out. I might not be fully understanding the scope of the issue, but we've attempted to move our WebJobs to Functions twice now and abandoned it twice.
As soon as we have an external dependency on almost anything newish (which we invariably do) it just falls apart with versioning conflicts. Mostly seems to be an issue with assemblies already loaded internally - or dependencies of these. But also if two external assemblies have dependency on different versions of the same thing. It's particularly problematic with something like EntityFrameworkCore which has dependency on a whole bunch of other bits - a lot of which are newer than the internal bits.
The last batch of updates & preview tooling etc was a huge move in the right direction for us and it's a real shame that we're coming unstuck on this as the Functions model suits our use-case perfectly. As it stands though, for us it's really not usable beyond the simple "read from a queue", "write to a table" style function and it does seem to me that lack of assembly redirection is the main problem. That said, we might be trying to put a square peg in a round hole here and I'd much rather know that it's 6+ months out than be holding on for something which won't appear in the timescale we need it.
Also, it would be ideal to have the FunctionMetadataResolver itself read an app.config from the function app folder and use the binding redirects specified in that file to control the dynamic resolution. That would be great as it would resolve most issues out of the box without any intervention needed by the developer (including the scenario mentioned above by wimagee)
Yes we too have been trying to migrate some of the more complex pieces of code we have to the function model (anxiously awaiting updates). However even trying to reference the Azure management libs will cause the dependency resolver to die out with some conflict on Newtonsoft v6 and v9.
At this point we're staying with a complex, unfriendly powershell implementation because those functions behave correctly. Our options are now limited to implementing the rest calls ourselves completely, and losing the nice structure and types provided by the management libs or leaving in hard-to-test powershell.
@jnevins-gcm Approach 1 puts us in a difficult position when it comes to support - we're very reluctant to introduce a feature that allows users to break themselves in new and obscure ways that are difficult to debug. The particularly painful scenario is where a customer uses a binding redirect that works correctly, and then we make an internal change that works fine against our bits, but does not work against the redirected library. We roll that change out and the customer with the redirect gets broken in prod.
I'll let @fabiocav comment on the second approach you described.
I agree on approach 1 not being a great idea. Accordingly, I'd suggest approach 2 plus specifying a configuration for the FunctionMetadataResolver for wimagee's use case (two referenced assemblies referencing different versions of another assembly). (Note that this specific use case can already be handled today, painfully, by managing AssemblyResolve yourself... but expecting most/all developers to do that is unreasonable.)
I echo all of the concerns here. We had plans to move our large workloads to Functions, but this issue is a complete showstopper. DocumentDB, Json.NET, etc... these are core libraries used in nearly every workload. We have to take updates on them in our own libraries to continue moving our own apps forward.
Reading through, I understand that this is a complicated issue, requiring a significant refactor for the permanent solution... which hopefully is along the lines of isolating the runtime dependencies from those of the compiled function. 6 months though... we really need something we can work with in the short term, even if there is a risk of breaking production code. I mean, we're building this stuff with preview tools after all; this crowd is no stranger to things breaking or changing.
I actually like both options @jnevins-gcm proposed as _interim_ workarounds. In addition to that, I would propose a configuration option to opt out of automatic runtime updates (or specify a specific version or range). This could mitigate some of the scenarios where automatic runtime/bits updates cause unpredictable and breaking changes to our functions. I realize that has its challenges as well; however, I do think we need to find a balance that gives us this flexibility while still keeping the ease of use, auto scale, etc. we all love about Functions.
@jnevins-gcm regarding option 2, as you've pointed out, for most of the DLLs we load _in the context of functions_, we already follow a similar approach to what you're describing (actually initiating the load without a context, instead of load-from for assemblies brought in by NuGet package references, for dynamically compiled functions).
Applying that to core runtime dependencies, with the current model, has a few subtle challenges, and would still require some customer/function provided unification logic in the case of types used by function bindings, which is where many customers end up running into issues with mismatches and lack of assembly redirect support. This introduces a high risk of a breaking change to the current behavior, and fragility that may be very problematic, making the runtime susceptible to external breaking changes that would be very difficult to diagnose. We've explored this in the past and landed where we are based on some of those issues.
I do agree that it would be good to have something sooner than what we're planning as a long term solution, and we have plans similar to what you've mentioned in your comment here. The improvement would enable redirects to be applied within the scope of a FunctionAssemblyLoadContext, influencing how the metadata resolver loads those assemblies (what version it looks for). This enhancement, combined with some relaxation options, is very safe to introduce and would address one class of issues without requiring custom code.
It's worth noting that there are a couple of different classes of issues related to the binding redirect support, and this thread has a mix of them. The enhancement proposed above, as is, would not automatically resolve issues where there's "type interoperability" between the bindings and the function (e.g. a function that binds to CloudBlockBlob and references a given version of the storage SDK that differs from what the binding uses). We have other work (in addition to the long term work mentioned by @paulbatum) planned to mitigate those issues.
I've created this issue to make it easier to track this moving forward and will be adding details to it as soon as possible (based on the current plans, towards the end of the month): https://github.com/Azure/azure-webjobs-sdk-script/issues/1716
This type of issue breaks the serverless architecture concept. The goal is to execute code easily.
I also struggled 2 months ago trying to build my Azure Functions because "they are the future of WebJobs", and I eventually stopped because of the high level of complexity.
From now onward, I apply a "12 months before use" rule to any new product released to Azure, to ensure the product is stable and the feedback from users is positive.
Precompiled C# Azure Functions come with other issues, such as "hacking" the VS project to make it work. Creating precompiled C# Azure Functions is far from a single click on "New Project".
Rudy - parts of the issue still do apply to precompiled code.
Fabio, that's good to hear you are pursuing a shorter-term interim solution. Presumably that solution is not also six months out? Basically, the current state of Functions precludes its use in many/most non-hello-world scenarios.
@RudyCo Sure, but it's not hard and it's a reasonable workaround. VS2017 Update 3 (still in preview) might have better tooling, haven't checked.
@jnevins-gcm Precompiled C# allows you to set redirects, 99% of which are set automatically via NuGet. I'm using precompiled C# and I haven't found any similar issues. What are you seeing?
I am pretty sure using precompiled c# code doesn't actually use whatever binding redirects you have in your app.config...
@MisinformedDNA for a concrete example (using a "precompiled function"), try referencing the latest version of Autofac and Autofac.Extensions.DependencyInjection. Then call the ContainerBuilder.Populate(...) method. It will compile fine but you will get a runtime MethodNotFoundException. Things like this are forcing me to add extension method shims. Another example can be found with log4net and really any other assembly that doesn't match the versions referenced by the runtime, all of which will result in runtime errors.
Ignore everything I said. My packages aren't as up to date as I remembered. Carry on.
@MisinformedDNA pre-compiled functions do not have redirect support. The load behavior there is no different from references using the shared model (loaded in the load-from context).
A private assembly deployment is a current workaround where you have a binding failure that a redirect would normally fix, but it is a bit cumbersome.
@jnevins-gcm yes, the goal is to have that done in the short(er)-term to unblock some of these scenarios.
@anyone, could you please point me at a code example to implement a workaround? I'm getting errors like the one below:
System.MissingMethodException: 'Method not found: 'Microsoft.FSharp.Core.FSharpFunc`2<System.String,Microsoft.FSharp.Core.FSharpChoice`2<Json,System.String>> Json.get_tryParse()'.'
my assembly resolve handler does not catch anything useful.
I hate to pile on, but just as a data point, our migration to Functions is pretty much on hold until this can be resolved. We have way too many dependencies to juggle with NuGet packages that all want higher versions of libraries, like WindowsAzure.Storage 8.x, and being shackled to older versions is a showstopper. I have to agree with some comments above that 6+ months out for a solution relegates Azure Functions to more of a one-off tool for doing simple self-contained operation work, as opposed to what we were hoping for, which was to move parts of our app off of Service Fabric but still leverage our internal core shared library. Hoping this helps prioritize this, as I think Functions, especially with the new VS tooling and precompiled workflow, are a sorely needed tool in our toolbox.
This is an area where the .NET mechanism for loading assemblies is simply not designed to support this level of flexibility and control. It's extremely complicated. I worked extensively with Java classloaders on the JVM, and while they have their own issues and are getting an overhaul, they were capable of dealing with this situation.
I believe the situation warrants completely new functionality in .NET, and the engagement of the .NET core teams to discuss it and look at other languages to see what prior art exists. In short, to solve this problem effectively for Functions, or any other future "run code on demand" concepts, developers need the ability to create fully isolated and controlled mini environments. Anything based on App Domains isn't going to cut it.
I followed @npiasecki's post to add a custom handler to resolve the binding redirect issue. It is working locally but not after publishing to Azure.
I am getting the following exception:
System.MissingMethodException : Method not found: 'Microsoft.Azure.Documents.Client.TransientFaultHandling.IReliableReadWriteDocumentClient Microsoft.Azure.Documents.Client.TransientFaultHandling.DocumentClientExtensions.AsReliable(Microsoft.Azure.Documents.Client.DocumentClient, Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling.RetryStrategy)'.
So... any progress on this?
Hit the failure while trying to use ADAL auth in Azure Functions:
Microsoft.Azure.Services.AppAuthentication: Could not load file or assembly 'Microsoft.IdentityModel.Clients.ActiveDirectory.Platform, Version=3.14.2.11, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
The lack of support for app.config is confusing.
Is there a page with all the internally referenced DLLs and their runtime versions? For instance:
WindowsAzure.Storage: 7.2.1
Newtonsoft.Json: 9.0.1
I would recommend you look at the binding redirects in our web.config:
https://github.com/Azure/azure-webjobs-sdk-script/blob/dev/src/WebJobs.Script.WebHost/Web.config
However be aware that at some point soon the dev branch will switch over to tracking our .net core based 2.0 release. At that point there will be a v1.x branch and that is the branch you'll want to check for v1 function apps.
Is there any way to just get the easy stuff upgraded? I've tried several workarounds on assembly binding redirects to no avail.
Problem for me is I need Microsoft.Azure.Documents.Client to be v1.17.0 (currently at 1.13.0) in order to better handle JSON Serialization for CosmosDB. Should be a benign thing.
I'm running into this issue all the time. Lately, Azure Functions QueueTrigger is unable to deserialize CloudQueueMessage because it's using a different Azure Storage version than I am. I've had to move a lot of Azure Functions back to WebJobs and if there isn't a fix for this soon I might have to do a full on retreat. This is looking like Azure Function's tombstone to me.
Is there any progress? I am trying to use the latest Kusto client NuGet package, which requires Newtonsoft.Json version > 10.0.3, but Microsoft.NET.Sdk.Functions explicitly asks for Newtonsoft.Json version 9.0.1. Any workaround?
Would really appreciate an update from the team on this. It's a year now since this issue was raised and there's been nothing since July on even the short-term partial fix which has been bumped along each month #1716 . At that stage it was 6+ months to a full fix - my fear is that we're still 6+ months from a proper solution. If that's the case so be it, but we need some clarity as at the moment we're treading water in the hope that this comes through. We're at the point of deciding that it isn't going to happen even in the medium term and looking to other options for this kind of workload.
Added to the binding issues with .NET Standard 2 libraries (EF Core 2.0 etc.) in net462 projects and all the csproj changes, we're in a world of pain with .NET on Azure at the moment, unfortunately.
Adding @christopheranderson & @lindydonna - can you elaborate on this or include the relevant people, please?
Apologies for the lack of updates here.
Issue #1716 is the targeted fix for some of the scenarios impacting the current runtime version. As pointed out by @wimagee, the issue has unfortunately been bumped due to other competing work, but I am targeting this sprint (or next, at the latest) for completion.
It's worth noting that, as stated on the issue, that addresses some common issues with redirects that would typically require custom code, but not all (the feature will essentially allow for binding redirects that change how probing and loading happen in the function assembly context/metadata resolver).
@wimagee if you're referring to binding issues with .NET Standard and Functions (and not .NET Standard 2.0 in general), we have updates being deployed with the next release, due within the next week, to address problems with that.
This is also a high priority item for the 2.0 runtime, and while there hasn't been an update here, work is happening there to ensure this issue is addressed.
So is it correct that you don't support binding redirects and you don't expose something like a paket.lock file with all the dependency versions that you have? How can this possibly work?
Good news everyone!
Today I was able to publish a pre-compiled function targeting netstandard2.0 using all the packages I had problems with in the past, such as:
FSharp.Data 2.4.3
Newtonsoft.Json 10.0.3
Microsoft.Azure.Management.DataLake.Store 2.2.1
WindowsAzure.Storage 8.6.0
and... It worked without trouble!
You have to switch to beta runtime 2.0.11415.0 (beta) in Function App Settings though. But it works and it is great.
Good for you ;-)
But it doesn't solve the underlying issue that we can't pin our dependency versions.
@fabiocav @forki Can you please update us on when a fix would be available? Can't an alpha be released on nuget updating the dependencies to latest versions?
@shyamal890 There are prerelease nugets already available for Azure Functions 2.x / Webjobs 3.x with updated dependencies:
https://www.nuget.org/packages/Microsoft.Azure.WebJobs/3.0.0-beta4
If you create a brand new function app in Visual Studio and select the (v2) option it will automatically reference these.
That is sooo dangerous without redirects...
@forki Not sure what you mean? These are prerelease packages for the next major version.
I mean that it's hard for your users to guess against which versions they need to compile.
@paulbatum Hi Paul, I'm not sure what you mean by the line: "_If you create a brand new function app in Visual Studio and select the (v2) option it will automatically reference these._"
If i create a new Azure Function project in VS2017 v15.5.4, it creates it as a .NET Framework project, which has the old, lower dependencies. Where's the "v2" option you mention?
Thanks!
Ciaran
@ciarancolgan This is what I see when I create a new Function project:
(screenshot)
Make sure you update to the latest version of the tools:
(screenshot)
@paulbatum the Tools upgrade did the trick, thanks!
The pertinent question to ask is what are the breaking changes in V2 and does it support .NET 4.6 sub assemblies?
@paulbatum How do I upgrade an existing Functions 1.0.7 project to 2.0 in Visual Studio?
@shanfangshuiyuan Function SDK 1.0.7 is the latest and targets .NET Standard 2.0.
Today's Unreasonable Json.Net Challenge: Upgrade Functions SDK 1.0.7 to 1.0.8.
It's a patch release, How Hard Can It Be?™
@MV10 I'm not sure how this moves the conversation forward. Of the problems that you listed here, those that relate to assembly loading are not new. Other problems you mentioned (such as something about assuming names are global) would be better filed as other issues so that they can be investigated. Similarly, if we screwed up our semver in the 1.0.8 release that should be discussed in another issue.
@MV10 @paulbatum I'm not sure any of us know how to move the conversation forward anymore. I've abandoned Functions completely, after investing a lot of time and effort into them, and I get irrationally angry when anyone at Microsoft mentions them. That's been my solution to the problem.
@paulbatum As someone also really affected by this, I'll just say that my experience has been that people tend to complain and become loud on things they really love and want to see improve. That's the case here for Functions.
My company spends $ six-figures on Azure yearly. Last year we went to BUILD (2017) and I went to every Functions talk, spent literally hours on the exhibit floor with the Functions team strategizing, and we left sold that we could / should move our workflows to the new compiled functions. We got home, invested over a month of dev time, and then finally realized -- "oh man, we don't have any way to control the dependencies??" I really felt misled, to be honest. To be told something is RTM, and then be told over and over that we could bring our internal libraries over, wasn't accurate. It was a huge investment of time and a diversion of other priorities, but we had all the confidence. IMHO Functions marketing right now should have a huge asterisk on it so people don't have to find this thread to learn this limitation.
We've been told here, and in private emails to the team, that a solution was in the works, and we almost did what @MV10 did, which was plow ahead and just accept we'd have to live within the dependencies of Functions for the time being. But very quickly, other nuget includes start to conflict. We had to abandon Function all together and go back to Webjobs and Service Fabric.
We could argue over the tact of that post, but I'd say it's important for folks who are really stuck on this to speak up to help you and your team get "the higher ups" to give you more resources to make this a top priority. We've been told in the past that it is, but then not, as other things are more important and this keeps getting kicked down the road.
I just signed my team up for BUILD 2018 an hour ago -- I'm really hopeful that this year we'll be able make the switch. It'll be a huge cost savings for us, not to mention a simplification away from the frameworks around Service Fabric. No disrespect. ❤️to Functions -- we just want to use them.
I am generally not one to jump on the complain train, but this is a serious issue. We are currently shipping pre-compiled functions to production, and every time we alter underlying dependencies, possibly unrelated to the functions themselves, we risk breaking our functions at runtime. I could live with this if we had a repeatable and automated way to validate things had not been broken in development, but no such tooling exists yet and there seems to be zero momentum in that direction (and also no support from MS to help the community develop it). I am also strongly discouraged from building any more services on Azure Functions, because with each new service I am increasing my risk in proportion to the number of services I deploy.
We MUST be able to specify our own dependencies even at the expense of performance. This is possible today running dotnet core on AWS lambda.
Any chance the dependent libraries for Azure Functions with hard version dependencies could just be forked and refactored to use new namespaces?
Yes this. If Functions has to be its own walled off universe, maybe don't take a dependency on the json serialization library everyone else uses.
@scottrudy @BowserKingKoopa the problem here is not just internal use of serialization (and other) libraries in the runtime, but binding and type exchanges. Let's take storage, for example; you wouldn't be able to bind to a CloudBlockBlob from the SDK and use that as expected (which includes passing instances to other libraries and code you do not own) if the type was coming from a different assembly. Same applies to JSON.NET.
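To make the type-identity point concrete, a small standalone illustration (the paths are hypothetical; this is not how the runtime itself loads assemblies):

// Requires: using System; using System.Reflection;
static void TypeIdentityDemo()
{
    // Two copies of the "same" strong-named assembly, different versions.
    var v7 = Assembly.LoadFile(@"C:\demo\7.2.1\Microsoft.WindowsAzure.Storage.dll");
    var v8 = Assembly.LoadFile(@"C:\demo\8.1.1\Microsoft.WindowsAzure.Storage.dll");

    var blobV7 = v7.GetType("Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob");
    var blobV8 = v8.GetType("Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob");

    // Both print False: the CLR treats these as two unrelated types, so an instance
    // the host binds against one version cannot be handed to code compiled against the other.
    Console.WriteLine(blobV7 == blobV8);
    Console.WriteLine(blobV8.IsAssignableFrom(blobV7));
}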
@jaredcnance, I know this is a bit different than what you were asking for on the issue you've linked to, but have you explored using deployment slots for validation/staging? That's one of the options you'd have. If that doesn't work, can we move to another issue to discuss the challenges?
@fabiocav a bit different? They are one and the same. The entire purpose of testing is validating I haven't broken anything as I make changes. Now usually tests are intended for validating application logic, but I don't see why they can't also help us out with this issue until a solution is provided. Assuming it would be feasible to run tests on the same runtime as the functions themselves. Also, as I stated in that issue:
runtime binding errors should reproduce in tests. This probably means the test project should be executed by the function runtime
And yes, we are using staging slots today, but I—and I think everyone else here—would prefer to catch issues before we deploy. Suggesting that we deploy and just see if it works is not a solution.
We made a decision last October that this wasn't looking likely to get resolved anytime soon despite assurances that bits would be in the next sprint etc. We went with other options which, while disappointing, was clearly the right decision.
From my perspective I cannot think what could possibly be more critical than this issue and the lack of progress in nearly 18 months leads me to believe that my (and many others!) understanding and expectation of the product has been wrong. From a technical point of view I suspect Functions was always intended just to run simple single function code; data manipulation/cleansing, service integration etc and that our desired use-case of shifting in existing and complex .NET workload is not a priority, and may in fact never be possible under the current implementation. If this is indeed the situation then someone needs to say so and avoid further frustration for those of us trying to beat a round peg into a square hole. Even if it's a very desirable square hole! :)
@jaredcnance by being a bit different, I was referring to my recommendation, not the issue you brought up.
I still don't understand why WebJobs works fine but Functions doesn't. Isn't Functions built on WebJobs? How did Functions go and manage to break a fundamental part of .NET that's always worked?
@BowserKingKoopa The key difference between the two is the fact that with WebJobs, you own the host, while with Azure Functions, you do not and the host configuration is not something that can be modified when using the consumption plan.
You have the ability to run the Azure Functions runtime where you control the host outside of Consumption, which would give you full access to the host config and the ability to manage binding redirects, but that does take you away from the "serverless" execution and billing model. It is, however, an option that some customers have pursued.
Fabio, do you have any documentation for doing that? Are you referring to hosting in an App Service plan, or something else?
@fabiocav that's interesting. I hadn't realised it would be different running under app service plan rather than consumption. We'd lose a key benefit over webjobs as you say, but still ..
Yes, this would only work with an App Service plan. We have some documentation on that process on the repo Wiki: https://github.com/Azure/azure-functions-host/wiki/Deploying-the-Functions-runtime-as-a-private-site-extension
We make all private site extensions available for all the releases (although, I just noticed that some of the later releases are missing it, so I just updated the last and will make sure the others have it as well). The latest can be found here: https://github.com/Azure/azure-functions-host/releases/tag/v1.0.11535
For lack of a better word, I'll call this an _"advanced"_ scenario. So it comes with a couple of warnings:
If you do decide to take this route, you're no longer getting automatic runtime updates and you're in full control of the host (you decide when to update, etc.). I would recommend subscribing to this repo so you know when new versions are available and can decide whether you want to deploy the updates : https://github.com/Azure/app-service-announcements/issues
Often times, portal changes are coordinated with runtime updates, so if you rely on the portal, you may want to stay on top of the updates so you have the latest bits the portal may be relying on.
Thanks to @aarondcoleman for the thoughtful explanation of where he is at right now in regards to functions and this topic.
I wanted to say a few more words on this topic given that I manage the team at Microsoft that is ultimately responsible for fixing this issue that blocks so many potential users and is a significant source of frustration.
Firstly, it's evident that we have done a poor job when it comes to communicating the status and scope of potential improvements in the area of assembly loading - for that I apologize. Many of you have commented that you were told a while back that "the assembly binding issue is going to be fixed". If this type of blanket statement is what you heard, then we screwed up, because we've spent a significant amount of time investigating and evaluating potential improvements we can make, but none of them were a single silver bullet that solves all the different variations of this problem in one fell swoop.
For example, take https://github.com/Azure/azure-functions-host/issues/1716. Some of you saw us punt this work repeatedly and expressed frustration (understandably). But let me be clear - the function assembly load context referred to in that issue does not apply to pre-compiled functions (i.e. written as .cs files, compiled to dlls, typically from Visual Studio) - it's used by csx-based functions. So that issue describes work that would help a subset of csx cases, unless the scope of the work was increased even further to have pre-compiled functions use that code path, a change we'd have to be pretty careful about to avoid regressions. I think people were watching that particular issue and asking "why is this slipping" without the context of whether it would have even helped their particular case (because the issue description did not go into enough detail - our fault).
The reality is that we have to be pretty careful about changes we make to V1. Many changes we thought were innocuous have caused unexpected regressions - for example, take https://github.com/Azure/azure-functions-host/pull/2042. This change fixed issues that stopped users from using .NET Standard 2.0 libraries, but it broke our ngen image loading, causing significant regressions in cold start performance. The one issue in this repo that has more comments than this one? Yeah, it relates to cold start. Fixing this regression took weeks of investigation and ended up requiring us to wait for the .NET Framework 4.7.1 update on App Service.
My point is that over the past several months we have become less enthusiastic about making improvements to assembly loading that only apply for V1, only apply for a subset of scenarios, and bear regression risk.
The commitment I'll make to you is that we will make concrete progress on this problem in Functions V2 and you'll see evidence of this before it goes into general availability. We will do in-depth writeups that explain the problem space and outline potential solutions and we'll solicit your feedback. For example, we're optimistic about using the .NET Core LoadContext as part of some of these solutions. Some of these solutions will be implemented before V2 goes GA and for others we'll have done enough homework to be confident that we can add them after GA without risk of breaking changes.
But please note that I said progress - not that we will magically solve every different variant of assembly loading issues that can exist. The fundamental challenges are:
We are going to need more data and your guidance on what specific scenarios to focus on fixing. In particular, it's not so helpful if you tell us "let me specify binding redirects", because that is not a scenario - it's a potential solution to a set of scenarios. To get the ball rolling I spent a bit of time today putting together some truly trivial examples here. I did not think very hard about this format so it probably has issues, but I am convinced we need to develop some sort of scenario catalog, as it will help disambiguate the different issues at play.
Thanks for your patience. I'll be trying even harder than usual to be transparent about our efforts in this area and how we are prioritizing this work.
/cc @fabiocav to review in case I made any misstatements here.
For clarification, of the 4 challenges you list there, are all of those still going to be true as part of V2 functions? Seems like #1 isn't true for other languages, so it seems less like a fundamental requirement. It seems like C# could be run out of process as well. Obviously there would need to be something to handle the messages and route them appropriately, but the process would be user owned at that point.
@owenneil the assembly load enhancements are not yet present in the V2 bits, so you won't see much of a difference with the current preview.
Running .NET workloads out-of-proc is an option we are indeed considering. Some of the work to pave the way has already been done with the language extensibility model in v2
@paulbatum Thanks so much for that write up. That really clarified some things for me. But yikes. This sounds really bad. I just wanted to run my .NET code in the cloud, and Functions can't do it. Not only is .NET not a first class citizen here, it doesn't really work at all.
I wish that had been presented clearer in the marketing material for Functions. I wish I'd been told "You are not the target audience for this product. You should stay in WebJobs. We do not have a serverless consumption priced solution for .NET yet." Instead I had to figure that out on my own. Painfully. This is the biggest dead end I've ever been down in the Microsoft ecosystem.
Maybe in the future a solution will be found with something like .NET Core LoadContext. Or maybe we just need a new product that provides serverless, consumption priced cloud functions for .NET developers (not for JavaScript, Java, or C# Script developers. For .NET developers.)
@bowserkingkoopa time to look at lambda I guess.
@paulbatum Maybe you can update the function docs to explain this issue? For example, here is a comparison with webjobs where it would be very good to mention this issue: https://docs.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs Another place where it could fit is in the descriptions about how to write C# functions here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-class-library
For background, I was also hit by this problem when trying out Functions. In general I think the biggest issue is communication. I can understand that there are hard technical challenges, but without proper communication it becomes 10x worse. I have had Azure support repeatedly tell me to use Functions to solve a scaling issue, even after I referenced this issue and explained how it prevents me from using Functions.
This is a debacle. At what point during the development of Microsoft’s flagship serverless computing platform did the engineers realize it didn’t work with Microsoft’s flagship development platform?
I've been watching this off and on since I first hit this with my log4net redirect problem back in May. At the time I remarked that the lack of binding redirect support was a ticking time bomb. It was fine for apps being written at the time, since I have rarely wanted to experience the pain of updating dependencies for an existing app unless I really, really needed to, but I thought that it would get worse for new projects as the rest of the dependencies moved on with newer versions, and that File > New Project looked more and more frozen in time. I briefly considered changing my Functions to PowerShell, as I had also noted what @BowserKingKoopa did: that ironically .NET projects were adversely affected by the host simply being written in .NET.
We seem to have accepted that managing code dependencies is intractable from a maintenance perspective, so we just keep making copies of everything. First we started with bin deployment to at least make most dependency problems application wide instead of system wide, and with .NET Core we've moved up a level, and now we're copying the whole framework around. Then we sprinkle in some binding redirects with a hope and a prayer that some open source author hasn't gone nuts with breaking changes for the sake of "purity" in the latest release of their pet project. It's kind of crazy when you think about it, but rightly or wrongly, you basically just can't write non-trivial .NET code without binding redirect support.
I think the only way forward is to follow the language extensibility proposal and move the "C# language worker" and the functions host into separate processes. Adopt a stable suite of interfaces for the bindings that you support so that the functions host is always working against a stable interface. You can ship NuGet packages (installed with the consumer's code hosted in the language worker) that adapt these interfaces to the ever-changing versions of storage SDKs and other dependencies if you'd like to maintain the (admittedly cool) ability to bind to function parameters types declared in dependencies not controlled by the Functions team, like CloudBlockBlob.
I can't believe I'm writing this, but we've come full circle: we're talking about creating CGI on steroids.
The promise of Functions is amazing: just upload some code that conforms to said interface and poof! a magical runtime takes care of putting it on a box, running it in response to common events, scaling it out, failing over when the box blows up.
I think the Functions team built this system with a simpler intended use case. It was all lightweight functions in the cloud, and the emphasis was coding in CSX files in the browser. They were surprised when lots of developers took one look at this platform and realized the potential to eliminate a lot of DevOps work. I mean, why even bother fooling around with containers and Kubernetes and Quartz.NET for scheduling when I can just zip up the darn thing and Azure does it all for me? The stopgap solution to addressing this sudden demand was hastily allowing precompiled functions, and developers clamored to use it, but the hosting model was not a good fit. As with most things with Azure, marketing went a little off the rails with it, and the mismatch between what the docs say it does and what it actually does in a real project is very frustrating. Mistakes were made, and it's possible that Functions will end up supporting two different use cases: throwaway functions coded in the browser to meet an extract-transform-load need, and long-term "real" precompiled code that benefits from the hosting model.
My job is maintaining software for a warehouse. Every minute I'm managing dependencies, applying patches, poring over assembly versions, or upgrading code to work in a new operating environment, I'm not providing value. That's why I hope that V1 of the runtime, mistakes and all, will continue to run and be supported for quite some time. And when you move to an out-of-proc hosting model, not just for .NET but for all languages, you should be able to move all those V1 folks to the new hosting model with some middleware that transparently maps your new interfaces to the frozen-in-stone Storage SDK / Newtonsoft.Json dependencies that V1 is using. Then we never talk about it again.
Please keep at it, but I submit that the out-of-proc "language worker" model is the only way forward, and should be the team's primary focus. Fix it for .NET and you'll have a framework you can use for any other language you want to support.
Here's an example of a use case that is broken today (using VS 15.6.1 and Azure Functions Tools 15.0.40215.0):
When I hit http://localhost:7071/api/Function1, I get the error: System.Private.CoreLib: Exception while executing function: Function1. FunctionApp2: Could not load file or assembly 'Newtonsoft.Json, Version=11.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'. Could not find or load a specific file. (Exception from HRESULT: 0x80131621). So Newtonsoft.Json is not being found (?!?), and at that point I just throw my hands up and walk away from Functions. There are a few "sub-problems" here.
More to the point, it's been my experience that several developers play with Functions for a bit, hit this issue fairly quickly, and then end up not adopting Functions. Right now the sense I get from the community is that Functions is just OK, because the end user has to be fine with staying on older versions of libraries (which is Not Good these days); and Functions is not sufficiently good at meeting the need of putting a simple API over some business logic in a preexisting library.
I agree that the out-of-proc model is the most likely long-term solution; however, there are some additional difficulties for .NET that don't exist (AFAIK) for other languages. Specifically, the bindings for .NET are amazingly awesome, and that very awesomeness is what would make out-of-proc .NET more difficult. At the very least, each binding would need to be split into in-proc and out-of-proc components, with the communication between the two reduced to something absurdly simple, like a particular (strict) JSON schema, or a series of string name/value pairs. (It would have to be simple because the in-proc binding component would use the Functions host library versions, and the out-of-proc binding component would use the user dll library versions.)
But even with this approach, it might not be fully possible. The simpler cases should work fine (e.g., in-proc monitors the queue for new messages; out-of-proc does the message deserialization), but there are some advanced scenarios that may not work or may require much more cross-proc communication (e.g., binding a blob name to a field in a deserialized queue message). I don't use any of the more advanced functionality, so I'm not sure how far that rabbit hole goes. Also, bouncing back and forth between the in-proc and out-of-proc binding components could impact performance (especially if multiple bidirectional messages are required to fully bind all bindings). That said, this seems (to me) to be the solution that is most likely to work long-term.
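To illustrate the kind of split I mean (purely speculative on my part, not an actual Functions API), the contract between the two halves could be as dumb as this:

```
// Speculative sketch only - these interfaces and names are hypothetical, not part of any SDK.
// The in-proc half lives in the Functions host and only ever deals in primitive data;
// the out-of-proc half lives in the user's worker process and can use whatever
// library versions the user ships (e.g. their own Json.NET).
using System.Collections.Generic;
using Newtonsoft.Json;

// Host side: reduce a trigger (queue message, blob, etc.) to simple name/value pairs.
public interface IInProcBinding
{
    IReadOnlyDictionary<string, string> GetBindingData();
}

// User side: rehydrate the data with the user's own libraries.
public static class OutOfProcBinding
{
    // "QueueMessageText" is an invented key for this sketch.
    public static T DeserializeMessage<T>(IReadOnlyDictionary<string, string> bindingData) =>
        JsonConvert.DeserializeObject<T>(bindingData["QueueMessageText"]);
}
```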
@StephenCleary for the scenario you just described, I assume you were using the V2 preview, as this is not the behavior you get on V1. Be aware that the current V2 code has ZERO behavior for managing this; it doesn't even have the binding redirects that we own (V1 has those).
We agree that out-of-proc is the long term solution to this problem, and you are correct that the richness of our .NET bindings represents a significant challenge.
We're actively working on the problem. I expect we'll be sharing a more detailed plan in a matter of weeks.
@paulbatum Are you saying this is fixed in Functions 2.0?
@StephenCleary, thanks for sharing your scenario and your always valuable feedback.
As Paul mentioned, a lot of work is happening and we intend to significantly improve this experience in V2. This will include (likely in a preview/limited state when V2 GAs) an out-of-proc execution model.
One thing I wanted to mention is that we're also actively working on documenting and generating lock files with all of the runtime dependencies (starting with V1, and eventually V2), which will help with one of the issues you brought up.
@tomkerkhove absolutely not. Also treating this issue as "fixed" or "not fixed" is too black and white. There are many different scenarios being rolled up here under one umbrella. When we start talking about things being "fixed", it will be on a case-by-case basis.
Would an "easy" workaround be for azure functions to fork json.net at the version they are welded to and put it in a different namespace? It seems to me that a lot of problems people are having is the json.net dependency.
While Newtonsoft.Json may be one of the common ones, it's far from the only one. I've had issues with WindowsAzure.Storage quite a bit too. Not sure how complete a solution that only covers a few libraries would be (and how much you'd have to change even to get that one).
@Ian1971 This has been discussed above already, but the short answer is that this would not help in a bunch of scenarios because these scenarios require an exchange of types (i.e. you ask for JObject in your code, and our code passes you one).
Ok. Makes sense.
So I'm just checking whether this issue is the same one I'm experiencing, so I don't create a new bug report.
I have installed the NuGet package RazorLight (version 2.0.0-beta1), which has a dependency on Microsoft.Extensions.DependencyModel and installs version 2.0.3; however, I get the following error when trying to run the function:
Exception while executing function: Function1. RazorLight: Could not load file or assembly 'Microsoft.Extensions.DependencyModel, Version=2.0.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. Could not find or load a specific file. (Exception from HRESULT: 0x80131621). System.Private.CoreLib: Could not load file or assembly 'Microsoft.Extensions.DependencyModel, Version=2.0.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
From what I can research this seems to be because the Azure Functions extension I've installed on VS 2017 (all fully updated) already has Microsoft.Extensions.DependencyModel but at a lower version 2.0.0 which is causing a conflict.
Is my understanding of this correct and is there no current way to fix this or have a work around?
Thanks :)
I'm new to this issue also but I think your understanding is correct. One workaround would be to fork RazorLight and build it yourself against 2.0.0
@mrcarl79 I had a similar issue with Microsoft.AspNetCore.Mvc.Abstractions 2.0.2
Same error message. I changed my paket.dependencies like this:
nuget Microsoft.AspNetCore.Mvc.Abstractions ~> 2.0.1.0
and everything works fine right now.
Just lower the version with your package manager.
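For anyone using PackageReference instead of Paket, the equivalent pin would presumably look something like the fragment below (just a sketch; the version mirrors the constraint above, so adjust it to whatever the runtime actually carries):

```
<!-- Sketch: add a direct reference to the transitive dependency so NuGet resolves
     the lower version that the Functions runtime already loads. -->
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Abstractions" Version="2.0.1" />
</ItemGroup>
```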
Thanks @Szer and @Ian1971. As is often the case, I ended up with an unexpected workaround: I found a different package, DotLiquid, which actually works better for our requirements and doesn't throw any dependency errors, which means my V2 function now works again (until next time!).
Just so I understand... if I have a working V1 function deployed to Azure, say using the 1.0.8 Functions SDK, and I don't update any of the NuGet packages in the function project itself or in a referenced project, can I safely assume it will keep running without issues on the consumption plan?
@giemch Yes. We are as careful as possible to avoid making changes to what assemblies are loaded into V1 functions, such that if you got things working in Azure, they'll stay working. You don't need to regularly pull our latest nuget packages and deploy. If you are planning on updating some of your packages/dependencies, make sure you test that change in Azure in a separate environment from your prod env. This thread is about issues that come up during development or your first publish to Azure. Further questions about changes happening over time, after your app is successfully deployed and running in Azure, should be asked elsewhere.
We have added some information about this issue, details about how the binding process takes place in 1.0 and information about the enhancements being made to 2.0 in this wiki article.
We hope this helps add some clarity. Please continue to use this issue for any feedback/questions about what we covered there.
Hi @fabiocav - what happened to the Wiki page? It no longer seems to be available.... Thanks
@ciarancolgan The wiki link works for me
@brminnick now it works for me again too. How strange. I was getting a 'You do not have permissions to update this wiki' toast message and redirected to the wiki home page. It just magically started working now that I cancelled the toast and tried again. shrugs
@fabiocav
After reading the wiki article, it's not yet clear whether "binding redirect" support will be provided. There are other solutions mentioned (e.g., "Better isolation", "Out-of-proc worker"), but these don't share the same moniker as "binding redirect".
Without "binding redirect" support, there are a number of libraries that simply can't be used in an Azure Functions environment.
As a random real-world example, accessing Google Datastore from Azure Functions effectively can't be done ... it has to be in a webjob instead. Why? Google's dotnet dependencies are published at different times and depend heavily on semver conventions for resolving dependencies. Without "binding redirect" support, you will never have all your dependency versions perfectly aligned.
As Jon Skeet said in his response:
_"assembly binding redirects really are pretty crucial these days"_ (emphasis added)
Can you clarify whether I'm just being too pessimistic here? Will these "solutions" in fact resolve the "binding redirect" problem? If not, how exactly is the proposed solution different from "binding redirect"? What common situations won't be addressed?
I am having an issue. I added this NuGet package, and when I run the function I get: Could not find the assembly 'Newtonsoft.Json, Version=10.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed'
Please see https://github.com/Readify/Neo4jClient/issues/271
@jboarman you're correct that defining traditional assembly binding redirects is not something that will be directly supported by the runtime, but it would be in the out-of-proc model (as that model would be no different than any other .NET executable you deploy).
It's important to emphasize that, although the definition of binding redirects or access to the config file won't be exposed with the resolution and binding enhancements, the runtime will be handling the assembly unification for you (so your scenario, where assembly versions are not aligned, should work without additional configuration). This is very much along the same lines as how .NET Core works, with a few differences to support Azure Functions specific cases.
The goal of this work is to address resolution and binding issues, and eliminate as much as possible the need for user-defined/configured binding redirects. In short, binding redirection will be supported, the runtime will handle unification, but there won't be an explicit configuration defined by the user.
If possible, I'd encourage you to put together a concrete sample covering what you've mentioned and PR that here; we can then dive into the specifics of your scenario, covering how things would work there, and that scenario would also become part of the validation/testing we're performing with the enhancements in place.
Hi Guys, just want to first check I'm in the right place.
I'm writing an Azure Function which uses Microsoft.WindowsAzure.Storage to check the status of rehydrating blobs as they move from the Archive tier to the Cool tier.
To do this, I need to check the StandardBlobTier field of the blob properties. I believe this is a fairly new property in the Microsoft.WindowsAzure.Storage nuget package, so I need to target the latest version.
I target version 9.1.0 in my project.json but I still get the error: 'BlobProperties' does not contain a definition for 'StandardBlobTier'. What version of WindowsAzure.Storage is automatically being loaded into the Azure Functions environment, and how do I override it?
@georgeharnwell I think it is Microsoft.WindowsAzure.Storage 7.2.1.0 according to this
Eugh, so I guess the next question is, is it overridable and if so, how?
@georgeharnwell that's the whole point of this issue :)
(You can't override it)
@Szer - so this SO answer isn't correct? In that uploading the correct version as a DLL in the bin folder doesn't work?
@georgeharnwell I think it is incorrect. You could reference the newer dll, but it will throw a MissingMethodException ("Method not found") at runtime, just like in your case.
@georgeharnwell - I'm guessing you've defined your function to take in a reference to the blob (or the blob container/client), right? If you're doing that, the WebJobs SDK is in charge of creating that instance you're passed.
Try changing your code to open the blob yourself via a new CloudBlobClient (even if you have to read the URL/etc. off the blob that's passed in to find the exact blob you need). You may even try logging the assembly-qualified name of the type of the object you're passed in vs. what you get when you create it yourself to confirm what version of things is getting loaded.
I think the SO answer you referenced (and the other approaches discussed in this thread) will only work if your code is responsible for creating the instance - no matter what you do, the WebJobs SDK will always be creating v7.2.1.0 Microsoft.WindowsAzure.Storage objects.
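Something along these lines, for example (a rough sketch; `log`, `inputBlob`, and `connString` are assumptions about how your function is declared):

```
// Rough sketch: compare the blob instance the WebJobs SDK binds for you against one
// created by your own code, by logging their assembly-qualified type names.
using Microsoft.Azure.WebJobs.Host;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class StorageVersionCheck
{
    public static void LogStorageTypes(ICloudBlob inputBlob, string connString, TraceWriter log)
    {
        // Instance handed to the function by the host (built with the host's storage SDK).
        log.Info($"Bound blob type:   {inputBlob.GetType().AssemblyQualifiedName}");

        // Instance created by user code (built with the compile-time referenced storage SDK).
        var client = CloudStorageAccount.Parse(connString).CreateCloudBlobClient();
        var selfCreated = client.GetContainerReference("container").GetBlockBlobReference("blob");
        log.Info($"Self-created type: {selfCreated.GetType().AssemblyQualifiedName}");
    }
}
```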
@jorupp - thanks for the answer although I'm not entirely sure I understand. Maybe it'll help if I provide my code snippet:
```
// Requires a compile-time reference to WindowsAzure.Storage 9.x (as targeted above)
// for BlobProperties.StandardBlobTier.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var storageAccount = CloudStorageAccount.Parse(connString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// Retrieve a reference to the container and the blob within it.
CloudBlobContainer container = blobClient.GetContainerReference(containerName);
CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);

// Populate the blob's properties, then check its access tier.
await blockBlob.FetchAttributesAsync();
return blockBlob.Properties.StandardBlobTier.HasValue
    && blockBlob.Properties.StandardBlobTier.Value == StandardBlobTier.Cool;
```
Is it clear what I'm doing now?
@georgeharnwell, yep, it's clear what you're doing, and my suggestion was to try doing exactly what you've done, so I guess it won't help :(
One of the ways you can set up a function is to accept the CloudBlockBlob object as an argument to your function, which I thought might have been the problem.
The only other thing I can think to try is overriding the normal assembly resolution and loading the version you want from the bin directory (something like https://github.com/Microsoft/BotBuilder/issues/2407#issuecomment-325097648). For that to work, you'll have to make sure that your compile-time reference to Microsoft.WindowsAzure.Storage is 9.x or higher, that you've deployed that assembly to the bin directory (so it can be found), and that you put the code you pasted above in a separate method from the one where you set up the assembly resolve hook I linked to, which you then call after the hook is in place. This is because all types used in a method are loaded when the method is run for the first time, so you need to get the hook in place before those types load.
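For illustration, a minimal sketch of that kind of hook (my own untested approximation; the assembly name and bin-relative path are assumptions):

```
// Untested sketch of an AppDomain.AssemblyResolve hook that serves a specific assembly
// from the function's bin folder. Call EnsureHook() before any method that touches
// Microsoft.WindowsAzure.Storage types runs for the first time.
using System;
using System.IO;
using System.Reflection;

public static class StorageAssemblyRedirect
{
    private static bool _hooked;

    public static void EnsureHook()
    {
        if (_hooked) return;
        _hooked = true;

        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);
            if (!string.Equals(requested.Name, "Microsoft.WindowsAzure.Storage",
                               StringComparison.OrdinalIgnoreCase))
                return null; // let default resolution handle everything else

            // Load the copy deployed next to the function app's own assembly (assumed 9.x).
            var binPath = Path.Combine(
                Path.GetDirectoryName(typeof(StorageAssemblyRedirect).Assembly.Location),
                "Microsoft.WindowsAzure.Storage.dll");

            return File.Exists(binPath) ? Assembly.LoadFrom(binPath) : null;
        };
    }
}
```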
Yes, it's ugly, yes, the team knows this situation isn't good (scroll up this thread a bit), and yes, they've assured us they're working on it and that it'll get a bit better in v2 (https://github.com/Azure/azure-functions-host/issues/992#issuecomment-373799381), and much better when they add support for running your code in your own proc sometime after that.
Thanks for your answer @jorupp; it does sound tremendously hacky, but I'll look into it and give it a go. I'd really like to retire the web job that currently performs this, so fingers crossed.
Getting conflicts trying to use "Google.Cloud.Firestore" Version="1.0.0-beta03" (or any other beta version) together with "Google.Protobuf" Version="3.5.1". Getting the exception: Could not load file or assembly 'Google.Protobuf, Version=3.5.1.0'. I believe this project uses version 3.3.0, but all the Google.Firestore betas seem to require >= 3.4.1. Any suggestions?
<PackageReference Include="Google.Cloud.Firestore" Version="1.0.0-beta03" />
<PackageReference Include="Google.Protobuf" Version="3.5.1" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.13" />
<PackageReference Include="WooCommerceNET" Version="0.7.4" />
Apologies in advance if that's a ditto of everything that's been said already; just trying to summarize the current situation and its implications.
Thanks in advance for any clarification!
@ThomasWeiss, take a look at https://github.com/Azure/azure-functions-host/blob/dev/src/WebJobs.Script.Host/App.config We’ve been using it as a guide.
I understand from other threads that your question #1 would be a yes and #2 would be a no, but someone with better understanding should confirm.
@ThomasWeiss
First thing to note is that some of the answers depend on whether you are talking about functions V1 or functions V2.
Second thing to note is that functions V2 is still WIP so the answers might change in the future, while for functions V1 there is no further work happening in this area so the answers are likely to stay the same.
It has two functions that use a newer version of Json.NET. The function UseJObjectInResponseViaStringContent works because no exchange of types occurs. The function UseJObjectInResponseDirectly does not work because it relies on exchanging types.
This situation in V2 is improved - many scenarios that include an exchange of types now work even if you are using a different version of a given library.
For V1, I prefer to point people to https://github.com/Azure/azure-functions-host/blob/v1.x/src/WebJobs.Script.WebHost/Web.config as the binding redirects there are actually the ones in play when you run on Azure.
Once a given version of Functions goes GA, we try to be extremely careful about any dependency upgrades. Once your code is up and running in Azure, we intend for it to stay that way.
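To make the distinction concrete, here's a rough sketch of the two patterns (illustrative code of my own that mirrors the function names mentioned above, not the actual sample):

```
// Illustrative only. Both are precompiled V1 HTTP-triggered functions; the project
// references a newer Json.NET than the one the Functions host loads.
using System.Net;
using System.Net.Http;
using System.Text;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json.Linq;

public static class JsonVersionExamples
{
    // Works: the JObject never crosses into the host - only a plain string does.
    [FunctionName("UseJObjectInResponseViaStringContent")]
    public static HttpResponseMessage ViaStringContent(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        var payload = new JObject { ["hello"] = "world" };
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json")
        };
    }

    // Fails on V1 when versions differ: CreateResponse asks the host to serialize the
    // JObject, so the host's Json.NET and the app's Json.NET must agree on the type.
    [FunctionName("UseJObjectInResponseDirectly")]
    public static HttpResponseMessage Directly(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        var payload = new JObject { ["hello"] = "world" };
        return req.CreateResponse(HttpStatusCode.OK, payload);
    }
}
```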
In addition to the Web.config link @paulbatum has linked to, we also have a lock file with all of the runtime's dependencies, and their versions, here: https://github.com/Azure/azure-functions-host/blob/v1.x/src/azurefunctions-v1-paket.lock
Thanks @HMoen @paulbatum and @fabiocav. I should have clarified that I'm using v2 indeed.
Also running into this issue when using the Bot Framework together with Azure Functions. Would be helpful to get this supported.
We had some issues with Azure Functions using the Microsoft.Azure.Devices, Microsoft.Azure.Devices.Client and Newtonsoft libraries, in relation to using AMQP as the communication protocol, and I got the advice to use binding redirects via the following link:
https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/bindingredirect-element
I implemented this (using an app.config) in our Azure Functions project, and it worked, solving all issues. I wonder, though - is this a supported feature?
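For reference, the bindingRedirect entries that doc describes look roughly like the fragment below (the assembly name, token, and versions here are only illustrative, borrowed from errors quoted earlier in this thread, not our actual config):

```
<!-- Illustrative app.config fragment; not an officially supported Functions mechanism. -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```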
Any update on this? What's the current state of Azure Functions with regard to using the dlls I give it instead of crashing? https://github.com/Azure/azure-functions-host/wiki/Assembly-Resolution-in-Azure-Functions hasn't been updated in a while.
@BowserKingKoopa we've been covering the updates on the preview release announcements. Most of the enhancements mentioned on that document are now live in the 2.0 preview. The vast majority of the binding issues have been addressed, with a couple of additional fixes coming as part of a larger set of changes to address storage dependencies.
Please give that a try and let us know if you run into any issues.
@BowserKingKoopa I've been using v2 with no binding problems.
@fabiocav, is there a description available somewhere on how to do binding redirects with Azure Functions? Or will it all just magically work if there's a newer version of the DLL loaded than the one explicitly referenced?
@devedse
As I mentioned above we've had our share of binding issues as well - and I got the following link on how to do binding redirects in another thread (in this one: https://github.com/Azure/azure-amqp/issues/110):
https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/bindingredirect-element
I implemented this (using an app.config) in our Azure Functions project, and it worked, solving all our issues. I don't know, though, whether this is a supported feature.
@devedse there is no explicit binding redirect support. Redirects and unification are done implicitly by the runtime, using a few different sources of information depending on the model (e.g. your published artifacts, deps files and extensions registrations). So the short answer is that, in 2.0, it should "just work" without any additional steps.
@Joehannus I would like to dig a little further into your question, as I'm surprised that you were able to resolve this type of problem with an app.config within your function app project, but it might be due to me overlooking some important factors.
I would prefer not to dive into the details here as this thread is already very long. Do you have a minimal, standalone repro that demonstrates how adding the app.config resolves the issue? If so, it would be great if you could file a separate issue (in this repo is fine) with the repro details and mention me there so I can look further.
@paulbatum the reason I've tried this is that somebody on a different thread gave me that advice (https://github.com/Azure/azure-amqp/issues/110). I've had lots of issues with the Azure.Devices libraries, either to get specific functions to be called, or even to get functions started at all due to binaries that could not be resolved - and after adding the app.config it seemed as if the starting issues were solved. But I've tried so many combinations of packages versions, to get multiple functions to run correctly, that solving the start issues just might have been coincidental, and that it was not due to the app.config.
To test this I've just installed the latest versions of all packages, and also removed the app.config. The functions still run without issues, so it looks like it was indeed coincidental.
@fabiocav I can't remember any statement that this issue would ever be resolved on V1, and given that binding redirects work in V2, what is still left before we can close this issue?
While the V2 enhancements are welcome, they require that all dependencies be supported under .NET Core. We currently depend on libraries that only target the .NET Framework and therefore cannot be used in V2, as well as other dependencies that fail to load under V1 due to its assembly resolution restrictions, which means that we cannot use either V1 or V2 successfully. We have a (brittle?) workaround for V1 based on AppDomain.AssemblyResolve, but there are still certain things that we cannot do, for example, use an ILogger instead of a TraceWriter in our Functions.
I have an open issue with more details here.
And a PR to the functions-assembly-loading-catalog here.
For the purposes of the discussion on this particular github issue, I think it would be helpful to exclude scenarios that involve a library that does not support .NET Core. If you are curious about the advice I gave @f2bo, see this comment.
For further discussion of scenarios where Azure Functions V2 would run on .NET Framework and allow you to use libraries that don't support .NET Core, see here:
https://github.com/Azure/Azure-Functions/issues/790
@Joehannus thanks for getting back to us! I am really glad to hear that those packages are working for you on Functions V2 without any further configuration/changes.
Just a small update. We are wrapping up changes that will address one of the remaining assembly loading issues that we are tracking for V2 GA which is providing flexibility around which version of the Azure Storage SDK you can reference. The release that will have this change has been announced here:
https://github.com/Azure/app-service-announcements/issues/129
What this change does is move Azure Storage bindings into an extension, similar to everything else. This will allow for multiple versions of the storage extension to exist, each referencing a different major version of the Azure Storage SDK. You pick which version you reference in your function app based on your needs.
I should note though, we will be starting with Azure Storage SDK 9.x and do not plan on publishing RTM releases of this extension that target earlier major versions (such as 7.x or 8.x). We will of course publish a new version of the extension when the Azure Storage team releases a new major version of their SDK.
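If it helps to visualize, referencing the split-out extension from a function project would presumably look something like this (package id and version are my assumptions based on the announcement; check the release notes for the actual values):

```
<!-- Assumed package id/version for the storage extension described above. -->
<ItemGroup>
  <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="3.0.0" />
</ItemGroup>
```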
> What this change does is move Azure Storage bindings into an extension, similar to everything else.
@paulbatum
I've just tested FSharp.Core 4.5.2 (newest release) and it works! (ValueOption, Map.TryGetValue)
AFAIK it was pinned to version 4.2.3 in the V2 runtime.
Was it upgraded to the latest version, or does it work the same way as Azure Storage now?
Can I just upgrade FSharp.Core or Newtonsoft.Json as I wish?
Regarding FSharp.Core: this was solved in: https://github.com/Azure/azure-functions-host/issues/2881#issuecomment-406781642
With Azure Functions 2.0 reaching General Availability, we have now completed the work to have this issue resolved for production workloads.
We truly appreciate all the reports, repros and patience while we worked on this during the preview.
Really impressive work!
It seems I am having issues referencing functionality in a class library from an Azure Functions project. Has anyone in this thread run across this issue so far?
@pinaki1234 can you please open a separate issue with the details of what you're seeing and your current setup? That would help us better understand the issue you're running into. Thanks!