Hi
Is this the right repo to expose assembly versioning/binding issues?
If you can help me redirect it, that would be amazing. I just think the versioning madness needs some exposure with your teams, and I am trying to help.
At the moment we are publishing packages in Castle.Core where we have gone with a major-version-only strategy using the `AssemblyVersionAttribute`. The reason we are doing this is that we have heaps of users on our issue tracker complaining about transitive dependencies: Castle.Core is used, for argument's sake, by frameworks like Moq, and upgrades are throwing up assembly binding errors. They sometimes make the mistake of making Castle.Core an explicit dependency; granted, we try to steer them in the right direction. However, I would like to know what is happening inside the CLR to cause this, as we are hacking our way through the SDK to sort it out.
Can you please advise on this, it is a big problem for the open source community.
Many thanks
Could you perhaps give an example that concisely represents the problem?
It is difficult to state concisely. I will do my best.
At the moment all of our versioning is in the format X.X.X. The SDK supports this.
So for example:
Moq 4.7.127 is compiled against Castle.Core 4.1.1 on NuGet, but if a user is also using Castle.Core features directly, they want to manage that version directly too. We release Castle.Core 4.2.0 with some bug fixes/enhancements, and immediately they see that the package is eligible for an upgrade via Visual Studio. If they don't have the correct assembly bindings, the result is a `FileLoadException` and all their tests stop working, because the version loaded at runtime does not match the assembly version compiled into Moq's manifest.
There are a number of workarounds.
Now you are probably wondering what the problem is here. There are workarounds for all of this, and you would be right in thinking that; however, issue trackers everywhere get littered with people struggling with this problem. I have seen it referred to as "DLL Hell" many times before.
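For context, the usual workaround on the .NET Framework side is an assembly binding redirect in the consuming application's config. A minimal sketch of what that looks like for the hypothetical Castle.Core 4.1.1 → 4.2.0 upgrade described above (verify the public key token against your own binary):

```xml
<!-- app.config of the consuming application (versions are illustrative) -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Identity of the assembly being redirected -->
        <assemblyIdentity name="Castle.Core"
                          publicKeyToken="407dd0808d44fbdc"
                          culture="neutral" />
        <!-- Any reference in this range is redirected to the version actually deployed -->
        <bindingRedirect oldVersion="0.0.0.0-4.2.0.0" newVersion="4.2.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

This is exactly the boilerplate that users keep getting wrong (or that tooling fails to generate), which is what fills up the issue trackers.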
It feels like NuGet has incompatible versioning strategies to how the runtime deals with this. Is there ever going to be a bit of work kicked off on the runtime side of things to harmonise the two?
A lot of this is managed through the SDK, but it assumes that you have upgraded everything to the latest and are fully up to date with the evolving APIs to manage this properly. This makes things really hard, and I am sure there are some people out there who don't always have the luxury of simply upgrading because they work in restrictive managed environments.
I guess what I am asking for is a better understanding of whether there is anything on the roadmap on the runtime side to start relaxing these exceptions under certain conditions?
This is an example of how the problem propagates through the SDK all the way down to testing: https://github.com/Microsoft/vstest/issues/1098
> It feels like NuGet has incompatible versioning strategies to how the runtime deals with this. Is there ever going to be a bit of work kicked off on the runtime side of things to harmonise the two?

I believe this is the core issue concerning the CoreCLR and CLR teams here.
It is very difficult for open-source library authors and maintainers to get versioning right. We need to ensure that each release of a library:

- has the right NuGet `<version>`, one that will play well with NuGet's semver resolution & package updating mechanism;
- has the right `[AssemblyVersion]`, so that consumers don't run into `TypeLoadException`s et al. at runtime.

(The last point is a sad reality these days, because like @fir3pho3nixx said, people don't always use the latest tools, and tools do get it wrong.)
So to keep everything playing together, a library maintainer might do the following in practice:
Be realistic and put NuGet in the front seat when it comes to choosing the right version: These days, I dare say that libraries are brought into projects mainly via NuGet, that is, they don't end up in the GAC via some MSI installer or similar... if there is a GAC at all. They end up in the local binary folder after compiling a project. As far as I can see, the GAC has lost a lot of its former relevance today (but it isn't completely dead yet).
Keep the runtime's assembly strong-name matching algorithm from interfering with NuGet. In theory, this could be done by giving each library release an `[AssemblyVersion("0.0.0.0")]`. This would effectively disable the CLR's version matching (other parts of the strong name would still be relevant, though). Then the only party doing any effective version matching would be NuGet.
Using a `0.0.0.0` assembly version everywhere and forever would mean you can no longer have different releases of the same library in the GAC. As long as the GAC still exists (at least on the full CLR), this is therefore not a good option.
So, as a compromise, we end up using a very coarse-grained `[AssemblyVersion]` scheme such as major-only or major-minor-only. This is unspecific enough for patch releases to share the same strong name (thus assembly binding redirects aren't needed as much, and tooling mistakes become forgivable), but fine-grained enough that minor releases can still be put in the GAC as separate entities, if anyone wishes to do so.
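In an SDK-style project, that compromise can be expressed by splitting the three version properties, so only the strong-name version is coarse-grained while the package and file versions stay fully specific. A sketch, with placeholder version numbers:

```xml
<!-- .csproj (SDK-style); version numbers below are placeholders -->
<PropertyGroup>
  <!-- NuGet package version: full semver, drives package resolution -->
  <Version>4.2.1</Version>
  <!-- Strong-name version: major-only, so all 4.x releases share an assembly identity -->
  <AssemblyVersion>4.0.0.0</AssemblyVersion>
  <!-- File version: keeps the real release number visible in the DLL's file properties -->
  <FileVersion>4.2.1.0</FileVersion>
</PropertyGroup>
```

With this shape, upgrading from 4.2.0 to 4.2.1 changes the package and file versions but not the strong name, so no new binding redirect is needed.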
I understand that the CLR needs to take the assembly version into consideration for everything that is actually in the GAC. But for everything else (such as DLLs in a project's local binary folder), I wonder: Couldn't the CLR just disregard assembly version numbers? What purpose do they still serve, given that NuGet has already performed version matching before compilation?
(This question is mostly a rhetorical one and doesn't require an immediate answer. I certainly don't want to hijack this issue, so please focus on @fir3pho3nixx's posts above. My own long post is simply intended to shed some light on the same issue from a slightly different angle.)
@stakx - Hijack away. You have filled in a couple of blanks as far as I am concerned. Thanks for posting.
On a side note unrelated to the rest of my reply:
> an assembly could be put in the GAC
:arrow_up: Do not do this. As a library author, the presence of one of my libraries in the GAC represents a totally unsupported scenario; I assume that it will break all applications that use the library and assume that this was the intended purpose of placing it there.
Most open source libraries I maintain use the "coarse-grained" approach for the `AssemblyVersion` attribute, though I use major.minor.0.0. One downside of this approach involves the behavior when a bug fix is introduced in a patch release: even if the user attempts to use assembly binding redirection, it becomes impossible to ensure that the bug fix will be in effect at runtime.
In addition to coarse-grained versioning, I use a different strong name key for all pre-release binaries. This provides additional stability for clients who reference "stable" releases, since assembly binding redirection will never substitute binaries where the public key token differs.
The last time I put thought into this, I ended up writing down the versioning rules as part of a breaking changes policy for a library. While this addressed some questions to my satisfaction, I remain unhappy with a few aspects of it:
I would love to see the important customizable characteristics of these policies be written in terms of a few options, which could then be fed into a tool that automatically enforces the policy at build time (within some bounds since not all issues can be detected automatically).
:memo: The scariest part of this for me is every time I put a lot of thought into assembly versioning, the conclusion was different in one or more key aspects from the previous time. 😨
@fir3pho3nixx - just so you know, I'm not on the core team. I'm just an interested party.
/cc @terrajobst
I'd just like to add here that on the tooling side (NCrunch), I've found troubleshooting assembly resolution issues under .NET Core to be an absolute nightmare due to the lack of available information when bindings fail.
Under .NET Framework, we had Fusion, which was amazing for its ability to list every attempted resolution and usually give a clear idea what the framework was trying to do.
But now all we get is a binding error for a missing assembly. All the logic for resolving this is baked into the framework itself with no easy way to pick it apart and understand why it's behaving the way it is. Even when people are able to supply me with test solutions to reproduce the problems, they often don't show up because of environmental differences, such as different SDKs installed on the machine or different configuration through fallback directories.
I really think that with a little effort, the reporting of failed bindings could be improved. I expect this would save many people hours of guesswork both inside and outside MS.
This is, of course, assuming that there isn't already some hidden way to troubleshoot these problems. If so, please do share this for tortured souls like mine.
Assembly binding redirects are a sore point. Technically, this doesn't affect CoreCLR (this repo), as .NET Core has a better binding policy and will happily load the higher version. The problem is the CLR, i.e. the .NET Framework runtime. There we use an exact binding strategy and fail. There are discussions happening right now to see if we can update our binder to make binding redirects unnecessary in the 80% case.
> I really think that with a little effort, the reporting of failed bindings could be improved
If you set the `COREHOST_TRACE=1` environment variable, it will print a detailed log of how assemblies got resolved. If this does not help, could you please give us concrete examples of problems that you had a hard time diagnosing?
> If you set the `COREHOST_TRACE=1` environment variable, it will print a detailed log of how assemblies got resolved. If this does not help, could you please give us concrete examples of problems that you had a hard time diagnosing?
This is an absolute gem. Thank you so much for sharing this. Is there any way this trace output can be redirected somewhere? When I use this, it always seems to be dumped into the open console window and can't seem to be piped to a file. If I could capture this data programmatically, the results would be very exciting.
The trace prints to stdout/stderr only: https://github.com/dotnet/core-setup/blob/master/src/corehost/common/trace.cpp . If you would like to see more options for it, please open an issue in https://github.com/dotnet/core-setup/ .
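Since the trace goes to the standard streams, plain stream redirection should capture it; a sketch (the application name is a placeholder, and most of the trace output lands on stderr):

```
# Unix-like shell: enable tracing for a single invocation and capture stderr
COREHOST_TRACE=1 dotnet myapp.dll 2> host_trace.log

# Windows cmd.exe equivalent:
#   set COREHOST_TRACE=1
#   dotnet myapp.dll 2> host_trace.log
```

This at least gets the log into a file that can be attached to bug reports or parsed programmatically.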
@fir3pho3nixx thanks for raising this discussion. From the point of view of understanding the pain you, and others, are seeing in versioning and binding, I feel we have good insight to start with. From the point of view of your initial question, do you have any additional follow-up items? We currently have a number of similar threads tracking the work we are exploring in this space.
@jeffschwMSFT
Are you achieving harmony between full-fat frameworks and .NET Core?
What are the changes you are considering across the issues? Do we have a common standard?
Can library developers override this behavior?
@fir3pho3nixx right now we are gathering scenarios, so issues like these are very helpful. We have app, host and library authors in mind as we explore existing and new scenarios.
@jeffschwMSFT great news. thanks.
here is one that blocks us completely: https://github.com/davkean/maket/pull/2
This is especially a big problem with `FSharp.Core` on .NET Core, as it's basically a required dep. It's "the BCL" for F# and everyone needs it.
First off, FSharp.Core follows BCL versioning rules, a.k.a. no breakage and stay backward compatible.
Of course at some point newer versions of FSharp.Core are released. Some authors start pushing packages with newer FSharp.Core versions referenced in the assemblies and others still reference older versions, NuGet behavior takes over and unifies on the higher version (unless we pin on an older version ourselves).
Now say you are running inside an environment with a runtime store that only has the lower version, so you need to somehow get this running there.
First you try to do just that: we pin the package reference on an older version in our own project. We restore and get a NuGet downgrade error, which we then suppress as we know the packages are compatible. The project now at least publishes.
We now have published assemblies, and we'll try to run them inside said environment (which only has the old version in the runtime store). The host then refuses to load the older assembly, because it fails the `loadedversion >= assemblyversion` check. You're completely stuck now; there is no clean solution to this problem, and the workarounds all suck in terms of work.
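For reference, the pin-plus-suppress step described above looks roughly like this in the project file (version numbers are hypothetical; NU1605 is NuGet's "detected package downgrade" diagnostic, which the SDK raises as an error by default):

```xml
<!-- .csproj/.fsproj: pin FSharp.Core below what a dependency asks for -->
<PropertyGroup>
  <!-- Suppress the package-downgrade error, since we know these are compatible -->
  <NoWarn>$(NoWarn);NU1605</NoWarn>
</PropertyGroup>
<ItemGroup>
  <!-- Explicit pin on the older version present in the runtime store -->
  <PackageReference Include="FSharp.Core" Version="4.3.4" />
</ItemGroup>
```

This gets restore and publish working, but as noted above it does nothing for the runtime's own version check.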
If this scenario could be accounted for in a possible coreclr loader update that would be amazing.
> either BCL treatment for `FSharp.Core`

I am not sure what you mean by this. The BCL has the same behavior. E.g. if one of your dependencies references, say, .NET Core 2.1 and you override NuGet to tell it to run on .NET Core 2.0 anyway, you will get the same exception.
The basic `loadedversion >= assemblyversion` check is there for a reason. If we removed this explicit check, folks would get much-harder-to-diagnose type load or missing method exceptions.
If you know what you are doing, you should be able to override this check by subscribing to `AppDomain.CurrentDomain.AssemblyResolve`, and load and return the lower version even when a higher version is requested.
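A sketch of that override, assuming a hypothetical `OldVersions` folder next to the app that holds the lower-versioned binaries you know are compatible:

```csharp
using System;
using System.IO;
using System.Reflection;

static class LowerVersionFallback
{
    public static void Install()
    {
        // The handler runs only after the default binder has failed to satisfy the reference,
        // e.g. because only a lower version than requested is available.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);

            // Hypothetical folder containing the older, known-compatible binaries.
            var candidate = Path.Combine(AppContext.BaseDirectory, "OldVersions",
                                         requested.Name + ".dll");

            // Returning an assembly here overrides the loadedversion >= assemblyversion
            // failure: the runtime uses whatever we hand back, even a lower version.
            // Returning null lets the original resolution failure stand.
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };
    }
}
```

Call `LowerVersionFallback.Install()` early in startup, before any code path touches the affected assembly. This is very much a "you own the consequences" escape hatch: type load or missing method exceptions can still surface later if the versions really aren't compatible.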
That check is hurting so much right now. And it's not symmetrical, which is really weird.
@jkotas
It was meant as to how it's shipped; "BCL treatment" would have been better worded as "netstandard treatment": something to shield all package authors from underlying implementation versions.
A tooling mechanism to always burn in the 'reference implementation version' for the package/TFM (like netstandard), allowing all those assemblies to flow with your project version as a whole.
It's mostly a tooling problem, yes, but that's where we are.
Currently, any project referencing FSharp.Core just silently upgrades its implicit package reference after the installation of a new SDK (through new targets/props), which creates madness.
EDIT:
Maybe tooling should change the `TargetFSharpCoreVersion` value based on the TFM:
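Something along these lines, perhaps (the property name is taken from the comment above; the TFMs and version numbers are made-up illustrations, not real mappings):

```xml
<!-- Hypothetical targets fragment: pick the FSharp.Core version per target framework -->
<PropertyGroup Condition="'$(TargetFramework)' == 'netcoreapp2.0'">
  <TargetFSharpCoreVersion>4.2.3</TargetFSharpCoreVersion>
</PropertyGroup>
<PropertyGroup Condition="'$(TargetFramework)' == 'netcoreapp2.1'">
  <TargetFSharpCoreVersion>4.3.4</TargetFSharpCoreVersion>
</PropertyGroup>
```

That way the implicit reference would be pinned by the TFM you target, not by whichever SDK happens to be installed.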
> Currently any project referencing FSharp.Core just silently upgrades its implicit package reference after the installation of a new sdk
I agree that this is clearly broken. Installation of a new SDK should not be changing dependencies of your build output.
Is there an issue in the SDK or FSharp repos tracking this?
I think not but @KevinRansom would definitely know more.