Runtime: Handling p/invokes for different platforms and discussions about dllmap

Created on 5 May 2015 · 207 Comments · Source: dotnet/runtime

Right now, coreclr has no good way to handle the differences between platforms when it comes to p/invoking native libraries.

E.g. imagine I want to call an openssl function, on Unix the library is called libssl.so/dylib but on Windows it is libeay32.dll (there are countless other examples, e.g. libz.so vs zlib1.dll).

corefx "solves" this by conditionally compiling the p/invoke code for each platform with the correct library name. This seems like a major organizational overhead though and is brittle when new platforms are added.

Mono does two things to make this much easier:

  1. It automatically probes for variations of the string passed to DllImport, e.g. if I specify DllImport("myFoo") it tries myFoo.dll, myFoo.so, libmyFoo.so and so on (a rough sketch of this probing follows the list). In the happy case this is enough to find the library on all the platforms.
  2. DllMap: this is for cases where the library name is just too different, or if only a certain function should be redirected, or only on some specific platforms.
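
To make point 1 concrete, here is a rough sketch of that kind of probing (illustrative only, not Mono's exact candidate list or order):

using System.Collections.Generic;

static IEnumerable<string> CandidateNames(string name)
{
    yield return name;                      // "myFoo" verbatim
    yield return name + ".dll";             // Windows convention
    yield return name + ".so";              // bare Unix name
    yield return "lib" + name + ".so";      // conventional lib prefix
    yield return "lib" + name + ".dylib";   // OS X convention
}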

In my opinion, coreclr should implement something along those lines. I know that using DllMap 1:1 is not really possible since there's no System.Configuration stack in .NET Core, but it should at least guide an alternate implementation.

What do you think?

edit: I proposed adding DllMap attributes in https://github.com/dotnet/coreclr/issues/930#issuecomment-100675743 :

[assembly: DllMap("foo", "foo.dll", OSName.Windows)]
[assembly: DllMap("foo", "libfoo.dylib", OSName.OSX)]
[assembly: DllMap(Dll="foo", Target="libfoo.so", OS=OSName.Linux)]
[assembly: DllMap(Dll="dl", Name="dlclose", Target="libc.so", OS="FreeBSD")]

Most helpful comment

On Linux I found a practical workaround for this issue, which does not involve modifying the DllImports at all.

The idea is to generate a stub shared library that is named like the Windows DLL and contains a reference to the real library name for Linux. The DllImport in .NET code remains unchanged and uses the Windows DLL name (without .dll suffix). The .NET core native loader will then load the stub shared object. This action invokes the Linux dynamic loader (ld.so) which then resolves the dependency of the stub on the real library and automatically maps all symbols from the real library into the stub.

To generate a stub library do the following:

touch empty.c                                                                      # create an empty source file
gcc -shared -o libLinuxName.so empty.c                                             # empty placeholder carrying the real Linux library's name
gcc -Wl,--no-as-needed -shared -o libWindowsName.so -fPIC -L. -l:libLinuxName.so   # stub named like the Windows DLL, with a NEEDED entry on the Linux name
rm -f libLinuxName.so                                                              # drop the placeholder; at run time ld.so loads the real libLinuxName.so

The result can be checked using the readelf command:

$ readelf -d libWindowsName.so
Dynamic section at offset 0xe38 contains 22 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libLinuxName.so]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000c (INIT)               0x4b8
 0x000000000000000d (FINI)               0x600
...

Each stub is very small (8 KB) and thus can easily be included in cross-platform NuGet packages.
It is probably possible to generate the stub library with a simple invocation of ld since no compilation is actually involved, but I was not able to figure out the correct arguments to do that.
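
For what it's worth, an invocation along these lines might work (an untested guess; it assumes GNU ld will emit the DT_NEEDED entry from -l alone, with no input objects):

ld -shared --no-as-needed -o libWindowsName.so -L. -l:libLinuxName.so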

A real world example of this technique can be seen at https://github.com/surban/managedCuda/tree/master/StubsForLinux

All 207 comments

cc @stephentoub

Related to dotnet/runtime#4032

myFoo.so should never exist. All tools I know enforce the naming convention of having a lib prefix for every shared library. On the other hand things can be even more complicated by the ABI version suffix, i.e. libFoo.so.0.6.

@janhenke in theory yes, Mono just tries a few of the combinations that make sense.

This is an interesting problem to tackle. I'm curious: what is your experience with DllMap? What works well and what doesn't?

As long as we keep the assembly load context hook so that the host can decide how this works then I'm happy. I think the core clr should try to reduce as much policy/configuration as it can ootb (this reeks of binding redirects).

Doesn't the host also let you configure paths for dll imports? Maybe that can be extended to allow probing for more stuff (different extensions etc).

@sergiy-k I'm not really an expert on DllMap, but I haven't had too many problems in my limited use. The main problem I see is where to put the stuff (.config files don't exist anymore in this new world). I'd be happy with a first step that just probes for different variations if a lib can't be found by the verbatim string.

@davidfowl how would the application hook into that?

One more thing, some of the libs in aspnet avoid DllImport and instead load libraries manually via a library provided by the dnx that helps locate the native lib:

https://github.com/aspnet/KestrelHttpServer/blob/dev/src/Microsoft.AspNet.Server.Kestrel/Networking/PlatformApis.cs

And the usage:

https://github.com/aspnet/KestrelHttpServer/blob/dev/src/Microsoft.AspNet.Server.Kestrel/Networking/Libuv.cs

How would the app handle it? I have no idea. I don't see the core clr as an application model. I see it as a building block with configurable policy and a nice hosting API.

There will be a way to express native dependencies in project.json and in nuget packages. The dnx host will hook the event and do something clever :)

The main thing to keep in mind for this is compatibility.

This means the solution needs to assume you are running a compiled .Net application (.exe/.dll) and that it has hard-coded references to library names that work on windows. Adding platform specific class abstractions will not work in this scenario. The .Net binary is already compiled and source may not be available.

A good real world example is calls into the C runtime to "memcpy". The library name is entirely different on windows than other platforms: "msvcrt.dll" versus "libc.so". Name mangling won't fix it, and hard-coded substitution presents a maintenance burden to keep up with every new runtime version release.

With regard to name mangling, I would prefer to see a single, known and documented substitution take place: "MyFoo.dll" -> "libMyFoo.so". Trying more than one substitution as Mono does can open the door to file name collisions. The main issue present with this approach is case sensitivity. .Net code assumes a windows platform and is not case sensitive as unix is. So "MyFoo.dll" and "myfoo.dll" are not the same.

Taking all the above into account, the only realistic solution is a "DllMap" style configuration file that can be added alongside an already compiled binary.

For completeness, the proper way to specify a DllImport is to name only the library file name without extension. So the above example would be written as [DllImport("MyFoo")]. The platform specific suffix (.dll/.so) is added by the runtime. This would imply the correct behavior would be for the runtime to also add the platform specific prefix (lib). Anything beyond this would need to be specified by the "DllMap" style solution.
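
For illustration, a declaration following that convention (my_function and MyFoo are placeholders; the suffix is appended by the runtime as described above, while the lib prefix would be the proposed addition):

// Probed as MyFoo.dll on Windows and, with the proposed prefix rule, libMyFoo.so on Linux.
[DllImport("MyFoo")]
public static extern int my_function(int value);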

One other comment regarding .so versioning: This is usually solved by providing a symlink to the explicitly version-named library.
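
For example:

ln -s libFoo.so.0.6 libFoo.so    # unversioned symlink pointing at the ABI-versioned library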

.Net code assumes a windows platform and is not case sensitive as unix is.

I do believe this isn't the case. .Net is perfectly case-sensitive, but it's only the underlying filesystem implementation in Windows that's insensitive. I'm also pretty sure that the CLR doesn't assume anything, and even if it did, CoreCLR is built from the ground up to not assume these kinds of platform-specific things.

@OtherCrashOverride

This means the solution needs to assume you are running a compiled .Net application (.exe/.dll) and that it has hard-coded references to library names that work on windows. Adding platform specific class abstractions will not work in this scenario. The .Net binary is already compiled and source may not be available.

This isn't a concern as existing assemblies don't work on .NET Core anyway and need to be recompiled.

I agree that whatever the solution is it shouldn't rely on too much magic and be properly documented.

Let me expand that a bit:
[The authors of existing] .Net code assumes [operation on] a windows platform and [with a file system that] is not case sensitive as unix is.

This is specific to pre-existing .Net binaries that are authored by 3rd parties for which the option of recompiling is not available. It's the reason Mono implemented "DllMap".

Pre-existing binaries are not limited to just legacy. It's possible for someone to author new .Net Core assemblies that P/Invoke the C runtime today. It's also possible that this assembly is 3rd party and released without code.

Would it be possible to allow the library/module name of DllImport to permit a static readonly value somehow? That way we can point to where the library should be loaded in a class's static constructor. Right now attribute parameters are compile-time things so it would be a problem. However it would avoid the need of an external .config file, which apparently does not exist in .NET Core.

One other comment regarding .so versioning: This is usually solved by providing a symlink to the explicitly version-named library.

The key word here is "usually". While it holds true for many libraries, it is not really a formal requirement. You do have to expect encountering a shared library without a plain libFoo.so symlink. We should neither enforce creating such a symlink just for CoreCLR nor break in that case.

Overridable AssemblyLoadContext.LoadUnmanagedDll method was designed to allow customization of the DllImport target loading. The method gets the dllname and returns the OS handle. The user supplied algorithm can probe for .so versions, use dllmap-style binding, or do something clever like dnx ;-)

It would be nice to publish a sample that implements dllmap using AssemblyLoadContext.
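
A minimal sketch of what such a sample could look like (the mapping table and class name are illustrative; RuntimeInformation and the LoadUnmanagedDllFromPath helper postdate this comment, so treat the exact API surface as an assumption):

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Runtime.Loader;

// Dllmap-style load context: translate the name given to DllImport into a
// platform-specific library name before the OS loader sees it.
class DllMapLoadContext : AssemblyLoadContext
{
    static readonly Dictionary<string, string> Map = new Dictionary<string, string>
    {
        ["foo"] = RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "foo.dll"
                : RuntimeInformation.IsOSPlatform(OSPlatform.OSX)     ? "libfoo.dylib"
                : "libfoo.so"
    };

    protected override System.Reflection.Assembly Load(System.Reflection.AssemblyName name)
        => null; // defer managed assembly loads to the default context

    protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
    {
        if (Map.TryGetValue(unmanagedDllName, out string mapped))
            return LoadUnmanagedDllFromPath(mapped); // a full path is safer in practice

        return IntPtr.Zero; // zero falls back to the default probing logic
    }
}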

@jkotas That's nice, but I still think there needs to be something simple built in. E.g. dlopen() (which is what LoadUnmanagedDll boils down to) is implemented in libdl on Linux and OS X, but FreeBSD implements it in libc. I don't think every application should have to deal with that.

@akoeplinger Couldn't that be implemented in the PAL, or am I misunderstanding?

@akoeplinger It sounds reasonable to add a helper API on AssemblyLoadContext for it. dotnet/coreclr#937

I concur that whatever solution is presented needs to be 'built-in'. It's important that it be deterministic and consistent for everyone. If everyone 'rolls their own as needed using hooks', then library providers will not be able to offer support for anything other than their own 'loader', which may not be compatible or offer the features of a customer's proprietary 'loader'.

+1 for a simple convention-based plugin, -1 for anything like dll map. The core clr should keep the lightweight hosting model without inflicting policies and config files. Let the host implementation do that. Corerun/core console could bake something higher level in.

I have made a library for this, but it is not supported by AOT since it generates the implementation at runtime using Emit.

As a principle of encapsulation, if coreclr is the one implementing and defining the behavior of DllImport, then it is also responsible for allowing configuration of it. The host (corerun, etc.) should have no concern over the operational details of something that is outside its scope. This is a coreclr concern; making it a host concern is just "passing the buck" to make it "someone else's problem" instead of having to deal with it.

The need is that someone who is not a developer can xcopy a project to a target. If that target happens to be non-Windows, there is a simple and trivial method for them or a library provider to override a hard-coded library name assumption. This must work deterministically and consistently regardless of whether corerun is the host or the runtime is hosted as part of a larger application.

This must work deterministically and consistently regardless of whether corerun is the host or the runtime is hosted as part of a larger application.

Sensible defaults are always good and I think everyone is on board with that. Policy should not be baked into the CLR, and if it is, it needs to be minimal and overridable by the host.

It's impossible for the host to anticipate the usage or intent of DllImport in 3rd party libraries. Therefore, there will never be a need for the host to override behavior. It's entirely outside the scope of concern for the host.

This is a real world example for discussion:

[DllImport("msvcrt", EntryPoint = "memcpy", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
public static extern IntPtr Memcpy(IntPtr dest, IntPtr src, UIntPtr count);

The presented solution of "let the host worry about it" means every host would require knowledge that memcpy is not in msvcrt.dll on linux. Instead, it should load libc.so. Mono does include built in knowledge that msvcrt should be mapped to libc on linux, so this mapping is never required by the end user or author of the library using it.
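
For reference, Mono ships that knowledge in its machine-wide config file; the relevant entry looks roughly like this (abridged; exact contents vary by Mono version):

<configuration>
  <dllmap dll="msvcrt" target="libc.so.6" os="!windows"/>
</configuration>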

The presented solution of "rewrite the library to use an abstraction" has considerable impact for authors of libraries (code refactoring and testing) and places the burden of platform knowledge into every library using DllImport.

The "DllMap" solution is the only one presented so far that meets everyone's needs:
1) [assembly name].[exe | dll].config allows per-library rather than global overriding.
2) Overrides can be specified per entry point for systems like FreeBSD where a function may exist in a different library.
3) End users and integrators can author the mapping as needed.
4) New target platforms do not require libraries to be re-authored.
5) Existing code does not need to be re-authored.

Just to throw something more specific into the mix, the dnx is a core CLR host, it doesn't use corerun or core console. Instead, it uses the native host APIs directly and hosts the CLR in a native process. As part of that bootup sequence, we hook AssemblyLoadContext.LoadNativeDll and have a way to load native assemblies from NuGet packages. I'm not sure I want dll map at all in that situation since as the host, the app model may not even choose to expose the same primitives.

PS: It reminds me of binding redirects, which nobody wants ever again :smile:

So how does DNX cope with the above mentioned example of memcpy?

I should also point out that

[DllImport("libc.so", EntryPoint = "memcpy", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
public static extern IntPtr Memcpy(IntPtr dest, IntPtr src, UIntPtr count);

Is equally valid C# and works on Linux, but not on windows. A 3rd party library could contain either. So this is not just a "other platform problem". Its also a Windows problem. Is DNX prepared to handle that mapping?

No, but I said it's ok to have something extremely basic in the core clr for something as low as memcpy. Then again, if your library is calling memcpy or uname or anything directly then it should detect where it's running.

Btw there's a MemoryCopy method on Buffer now which is a better abstraction for that specific call.
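
That method is Buffer.MemoryCopy; a minimal usage sketch (requires compiling with unsafe enabled):

unsafe
{
    byte* src = stackalloc byte[16];
    byte* dst = stackalloc byte[16];
    // copy 16 bytes; destinationSizeInBytes guards against overrunning dst
    Buffer.MemoryCopy(src, dst, destinationSizeInBytes: 16, sourceBytesToCopy: 16);
}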

If writing my own loader is the solution at release, then I will. So will others. Some will follow Mono DllMap. Others will not. In the end there will be competing 'standards' causing frustration for library providers and customers. This is why I feel it's important to resolve this early. If I can only pick one platform when using DNX + P/Invoke, then it's going to be Linux.

You can write your own loader anyways. Look at the current ecosystem on Windows. That's how they do it (with an API call).

I just think they are different ways to write and distribute libraries, where nuget is going to be the main way to do that (the 80% case anyways).

We're talking about library authors here right?

Maybe we should call out the scenarios:

  • a native library available on the os (libc)
  • a native library that is compiled and made available on the LD_LIBRARY_PATH
  • a native library that is brought along with the package

It's not exclusive to library providers. It's just a real world example. It applies to CoreCLR as a hostable runtime both with ASP.Net and outside. There are a lot of scenarios where NuGet is not wanted, so a solution must be independent of that. CoreCLR is minimalistic, placing a greater importance on 3rd party libraries both managed and native.

Sure, that's why I called out the scenarios instead (the dnx isn't asp.net specific either but I won't get into that). I don't want to rathole, but thinking of the core clr as a mono replacement, that might be where there's confusion. Coreclr.dll/so/dylib is an API and Corerun is a native executable that uses that library. In the scenarios where nuget isn't wanted are you using Corerun? If so then sure, bake that policy into that specific host, not the library itself.

A Mono replacement is exactly how myself and many others think of it. Mono is architected in the same way: it has a host application and a runtime library that can be embedded in a different host application. Personally, I anticipate using Corerun far more than DNX. However, currently neither offers a solution to the 'memcpy' dilemma (the goal is not actually to copy memory; rather, it is just an example of a worst-case issue. Therefore, a memcpy alternative is not what is needed).

The world will not end if "DllMap" functionality is not included in CoreCLR. As mentioned before, I can make my own host app or even alter the runtime myself to support what I need. Unfortunately at that point, it becomes 'my product' and I have to maintain a fork that is incompatible with 'your product'.

I do ask that if there are no plans to offer 'DllMap' style support at CoreCLR release, that it be stated here succinctly.

Another way to express it is: If there is a Pull Request that adds "DllMap" functionality, would it be accepted?

That's a question for @jkotas :smile:

Many developers do see .NET Core as a replacement for Mono, and this excludes the use of ASP.net. CoreCLR itself is very attractive on platforms other than Windows. I think the .NET developers need to realize this more and be more open about its implications, or it may result in fragmented solutions to problems that were not (and could have been) tackled by the team.

When I said it's not a replacement, I meant it's not a drop-in replacement. Things do work differently (there's no GAC etc). Mono is basically a full CLR implementation; CoreCLR is much more bare bones, so a lot of the policy management you're used to just isn't there (no System.Configuration) and I think that's completely by design.

I just thought, why not add additional parameters to DllImport? For example:
[DllImport("foolib", ModuleLinux="foolib.so", ModuleMacOSX="foolib.dylib", ModuleWin="foolib.dll" ModuleFreeBSD="foolib"]

If the ModuleLinux attribute is not set for the runtime on linux, it instead binds using the first parameter. Same with the other platforms.

Midpoint solution that does not require the use of .config files.

This could indeed also be added to the .NET Framework and Mono.

If this is unacceptable and a .config file won't be added to CoreCLR, what other possible solution is there that doesn't involve modifying how CoreCLR works or overriding code? If it's the team's decision to offload such functionality to the user then so be it. What is the end decision?

The expanded form of DllImport still places the burden of platform knowledge on the library author. Additionally, it causes a runtime update for all platforms when a new platform is supported. "ModuleXYZ" would need to be added to the metadata.

An alternative on platforms that support it, like Linux, is using a symlink: msvcrt.dll -> libc.so.6. This does not account for situations where most functions are located in the same library but a single function may be in a different shared library with a different name. The alternative for that would be to compile a shared library wrapper that exports all the symbols and internally calls dlopen to pass through each function to the appropriate library. This significantly raises the barrier to entry for cross platform support.

My interpretation of the argument against implementing DllMap is that it could adversely affect DNX. It is also my understanding that DNX does not map DLL names and only uses the hook to change the location where a library is loaded from. It would seem that dotnet/coreclr#937 is designed to address this.

Ironically, the functionality provided by DllMap also fixes DNX since it will not work on platforms where the native library is not what the library author specified.

I should also state that this is not a matter of preference, such as policy. It's a matter of works or crashes. For a given platform and library there is one correct DllMap. The goal is to convey this information to the native dll loader. It's also important to remember this is not a Unix/Other problem, since a library may specify a linux, bsd, or osx library name for DllImport and still be considered correct.

It's a matter of works or crashes. For a given platform and library there is one correct DllMap.

@OtherCrashOverride isn't that possible with the host API?

As an alternative to the file-based dllmap, we could also encode it into the assembly that p/invokes directly via attributes (and using the new OSName API from https://github.com/dotnet/corefx/pull/1494):

[assembly: DllMap("foo", "foo.dll", OSName.Windows)]
[assembly: DllMap("foo", "libfoo.dylib", OSName.OSX)]
[assembly: DllMap(Dll="foo", Target="libfoo.so", OS=OSName.Linux)]
[assembly: DllMap(Dll="dl", Name="dlclose", Target="libc.so", OS="FreeBSD")]

The benefit of this approach is that it's much easier to implement in the runtime (no weird XML parsing) and still provides the familiar mapping logic (I think we could implement everything that Mono's dllmap currently supports via those attributes, and Mono could eventually include it as well).

The library author is in full control instead of needing to rely on the host to do the right thing. The only downside is that the library consumer can't modify the mapping should a new platform arrive, something which I personally could live with (by hoping that libraries that get ported to CoreCLR receive frequent updates due to community participation).
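
For concreteness, the attribute itself might be declared along these lines. This is purely a sketch: no such type exists yet, and because custom attribute arguments must be compile-time constants, OS is shown as a string here rather than the proposed OSName struct:

using System;

[AttributeUsage(AttributeTargets.Assembly, AllowMultiple = true)]
public sealed class DllMapAttribute : Attribute
{
    public DllMapAttribute() { }

    public DllMapAttribute(string dll, string target, string os)
    {
        Dll = dll;
        Target = target;
        OS = os;
    }

    public string Dll { get; set; }    // library name as written in DllImport
    public string Name { get; set; }   // optional: redirect only this entry point
    public string Target { get; set; } // library to load instead on the given OS
    public string OS { get; set; }     // platform the mapping applies to
}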

Thoughts?

Like it

isn't that possible with the host API?

It certainly is possible, but it requires every host to implement the knowledge of every DllMap that exists today or will exist in the future to work deterministically and consistently.

we could also encode it into the assembly that p/invokes directly via attributes

I considered scenarios involving injecting information into assemblies, they all have the same problem: The digital signature of the assembly is altered. You have no way to verify its authenticity.

What we are talking about with DllMap is a file that may optionally exist:

MyFoo.dll.config

This file contains the end user modifiable mapping information for multiple platforms. When a new platform comes along, you can add it to the contents of the file or leave it as is if you are not affected by it.

Library authors can include this file as part of their NuGet package. Its presence is benign in the case that the authored DLL name matches on your platform. If the runtime cannot find a match, then it consults this file for alternative names, not paths. It would even be possible to replace the single-file-name parameter of the current API (AssemblyLoadContext.LoadUnmanagedDll) with a proper class containing a dictionary of the mappings available for each platform. In this case the host can make informed decisions about loading the library while maintaining absolute control. The case where a host does not hook this call should default to the runtime loading the corrected name.

Maybe it's the name that is bad. How about if we call it:

MyFoo.dll.map

@OtherCrashOverride

I considered scenarios involving injecting information into assemblies, they all have the same problem: The digital signature of the assembly is altered. You have no way to verify its authenticity.

Just to make this clear: the library author explicitly adds these attributes to their assemblies, so they would be included when the signature is applied.

Maybe it's the name that is bad [...]

Yes, had the same idea. This doesn't solve the much bigger issue of implementing XML parsing in the runtime though.

the library author explicitly adds these attributes

The issue remains that it places the burden of platform knowledge on the author. A Windows author should not have to know OSX to release a library and vice versa.

This doesn't solve the much bigger issue of implementing XML parsing

It does not have to be XML. It can be a binary format as long as a tool is provided that allows end users and integrators to author and modify the file. Additionally, the expectation is that DllMap functionality for reading the config file will take place in managed code where System.XML, etc is available.

Additionally, the expectation is that DllMap functionality for reading the config file will take place in managed code where System.XML, etc is available.

System.Xml isn't available and is completely optional, so that code would have to be implemented in native code. The CoreCLR has System.Runtime (the new mscorlib); that's about it.

Couldn't we just compile something to a binary format using the constructs and metadata the runtime already understands?

the example was

[assembly: DllMap("foo", "foo.dll", OSName.Windows)]
[assembly: DllMap("foo", "libfoo.dylib", OSName.OSX)]
[assembly: DllMap(Dll="foo", Target="libfoo.so", OS=OSName.Linux)]
[assembly: DllMap(Dll="dl", Name="dlclose", Target="libc.so", OS="FreeBSD")]

Can we make that compilable and stand alone so that MyFoo.dll.map is actually an assembly?

@akoeplinger I like your example, I think it's a good solution. Better than mine (where you had to type more for each DllImport if you wanted to import more functions).

I would be fine with an offline tool that takes a XML, JSON, text file as input and compiles a .dll.map file as output.

@OtherCrashOverride Are you concerned about being unable to modify the DllMap of closed-source libraries? As @akoeplinger said, it seems to be the only downside to making it a compile-time thing. Someone will probably make a tool to modify and recompile DllMap attributes of an assembly anyway, as long as the assembly is not strongly signed, if it's that important.

Not once have I had the need to modify a dllmap in a .config file for a library or executable that was not mine; it is usually something the library creator should be worrying about. That's my experience at least, others' may be different.

Are you concerned about being unable to modify the DllMap of closed-source libraries?

The concern is that the author of libraries and the author of hosts may not take into consideration the platform I am running on when authoring. The concern is also that, as an author, I should not have to worry about every platform out there. This is not speculation about what may happen. It's a lesson learned from Mono.

DllMap (or similar) is functionality I require. As such, it is functionality that I will add if there is no official support. I am not begging for feature inclusion. I am attempting to negotiate so that I do not have to create an incompatible fork of the project before it's even released. The ideas and information presented here are very helpful for guiding the direction my own solution will take.

CoreCLR chose to become a member of the multi-platform community. This issue is an example of the responsibility of that choice.

To illustrate the point better, lets go back to the example:

[assembly: DllMap(Dll="foo", Target="libfoo.so", OS=OSName.Linux)]
[assembly: DllMap(Dll="dl", Name="dlclose", Target="libc.so", OS="FreeBSD")]

What happened to FreeBSD there? What happens when the next *BSD is brought up? Who is in a better position to map the dll? The library author or host author that had no idea the platform even existed? Or a third party with platform knowledge that can create a MyFoo.dll.map file from XML?

Unfortunately that's only true for a small subset of low level things that the core clr chooses to abstract. If your library decides to call into a random native library it's absolutely up to the library to get it right, not the host. Other platforms handle this differently and it might also make sense for the sake of completeness to look at those.

Other platforms handle this differently and it might also make sense for the sake of completeness to look at those.

We should definitely look at all the options available! However, someone is going to have be explicit about what those are so we can discuss them.

If the end solution is a DllImport replacement, then lets explore that.

Any author of a managed assembly that uses unmanaged native libraries will probably be aware of which OSes/platforms the native library in question supports anyway. There may be cases where a managed library could support another platform if the referenced native library exists on it, but how often do new platforms appear in the computing world that require such an action? Not to mention it would be limited by the defined set of platforms that are supported by CoreCLR. Was it really a big issue for Mono?

If a developer is targeting CoreCLR I'm sure they will be trying to add DllMaps for all the platforms supported by CoreCLR.

Here is another real world example:
https://github.com/mono/opentk/blob/master/Source/OpenTK/OpenTK.dll.config

<configuration>
  <dllmap os="linux" dll="opengl32.dll" target="libGL.so.1"/>
  <dllmap os="linux" dll="glu32.dll" target="libGLU.so.1"/>
  <dllmap os="linux" dll="openal32.dll" target="libopenal.so.1"/>
  <dllmap os="linux" dll="alut.dll" target="libalut.so.0"/>
  <dllmap os="linux" dll="opencl.dll" target="libOpenCL.so"/>
  <dllmap os="linux" dll="libX11" target="libX11.so.6"/>
  <dllmap os="osx" dll="openal32.dll" target="/System/Library/Frameworks/OpenAL.framework/OpenAL" />
  <dllmap os="osx" dll="alut.dll" target="/System/Library/Frameworks/OpenAL.framework/OpenAL" />
  <dllmap os="osx" dll="libGLES.dll" target="/System/Library/Frameworks/OpenGLES.framework/OpenGLES" />
  <dllmap os="osx" dll="libGLESv2.dll" target="/System/Library/Frameworks/OpenGLES.framework/OpenGLES" />
  <dllmap os="osx" dll="opencl.dll" target="/System/Library/Frameworks/OpenCL.framework/OpenCL"/>
</configuration>

This one is interesting too:
https://github.com/mispy/FNA/blob/master/FNA.dll.config

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <dllmap dll="SDL2.dll" os="windows" target="SDL2.dll"/>
    <dllmap dll="SDL2.dll" os="osx" target="libSDL2-2.0.0.dylib"/>
    <dllmap dll="SDL2.dll" os="linux" target="libSDL2-2.0.so.0"/>

    <dllmap dll="SDL2_image.dll" os="windows" target="SDL2_image.dll"/>
    <dllmap dll="SDL2_image.dll" os="osx" target="libSDL2_image-2.0.0.dylib"/>
    <dllmap dll="SDL2_image.dll" os="linux" target="libSDL2_image-2.0.so.0"/>

    <dllmap dll="soft_oal.dll" os="windows" target="soft_oal.dll"/>
    <dllmap dll="soft_oal.dll" os="osx" target="libopenal.1.dylib"/>
    <dllmap dll="soft_oal.dll" os="linux" target="libopenal.so.1"/>

    <dllmap dll="libvorbisfile.dll" os="windows" target="libvorbisfile.dll"/>
    <dllmap dll="libvorbisfile.dll" os="osx" target="libvorbisfile.3.dylib"/>
    <dllmap dll="libvorbisfile.dll" os="linux" target="libvorbisfile.so.3"/>

    <dllmap dll="libtheoraplay.dll" os="windows" target="libtheoraplay.dll"/>
    <dllmap dll="libtheoraplay.dll" os="osx" target="libtheoraplay.dylib"/>
    <dllmap dll="libtheoraplay.dll" os="linux" target="libtheoraplay.so"/>
</configuration>

how often do new platforms appear in the computing world that require such an action?

Pay attention to "soft_oal.dll". How would you map it? Anyone with platform knowledge can tell you it is an alternative name for "openal32.dll" on Windows. Can anyone tell me what it maps to on iOS?
Of course you can't because CoreCLR does not even know iOS is a platform. What about Android? Did the author of the library or host take that platform into account?

Actually, .NET Framework does not even support dllmap on Windows. So it would use whatever is supplied for the DllImport attribute and ignore the dllmap in .config. The soft_oal.dll/openal32.dll problem was easily solved for me by renaming those native libraries to the right one. (Many people rename soft_oal.dll to openal32.dll since many applications still expect that name).

I don't think FNA even supports Android, so why worry about it? OpenTK's base does not support Android; Xamarin's OpenTK fork, however, does, and you never need to edit .config files to get things working there.

I can see the point you're trying to make, but I still think that if a library author intends to support a platform they will add DllMaps accordingly with the best names possible. soft_oal.dll and openal32.dll are both acceptable.

The OSName parameter of the proposed solution could have additional names/enum values assigned to it to work with platforms outside CoreCLR. In which case the CoreCLR would ignore any of those mappings while other runtimes might use them. Such functionality would also be added to .NET Framework and Mono.

I also think this problem is primarily limited to closed-source third-party assemblies; both FNA and OpenTK could have the code edited accordingly by the user if a dllmap name is bad, in which case an issue should be brought up about it and the name hopefully fixed in the main repository.

Actually, .NET Framework does not even support dllmap on Windows.

This is the main fallacy of thinking here. When .Net was "Windows only" there was an implied agreement that you authored things for Windows and then made them work on Mono. That assumption is no longer true. A library may originate on OSX or Linux and now not work on Windows without DllMap functionality.

I can see the point you're trying to make, but I still think that if a library author intends to support a platform they will add DllMaps accordingly with the best names possible.

The point is there is a lot of domain specific knowledge that a host author may not be aware of and that a library author may not care about. This is about xcopy deployment that allows the same binary to run on multiple platforms with minimal effort: DllMap provides that functionality today. We are trying to find an alternative that is a better fit for CoreCLR. Excluding this functionality is not an option.

I should probably also be explicit about this:
1) the goal is not to copy memory as in the msvcrt.dll example
2) the goal is not to make FNA, OpenTK work.

Alternatives and/or excuses to these are not what is being sought here.

Nuget is the default distribution mechanism for .NET Core libraries. Nuget has support for distribution of platform-specific implementations of the same library. It is used pretty often today, e.g. to provide common APIs across devices. Platform-specific versions of https://github.com/dotnet/corefx are going to use the same Nuget distribution mechanism.

Is there anything really missing in the existing constructs - regular DllImport, conditional compilation, and distribution of platform-specific libraries via Nuget - to address the problem of authoring platform-specific libraries for .NET Core? What prevents FNA or OpenTK for .NET Core from being on this plan?

Hacking the existing binaries to make them run in places that they were not designed for is a different case. It depends on luck whether the result is going to work 100%. I do not think that the runtime should have a built-in support for it. In cases where it needs to be done as a shortcut to make something work, I think that it should be best addressed by IL rewriting tools, e.g. using Cecil. I would not recommend it as a production solution.

@jkotas IL has the benefit of being platform-agnostic. There are many native libraries (open source or not) that provide the same API across platforms. An embedded DllMap assembly setting would allow developers to create cross-platform, compile-once libraries/assemblies, as they have been doing with .NET Framework/Mono. If .NET Core looks at this a different way (i.e. there should be a separate assembly for each platform) then there will be additional overhead.

Multiple DllImport functions will need to be defined for each supported platform, and if the developer decides not to compile separate managed assemblies for each platform, run-time branching will occur in order to access the correct native function for that platform.

Using Nuget will help with the distribution problem if that's the case, but it still means we need to define multiple of the same DllImport with only the module name slightly changed based on the platform (foo.dll, foo.dylib, foo.so).

An embedded DllMap assembly setting would allow developers to create cross-platform, compile-once libraries/assemblies, as they have been doing with .NET Framework/Mono.

Very well said. That is indeed the need for this feature.

Is there anything really missing in the existing constructs - regular DllImport, conditional compilation, and distribution of platform-specific libraries via Nuget - to address the problem of authoring platform-specific libraries for .NET Core?

The main difference here is that we are not talking about distributing native binaries. We are talking about wrappers for existing native binaries. IF-DEF driven development puts us in the situation where the wrapper needs to be re-distributed for every platform. If I xcopy deploy, MyFoo.dll (managed wrapper) may be for Linux but my deploy target is Windows or OSX, so it crashes with a DllNotFoundException. This is very different from the CoreCLR libraries where the entire OS API is different from platform to platform and an abstraction makes more sense. OpenCL is the same API on every platform, but the name of the library is not. It's counter-productive to require everyone to recompile (so the #ifdef changes the library name) for each target platform.

We should stick to the simplest form of this: Compile once, run anywhere.

@xanather Appending platform specific suffix (foo.dll, foo.dylib, foo.so) is not a problem. It is done in CoreCLR already. The framework takes advantage of it. Check e.g. https://github.com/dotnet/corefx/blob/master/src/Common/src/Interop/Unix/Interop.Libraries.cs. This discussion has been about the complex cases that require custom platform-specific PInvoke rewriting.

@OtherCrashOverride xcopy deployment of self-contained apps from one platform to a different platform is not what .NET Core is designed for. The framework is app-local in .NET Core. It means that the raw app binaries are platform-specific because the partially platform-specific framework is included.

You will be able to xcopy deploy the project.json app, and then let dnx take care of loading the right platform-specific parts as necessary.

@jkotas, I can't believe I did not notice that. Does it automatically check other directories too based on the platform? I.e. if specifying [DllImport("OpenAL")] on Mac OS X, will it look for /System/Library/Frameworks/OpenAL.framework/OpenAL? There are some scenarios where the file name/location of the same native library on different platforms could be very different, even though the APIs are the same.

Edit:
I see that
https://github.com/dotnet/corefx/blob/master/src/Common/src/Interop/OSX/Interop.Libraries.cs
Has to explicitly type out the full name for an OS X library, so consuming OpenAL on multiple platforms using a singular DllImport is not possible in the current state? (could be wrong).

xcopy deployment of self-contained apps from one platform to a different platform is not what .NET Core is designed for.

If DllMap functionality is deemed to not be a good fit for CoreCLR, that is fine. As stated before, the world is not going end. What is requested, as mentioned earlier, is an official statement to that effect:
"DllMap functionality will not be supported and pull requests for it will not be accepted." Then everyone can then go on with their lives. ;-)

I'm starting to lose track of the discussion. I think the main reason is that you guys gave several good examples of things that should work or that caused issues in the past, but all those are now hidden in several pages of text. Would it be possible to summarize that and keep it up-to-date somehow?

The more I think about it, the more I like the .dll.map as an assembly. Not only does it make it unambiguously parsable to the runtime, it can also be digitally signed.

The key seems to be libraries that aren't xcopy deployed with the application but are instead made available on the "system". I think it's important for library authors to do "something" to improve their lives (dllmap/api something) to load the right native lib at the right time from the right place. DllImport is quite limited in this case

C# has the concept of "Extension Methods", what if we look at this as a concept of "Extension Assemblies"? A way to decorate an already existing assembly (with another assembly).

MyFoo.dll
(Managed wrapper assembly)

MyFoo.dll.[platform_name].dll

MyFoo.dll.Linux.dll
[assembly: DllMap(Dll="foo", Target="libfoo.so")]

MyFoo.dll.FreeBSD.dll
[assembly: DllMap(Dll="foo", Target="libfoo.so")]
[assembly: DllMap(Dll="foo", EntryPoint="bar", Target="libother.so", TargetEntryPoint="foobar")]

MyFoo.dll.Windows.dll
[assembly: DllMap(Dll="foo", Target="foo.dll")]

These details need to be better thought out, but as a rough starting point it solves a lot of problems.

@jkotas Appending .so/.dylib/.dll is only a small part of the game as the name may be completely different (see my example about libssl.so/dylib vs libeay32.dll). Of course one way to solve this is to conditionally compile the name and the DllImports and produce one assembly per platform, but as I said I think this is a major organizational overhead for library authors. @davidfowl's point is also a good one.

Built-in helpers to allow a single assembly to work an multiple platforms via e.g. simple DllMap attributes seems like a big win for very little work to me.

@Tragetaschen The problem description in my original post is still accurate I think.

@akoeplinger I think your proposed solution is the best one. It would help writing cross-platform assemblies that consume cross-platform native libraries big time - which I think is the main reason why this issue exists.

For those that can't find it, akoeplinger posted this:

[assembly: DllMap("foo", "foo.dll", OSName.Windows)]
[assembly: DllMap("foo", "libfoo.dylib", OSName.OSX)]
[assembly: DllMap(Dll="foo", Target="libfoo.so", OS=OSName.Linux)]
[assembly: DllMap(Dll="dl", Name="dlclose", Target="libc.so", OS="FreeBSD")]

@xanather, side-note regarding dllimport and renaming of binaries (I am not completely sure if it applies to P/Invoke and coreclr runtimes in general):

Windows presents some limitations with dllimport, which can be avoided by using 'delay load' approach. On compile, VC linker may throw LNK1194, if data is 'imported' from the executing binary by the 'imported binary'. In fact, there is probably no 'decent' way to fix this. Presumably, vtable needs to be somehow adjusted (can 'adjustor thunks' be modified to rename dllimport path?) and this dry article might have some pointers: http://www.cultdeadcow.com/tools/pewrap.html.

Case in hand, this pull request https://github.com/iojs/io.js/pull/1251 and many related bug reports in their issue tracker for: _renaming node.exe to random_name.exe or iojs.exe to myFavoriteTool.exe fails to load npm packages with C++ modules_.

On *nix and *BSD, there is no such issue and limited workarounds like delay loading are not required.

That reminds me of a related issue. When attempting to load a DLL that fails to load due to missing DLL dependencies, the exception thrown is DllNotFoundException. This causes lots of confusion with Mono's DllMap because it implies the DLL name (that was mapped) was not found, when in actuality there was an error loading the DLL due to a dependency.

This brings to light the question: do we need a DllLoadException to complement DllNotFoundException?

It is a pretty bad idea to embed the mappings in the binary. It might work for a handful of cases, but in real life, system libraries, versions and the spread of different versions is a problem that end users face.

This is really a convenient method for people deploying an app to adjust it to the idioms on the host operating system, something that usually the original developer did not keep in mind.

I must say I agree with @migueldeicaza here. I'm not saying using the mapping system mono uses is the way forward, but I do think the mappings should be allowed to exist outside of the assembly itself. Also, @davidfowl, you told me yesterday that there is work being done on how to deal with native assemblies in nuget packages, what if the mapping could live in nuget packages that you get the native images from? That way (at least for Core CLR) you wouldn't have to hack the PATH env variable.

what if the mapping could live in nuget packages that you get the native images from? That way (at least for Core CLR) you wouldn't have to hack the PATH env variable.

@Alxandr read the whole thread again :smile: .

Since it keeps getting brought up, I thought I should reiterate that NuGet is not an option for solving this problem. While CoreCLR development seems to currently be laser focused on ASP.Net, this issue is representative of the fact that not all consumers of CoreCLR will be using it for ASP.Net. NuGet is an extremely poisonous dependency for them: It is neither wanted nor allowed.

Well, if it is a bad idea to be integrated with the assembly itself and an external old-school .config file is ruled out, then the only other options that I see are:

  1. Add something just like .config but much more lightweight and is not implicitly loaded. Use XML parsing for it. A simple plain XML file which can be named whatever the programmer decides (.config could still be used, or .xml). Make it possible to call a method of the running domain to explicitly load such a file. Can be much more lightweight than a System.Configuration implementation and will only be used for dllmapping. This does mean CoreCLR will need to be able to parse XML files.
  2. Do what OtherCrashOverride mentioned and use satellite assembly to define mappings.
  3. Use some other type of file format (json?).
  4. Ditch the idea altogether which may cause unnecessary forks of CoreCLR and possible multiple-third-party implementations which try to emulate Mono's dllmap (this is guaranteed to happen, and already is, as CoreCLR is used more outside ASP.NET/NuGet).

Personally I think 1. is the best one, though I thought it was already ruled out.

We already opened Pandora's Box on this one. Currently, I plan to go with the satellite assembly approach for the reason that it can be digitally signed without introducing anything new to the runtime. Additionally, I plan to craft a command line tool to produce the assembly from Mono's XML format as an input.

It's worth mentioning that this is not without precedent in CoreCLR. Native images also use a similar technique: mscorlib.ni.dll

We should look at possibly aligning with the work being done there. (#1034, dotnet/coreclr#1035)

@OtherCrashOverride You keep calling it a "burden," but I don't see what's so burdensome about requiring the developer of a library to _know the names of his dependencies_. He's going to have to know that information anyway in order to build and test on the multiple platforms he supports, so that's not a problem.

But what about other platforms he doesn't support? That seems to be your major worry, and the answer is right there as part of the question: he doesn't support them. It's really that simple, and saying "but cross-platform!" doesn't change that.

I know if I built something that's supposed to run on Windows and Android, and someone started submitting bug reports against OSX, I'd close them as Not Supported because I don't have a Mac. I'd tell the bug report author that he knows where the repo is, and if he wants to make my code work on a Mac, he's free to _make my code work_ on a Mac, but that's not my problem. And once someone goes and makes the code work, they can contribute patches back to be integrated into the core library, and then the new library names for the new platform end up compiled into the source again and everyone's happy. So what's the problem?

So what's the problem?

This discussion has done a very good job of describing the problem. See the numerous examples such as 'memcpy' that are impossible for even a "burdened" developer who "knows the names of his dependencies".

the answer is right there as part of the question: he doesn't support them.

he's free to make my code work on a Mac, but that's not my problem.

This perfectly illustrates why this feature proposal is mandatory.

@OtherCrashOverride I read through the entire discussion before I replied, and I didn't see a single example of anything impossible.

If I'm using memcpy, I put in the names for the platforms I support. If the name of one of those libraries changes in a future version, then _they're not the same library anymore, and that's not supported._ Or are you honestly suggesting that your "compile once, run anywhere" assembly should seamlessly transition from the native libFoo v. X to libFoo v. Y with zero outside intervention? Because anyone who's ever actually worked extensively with native code can tell you that just won't happen; it's hard enough to keep compatibility between versions when you have self-describing assemblies with rich metadata to help the linker out!

So once we throw out silly straw cases like that, I don't see any impossibilities relating to not being able to know the future; you don't need to and you don't want to. Let the next version worry about what the future holds.

the answer is right there as part of the question: he doesn't support them.

he's free to make my code work on a Mac, but that's not my problem.

This perfectly illustrates why this feature proposal is mandatory.

I must be missing something, then, because the point I was making is that this perfectly illustrates why this is _not_ necessary. Mind filling in a few links in the chain of reasoning?

I didn't see a single example of anything impossible.

Currently we have DllImport that allows specifying a single library name. The C runtime has different names on different platforms. Hence, it is currently impossible to use "memcpy" cross-platform.

Or are you honestly suggesting that your "compile once, run anywhere" assembly should seamlessly transition from the native libFoo v. X to libFoo v. Y with zero outside intervention?

While not the intent of this proposal, it is certainly possible, as "memcpy" has the same API signature across several versions of the MS VC runtime as well as the Linux libc.so.X.

Because anyone who's ever actually worked extensively with native code can tell you that just won't happen

It is my belief that myself and others (such as @migueldeicaza who commented earlier) have worked with native code and are sufficiently competent and qualified to speak on the matter.

Mind filling in a few links in the chain of reasoning?

"he doesn't support them." and "but that's not my problem." are exactly why this feature is needed. Those who do support a platform and have the problem are exactly the ones that need a simple way to resolve it. Creating a GitHub account, cloning a repo, creating a branch, submitting a patch that will never be accepted upstream (because DllImport only allows one name), maintaining the patch across versions of upstream, publishing and maintaining a binary with the exact same name as upstream, etc is cumbersome. Its much easier to add a single "DllMap" file.

TL;DR - I don't care that you don't care about my platform or use of CoreCLR. You should not have to. This proposal saves you from that "burdon". #YourWelcome

[edit: I left it as flamebait. "Your" is indeed the improper spelling]

Creating a GitHub account, cloning a repo, creating a branch, submitting a patch that will never be accepted upstream (because DllImport only allows one name), maintaining the patch across versions of upstream, publishing and maintaining a binary with the exact same name as upstream, etc. is cumbersome.

Ah, I see. You think that because I don't believe in doing _your_ thing, that I think the appropriate solution is to do _nothing_? This is not correct; the status quo is obviously inadequate, but before you reprimand others about not reading the whole thread, you really ought to look at the proposal for a hypothetical [assembly: DllMap()] attribute. That would do a much better job of resolving this, because it would help resolve a more important underlying problem too: making sure it actually works.

Do you know what happens if I write and publish libAwesome for Windows, and then Bob McCodeMonkey repackages it with a patch to make it work on the Mac, and then due to subtle platform-level differences it doesn't actually _work_ on the Mac?

What happens is I end up getting dozens of Mac bug reports from clueless third parties who don't realize I had nothing to do with the process of making it work on the Mac, and Bob ends up getting dozens of bug reports about core functionality from Mac users even though the underlying problem was in my codebase, which I _don't_ get. Wires get crossed all over the place and productivity plummets. External patching is ideally not something we want to enable at all; _enshrining it as a core feature_ is sheer insanity!

The proper way to handle this is to make Bob go to the work of creating and submitting a patch (which would be accepted and merged because a DllMap attribute allows multiple resolutions to exist side by side) and then the new version with Mac support gets published. When Mac-specific bug reports come in, I assign them to Bob, who is officially on the radar now because he had to work with me to get his patch integrated, and when general-level bugs are found from Mac users, I get them and am able to fix them. Win/win.

before you reprimand others about not reading the whole thread, you really ought to look at the proposal for a hypothetical [assembly: DllMap()] attribute.

The conversation then continued to evolve and postulated allowing the attribute to exist in external "satellite" assemblies. We do not currently have the DllMap attribute. It is an artifact of this proposal.

What happens is I end up getting dozens of Mac bug reports from clueless third parties who don't realize I had nothing to do with the process of making it work on the Mac

The previously mentioned digital signature aspect of the proposal covers this case.

The proper way to handle this is to make Bob go to the work of creating and submitting a patch

As noted earlier in the thread, the scenarios include the case where the library is provided by a 3rd party vendor and source code is not available (and dis-assembly or modification of the binary is a violation of the EULA). This is a real world concern, not a hypothetical scenario. CoreCLR is not in a position to dictate dogma to businesses.

External patching is ideally not something we want to enable at all; enshrining it as a core feature is sheer insanity!

This is not a proposal for patching. It is a proposal to establish a standard method of communicating additional information to the runtime to assist the dynamic linker. It is not altering code or P/Invoke signatures. Also, as mentioned earlier, it is not without precedent: DNX already provides assistance to the dynamic linker and "satellite" files are used to load compiled native images.

Anecdotally, Mono has implemented a DllMap solution for many years. Chaos did not ensue plunging the world of open source into darkness. So the example of "Bob McCodeMonkey" given earlier may just be hyperbole.

Is this still happening? If there is too much friction to add an attribute-based DllMap feature, then how about what jkotas said a while back, but something more plugin-like and friendly.

jkotas commented on 6 May:

Overridable AssemblyLoadContext.LoadUnmanagedDll method was designed to allow customization of the DllImport target loading. The method gets the dllname and returns the OS handle. The user supplied algorithm can probe for .so versions, use dllmap-style binding, or do something clever like dnx ;-)

It would be nice to publish sample that implements dllmap using AssemblyLoadContext.

Right now I have no idea how to override AssemblyLoadContext correctly and I feel users of .NET Core should not have to do such a thing. Tutorials/examples would be appreciated.
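For illustration, here is a minimal sketch of such an override (untested; the mapping table and library names are placeholders, while LoadUnmanagedDll and LoadUnmanagedDllFromPath are actual AssemblyLoadContext members):

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Reflection;
    using System.Runtime.Loader;

    class DllMapLoadContext : AssemblyLoadContext
    {
        // Placeholder dllmap: logical name -> platform-specific file name
        static readonly Dictionary<string, string> Map =
            new Dictionary<string, string> { { "foo", "libfoo.so" } };

        protected override Assembly Load(AssemblyName assemblyName)
        {
            return null; // defer managed loads to the default context
        }

        protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
        {
            string mapped;
            if (Map.TryGetValue(unmanagedDllName, out mapped))
                return LoadUnmanagedDllFromPath(
                    Path.Combine(AppContext.BaseDirectory, mapped));

            return IntPtr.Zero; // zero means: fall back to the default probing
        }
    }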

What about exposing methods on an AppContext that allow you to set a custom callback/delegate which deals with loading native libraries based on the requested name? Returning IntPtr.Zero would force it to try to load the library the default/CLR-defined way. Otherwise, examples showing how to override the AssemblyLoadContext method should suffice.

Never mind, it's easier than I thought. For those that don't know yet, just pass an instance of a derived AssemblyLoadContext to https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Runtime/Loader/AssemblyLoadContext.cs#L247.

I think https://github.com/dotnet/coreclr/issues/937 should be addressed before usage of AssemblyLoadContext could be a proper solution to this issue.

Also, binding a custom AssemblyLoadContext doesn't seem to be actually possible at the moment with DNX (or I am just using it wrong: https://github.com/dotnet/coreclr/issues/1119 - there are currently no docs/examples on usage of this; it seems straightforward from the API, so maybe it is broken).

Copy/Paste of the relevant part of a discussion from dotnet/coreclr#1257

I have no issue with a policy that prevents a DllImport from happening unless it has been explicitly 'mapped' for the target platform. However, we need dotnet/coreclr#930 before that can happen. I imagine this is where DllImport (or its replacement) holds a token that represents a native library. There then is some explicit way to state, outside of the assembly, what library name to map it to.

[Edit]
Kind of like how strings are internationalized.

Looking further into dll maps as analogous to internationalization looks promising. The string for the full library name is stored in a resource file (resx) and, instead of a culture, a platform identifier is used. The resource files can be embedded in the assembly for platforms the author knows, and they can be satellite for platforms the author did not include.
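As a rough illustration of the analogy (purely hypothetical; no such convention exists today, and the resource base name and class are made up):

    using System.Resources;
    using System.Runtime.InteropServices;

    static class NativeBindingResources
    {
        internal static string ResolveLibraryName(string logicalName)
        {
            // Hypothetical: library names stored in an embedded resource,
            // keyed by "<logicalName>.<platform>" instead of by culture.
            var resources = new ResourceManager("MyAssembly.NativeBindings",
                                                typeof(NativeBindingResources).Assembly);
            string platform = RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
                ? "Windows" : "Linux";
            return resources.GetString(logicalName + "." + platform) ?? logicalName;
        }
    }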

Again, external patching through "satellites" is a bad idea, because it encourages end-user confusion about who supports what. Simply because you are not aware of instances of it having happened in Mono--a system that, frankly, never got a very large market share because both the .NET community in general and the Open Source/FLOSS community in general saw it as an "outsider," for their own different reasons--doesn't mean that it won't become a real problem when cross-platform CoreCLR, backed by Microsoft, becomes "official."

Comparing it to string internationalization is misleading, because hard-coded strings do not change the behavior of a program (unless of course they're configuration, scripts, or similar, which generally don't get internationalized), but different versions of external libraries do.

We could add a flag as a parameter to the new attribute that the library author could specify to forbid using external resources for resolution. However, it starts to sound a bit like Digital Rights Management at that point.

At the risk of sounding impudent, I don't think Microsoft actually cares about or even wants this feature. So it's likely we will end up with everyone rolling their own solution. In that event, anybody can allow or disallow whatever features they want.

Wait, you don't think Microsoft wants _what_, specifically? Because some authoritative way to resolve cross-platform imports is obviously needed for cross-platform, and if they didn't want cross-platform, we wouldn't see all the other supported platforms prominently displayed on the main page of the project...

They (Microsoft) have the cross-platform solution they want in dotnet/coreclr#1257.

...which they appear to be reconsidering now that the security implications of that "solution" have been pointed out.

@OtherCrashOverride I haven't seen anyone from our company say that dotnet/coreclr#1257 is our final solution. From what do you deduce it is that way? But it doesn't seem we have settled on a reasonable final solution yet and we need something in the meanwhile until we get there.

From what do you deduce it is that way?

@janvorli
Mainly from neglect. This issue was opened May 5. On May 9, in this issue, a response from Microsoft was requested. It's now July 17. During that time I would have expected someone to say "we are looking into this" if that were the case.

My intent is not to be antagonistic. However, people are trying to run businesses and need to put plans into motion. The importance of that statement is the following: None of this is personal.

we need something in the meanwhile until we get there.

I acknowledged that in the other issue. I am also supportive of that in the other issue.

The goal is to work together to get something done on this. However, it's been a very one-sided conversation.

I started prototyping this. It can be done without any changes to the runtime; it just becomes more verbose than DllImport. "NativeLibrary" is the class that handles everything transparently. It's an abstract base class with static methods. New platforms just subclass it and everything works (at least in theory; this is still just a prototype with PosixNativeLibrary and Win32NativeLibrary).

    class CRuntimeInterop
    {
        static string LibName = "libc.so";


        static CRuntimeInterop()
        {
            // In the future, this will be read automatically from somewhere
            NativeLibrary.ProcessMapping(PlatformID.Win32NT, LibName, "memcpy", "msvcrt.dll", "memcpy");
        }


        #region memcpy

        // Calling convention and [MarshalAs] from DllImport go here
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        delegate IntPtr memcpy_prototype(IntPtr dest, IntPtr source, IntPtr count);

        // Keep the delegate to speed up subsequent calls
        static memcpy_prototype memcpy_delegate;

        // Expose the native function
        public static IntPtr Memcpy(IntPtr dest, IntPtr source, IntPtr count)
        {
            if (memcpy_delegate == null)
            {
                // Demand load the delegate
                memcpy_delegate = NativeLibrary.Map<memcpy_prototype>(LibName, "memcpy");
            }

            return memcpy_delegate(dest, source, count);
        }

        #endregion
    }

Currently it uses DllImport to call the OS-specific library routines. This can be changed to call the PAL instead, eliminating the need to subclass.

I should also note that a requirement is that it be AOT compatible. So I have avoided doing IL patching or on-the-fly generation from stubs.

Sacrificing the demand loading and adding a custom attribute yields a more compact representation.

    class CRuntimeInterop
    {
        const string LibName = "libc.so";


        static CRuntimeInterop()
        {
            // In the future, this will be read automatically from somewhere
            NativeLibrary.ProcessMapping(PlatformID.Win32NT, LibName, "memcpy", "msvcrt.dll", "memcpy");
            NativeLibrary.Bind(typeof(CRuntimeInterop));
        }


        #region memcpy

        // Calling convention and [MarshalAs] from DllImport go here
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        public delegate IntPtr memcpy_prototype(IntPtr dest, IntPtr source, IntPtr count);

        [NativeLibrary(LibName, "memcpy")]
        public static readonly memcpy_prototype Memcpy;

        #endregion
    }

Calling it is no different than calling a DllImport function.

CRuntimeInterop.Memcpy(handleDest.AddrOfPinnedObject(), handleSource.AddrOfPinnedObject(), (IntPtr)source.Length);

This approach yields the fastest method available to P/Invoke. The sacrifice is the additional binding time in the static constructor (cctor). Additionally, if an entry point is not found, the type initializer fails.

As a bonus, this is completely 100% compatible with full .Net and Mono.

Here it is refactored a bit and mappings moved to an attribute:

    class CRuntimeInterop
    {
        const string UnixLibName = "libc.so";
        const string WindowsLibName = "msvcrt.dll";


        static CRuntimeInterop()
        {
            NativeLibrary.Bind(typeof(CRuntimeInterop));
        }


        #region memcpy

        // Calling convention and [MarshalAs] from DllImport go here
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        public delegate IntPtr memcpy_prototype(IntPtr dest, IntPtr source, IntPtr count);

        [NativeLibraryBinding(PlatformID.Unix, UnixLibName, "memcpy")]
        [NativeLibraryBinding(PlatformID.Win32NT, WindowsLibName, "memcpy")]
        public static readonly memcpy_prototype Memcpy;

        #endregion
    }

It appears that delegates won't take "__arglist", so this method loses C varargs support (unless there is a clever hack).

I added external mapping support as an AssemblyName.NativeBindings.xml file. An unexpected change from DllMap is that, since there is no longer a single mapping, you cannot map by native library name. Instead, the mapping is done on Type and Field names.

<?xml version="1.0" encoding="utf-8" ?>
<NativeBindings>
  <CRuntimeInterop>
    <Printf platform="Win32NT" libraryName="msvcrt.dll" entryPoint="printf"/>
  </CRuntimeInterop>
</NativeBindings>

With the mapping now done on the field name, the entry point name can become optional, as it is with DllImport. The following illustrates converting DllImport to the new format:

        //[DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
        //public static extern IntPtr memset(IntPtr dest, int c, int count);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl, SetLastError = false)]
        public delegate IntPtr memsetDelegate(IntPtr dest, int c, int count);

        [NativeLibraryBinding(PlatformID.Win32NT, WindowsLibName)]
        public static readonly memsetDelegate memset = null;

Some libraries, like OpenGL, may optionally include certain entry points. I added support for this with a new "IsOptional" flag on the attribute. If the entry point is not found, the delegate remains null. This makes it possible to test in code for its absence, and it results in a NullReferenceException if the calling code tries to use it.

        public delegate void glBufferStorageEXTDelegate (int target, IntPtr size, IntPtr data, int flags);

        [NativeLibraryBinding(PlatformID.MacOSX, LibName, IsOptional = true)]
        public static readonly glBufferStorageEXTDelegate glBufferStorageEXT = null;

(Note: Unlike the previous examples, the call was not actually tested)

What are the performance implications of replacing extern with static delegates?

That is a question I asked from the start too. I will eventually benchmark. Since there was no other option available to me, performance was irrelevant; it was a question of having code that worked or did not.

The next milestone is to create a function that automatically converts old DllImport code to this new format. Once I have that in place, I can test against real-world use. I will also be in a better position to provide overall (does it have any noticeable impact?) benchmarking information.

Got the conversion program going and the first issue to present is that DllImport allows function overloading but each delegate and field must have a unique name. This mainly impacts convenience functions where a C# friendly DllImport is added using the same name with a different signature.

Simple benchmark

100,000 Iterations each of memcpy (Release)
Time in seconds

DllImport
-------
0.0034425
0.0033392
0.003338
0.0033484
0.0033807
0.003338
0.003338
0.003338
0.0033419
0.003338
0.003338
0.003338
0.0033553
0.0033392
0.0033384
0.003338
0.003338
0.0033469
0.0033377
0.0033377


Delegate
--------
0.007208
0.0071596
0.0072514
0.0059994
0.0054384
0.005438
0.0053585
0.0054384
0.0054341
0.0054384
0.005438
0.0054218
0.0054384
0.0054384
0.0054303
0.0054384
0.005438
0.0054825
0.0054403
0.005438

I integrated the prototype with some projects and the results are good. Therefore, I feel confident suggesting the following proposal:

Add a new attribute to CoreCLR called NativeLibraryImportAttribute. This new attribute takes the place of DllImport in cross-platform code. DllImport should remain for compatibility reasons until such time as it can be marked [Obsolete].

The NativeLibraryImportAttribute takes the same form as the current DllImportAttribute, adding a single mandatory platform specifier as well as allowing for multiple instances of the attribute:

[NativeLibraryImport(PlatformID, LibraryName)] is its minimum form. It is undefined behavior to specify multiple attributes with the same PlatformID for a single import (a possible future use could be to allow for specifying alternate names on a single platform). All the current optional properties of DllImport should be included, such as EntryPoint and CallingConvention. The runtime should process only the attribute that matches the actual PlatformID that it is running on, ignoring all others. If no PlatformIDs match the runtime environment, an exception should be raised in the absence of a mapping mechanism. Additionally, DllNotFoundException should be replaced with a more appropriately named exception such as NativeLibraryLoadException.

A future amendment to this proposal should specify a mechanism and format for loading [NativeLibraryImport] attributes externally. This mechanism should be end-user accessible. Any item specifying a [NativeLibraryImport] is considered eligible for mapping. Maps are only consulted in the event that a PlatformID matching the current runtime environment is not found. If, after consulting a map, a match for the current platform cannot be found, an exception should be raised. A map consists minimally of an Export Name (the same as specified by 'extern'), PlatformID, and LibraryName, and optionally includes any or all of the [NativeLibraryImport] properties.

I forgot to explicitly mention that with [NativeLibraryImport], there is no name mangling or substitution performed. The library name specified is the name sent to the dynamic linker for loading. This means the name should include any platform prefix and suffix expected.

[edit]
I should also note that the entry point binding behavior is the same as that of DllImport (lazy). An export that is never called by the runtime is never resolved.

[edit2]
PlatformID should be an independent enum specified and versioned for use with [NativeLibraryImport] only. This will allow for rapid adoption of new values to meet needs not anticipated by a more generic platform specifier.
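To make the minimum form concrete, a declaration under this proposal might look like the following (the attribute is hypothetical, and the zlib names echo the example at the very top of this thread):

    [NativeLibraryImport(PlatformID.Win32NT, "zlib1.dll", CallingConvention = CallingConvention.Cdecl)]
    [NativeLibraryImport(PlatformID.Unix, "libz.so.1", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zlibVersion(); // returns a pointer to the version string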

Got the conversion program going and the first issue to present is that DllImport allows function overloading but each delegate and field must have a unique name. This mainly impacts convenience functions where a C# friendly DllImport is added using the same name with a different signature.

Not in my experience. When you have a native function that takes a pointer to a struct as input, in which NULL is also a valid value, you need two overloads that both have the DllImport attribute. One takes the relevant argument as ref MyStruct and the other takes it as IntPtr. If the delegate-based code won't work for this pattern, it's going to break a lot of native imports.
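For concreteness, that pair of overloads typically looks something like this (illustrative library and struct names):

    struct MyStruct { public int Flags; }

    [DllImport("somelib", EntryPoint = "configure")]
    static extern int Configure(ref MyStruct options);

    [DllImport("somelib", EntryPoint = "configure")]
    static extern int Configure(IntPtr options); // call with IntPtr.Zero to pass NULL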

When you have a native function that takes a pointer to a struct as input, in which NULL is also a valid value, you need two overloads that both have the DllImport attribute.

Maybe I am just not explaining things well. Yes, real world code has the same extern with different signatures. That is currently possible with [DllImport] and would also be possible with the proposed [NativeLibrary]. [DllImport] is what we have today, [NativeLibraryBinding] is the prototype created to explore a solution to this issue, and [NativeLibrary] is the resulting proposal for inclusion into CoreCLR.

As of today, I plan on evolving [NativeLibraryBinding] as my end solution. The real world testing does not show any statistically meaningful impact on workloads that are not API 'bound'. This will hopefully divorce me from the frustration, drama and neglect surrounding this issue.

If the delegate-based code won't work for this pattern, it's going to break a lot of native imports.

The situation was discovered as a result of testing with real-world code. The problem is no different than that faced by C libraries, as they too cannot have overloads. The resolution is to change the name in some way, as C++ does (name mangling). For this test, I simply added an index ordinal: someDelegate, someDelegate2, someDelegate3 to represent the different delegate and field names. The "EntryPoint" property is used to ensure they all call the same function.

it's going to break a lot of native imports.

A lot of native imports are ALREADY currently broken with [DllImport]. The information I presented was done so in good faith. It would be nice to see CoreCLR adopt a solution to this issue; however, it's no longer of consequence if it does not. Based on the assumption that new APIs do not make it into .Net overnight, I am skeptical of CoreCLR including any resolution to the matter other than the current name-mangling and see-what-doesn't-fail approach of [DllImport].

@OtherCrashOverride
I like the way this evolves. However, I'd always prefer anything but XML for the configuration. ASP.NET 5 went to great lengths to avoid the entire XML stack if not explicitly needed by the application, and this would bring that back on non-Windows platforms using Kestrel.

I too would like a non-XML solution. In that regard, the initial [NativeLibrary] proposal does NOT specify any external mapping. XML was used in my prototype to have something to test the external mapping design with.

It should also be noted that it's possible for code to include both [DllImport] and [NativeLibrary] attributes on the same export. This gives us maximum backwards compatibility. In the CoreCLR case, it should prefer the [NativeLibrary] attribute. On older runtimes, the new [NativeLibrary] attribute will be ignored.

This is an example of how [NativeLibrary] and [DllImport] would be used together:

[DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[NativeLibrary(PlatformID.Win32NT, "msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[NativeLibrary(PlatformID.Unix, "libc.so.6", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
public static extern IntPtr memset(IntPtr dest, int c, int count);

The following can fall back to [DllImport] in the absence of a [NativeLibrary] for a specific platform.

[DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[NativeLibrary(PlatformID.Unix, "libc.so.6", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
public static extern IntPtr memset(IntPtr dest, int c, int count);

All right, that example makes a lot of sense. So what's all the delegate stuff above about, then?

So what's all the delegate stuff above about, then?

It was a prototype to prove the design concept and get to [NativeLibrary]. It's not a proposal for CoreCLR (by me).

[NativeLibrary] is [DllImport] with:
1) An added platform specifier that explicitly states what the author intended.
2) Multiple instances. You can have more than one attribute to specify the import for more than one platform.
3) Elimination of name mangling. The library name specified is explicit. The runtime will never try to interpret or process it. It is passed as-is to the dynamic linker.

The following are equivalent:

[DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[NativeLibrary(PlatformID.Win32NT, "msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]

The following may not be equivalent:

[DllImport("msvcrt", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[NativeLibrary(PlatformID.Win32NT, "msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]

The following will fail with a NativeLibraryLoadException:

[NativeLibrary(PlatformID.Win32NT, "msvcrt", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]

LGTM

@OtherCrashOverride So, for fun, I've done an experiment with autogenerating the delegates and such using Roslyn and a DNX precompiler (the end result can be used (in theory at least) by any framework, but it needs to be built using DNU).

It allows me to create a class like this:

    abstract class FooBar : NativeMethods
    {
        [NativeMethod("somelib", CallingConvention.Cdecl, BestFitMapping = false)]
        public abstract void Test();

        [NativeMethod("otherlib", CallingConvention.Cdecl)]
        public abstract int OtherTest(bool foo);
    }

And it generates an implementation class at compiletime:
(screenshot: the generated implementation class)

The method GetNativeMethodDelegate you see in the screenshot is virtual (defined in NativeMethods), so you can override it in FooBar to change the binding logic to whatever you want it to be, and it binds lazily just like [DllImport].

The inheritance constraint is an obstacle in some places. A class may already be inheriting from something that also happens to have some P/Invokes. An immediate example that comes to mind was in the prototype where I have a NativeLibrary base class and Win32NativeLibrary/PosixNativeLibrary inheriting from that. Each subclass specifies its own P/Invoke into the appropriate dynamic loader.

@OtherCrashOverride generally, if you need that, what I do is just create a separate class with just the P/Invokes, and have an instance of it in your class. I don't ever see a reason why you need the P/Invoke APIs in any given class, as they should never use `this`. I mean, normally, with [DllImport] I always make the methods static anyway. And they are always private/internal.

The impact on existing codebases was also taken into consideration. With the proposed [NativeLibrary] attribute, many codebases can be converted over with a find/replace.

Find: [DllImport(
Replace: [NativeLibrary(PlatformID.Win32NT,

@OtherCrashOverride I did not give this as a solution. I gave this as a "I can make stuff work while the politicians discuss what possible solution they could maybe want sometime in version 3" ^^

@OtherCrashOverride I like the idea behind your proposal. I think it should be built as a tool or library decoupled from the core runtime.

Ideally, it should generate code at compile time as @Alxandr suggested. Generating code at compile time is preferred because it has better runtime startup performance and it tends to be more AOT-friendly.

We have been working towards decoupling most of the interop engine from the core runtime and moving it into an independent component - MCG (Marshalling Code Generator). More details about how MCG works are in this blog post written by @yizhang82. We plan to open source MCG in the future.

Multiple different interop code generators can co-exist. Each project can choose the interop technique that best fits its needs. SWIG or the SharpDX interop generator are examples of different interop code generators that exist today.

We are open to adding helper APIs into the core runtime that are hard or impossible to build independently. An example of such APIs is ICastable that allows COM types to participate in casting, without having the intricate COM casting logic in the core runtime. Let us know if you see a need for more APIs in this category.

Ideally, it should generate code at compile time as @Alxandr suggested. Generating code at compile time is preferred because it has better runtime startup performance and it tends to be more AOT-friendly.

I think there is some miscommunication here. I explicitly stated:

I should also note that a requirement is that it be AOT compatible. So I have avoided doing IL patching or on-the-fly generation from stubs.

The [NativeLibraryBinding] was a prototype to explore the real-world needs of this issue. It does not generate any code. It does not alter or produce any IL. Behind the scenes it uses Marshal.GetDelegateForFunctionPointer, which ultimately calls into the runtime itself:
https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Runtime/InteropServices/Marshal.cs#L2522

The prototype using delegates exhibits three undesirable facets. 1) __arglist is impossible to use. 2) Delegates cannot be overloaded. 3) It's slower than [DllImport].

This leads to the actual proposal, which is a modernization of the existing [DllImport] support in the runtime. This new behavior is associated with a new attribute called [NativeLibrary]. It is meant as a [DllImport] replacement utilizing the existing code in the runtime for [DllImport], updated to support the new behavior. It's simply an evolution of [DllImport] with consideration given to future mapping needs.

After reading the MCG article, it does not appear that it will meet the needs of this issue. It relies on the dynamic linker reading the library binding from the .dll as any other native .dll would. This still leaves us with the dilemma of HOW that binding should be specified (msvcrt.dll vs libc.so.6).

SWIG and SharpDX perform an entirely different function. They both generate the [DllImport] from an external file such as a C header file. This is not the problem this issue is trying to solve.

For PInvokes, MCG takes high-level description of the PInvoke specified as DllImport and generates low-level code for it. For example, if MCG sees:

[DllImport("mydll.dll")]
extern static public void foo();

It can generate C# code like (the code fragment is similar, but not what exactly happens today):

static IntPtr p_foo = IntPtr.Zero;

public static void foo()
{
    if (p_foo == IntPtr.Zero)
        p_foo = init_foo();

     // Marshalling code if there are any arguments

     // Intrinsic that is expanded into calli IL instruction by .Net Native toolchain
     CalliIntrinsics.Call(p_foo);    

     // Unmarshalling code if there are any return values or cleanup
}

private static IntPtr init_foo()
{
    IntPtr libraryHandle = McgHelpers.LoadLibrary("mydll.dll");
    return McgHelpers.GetProcAddress(libraryHandle, "foo");
}

This C# code is then linked into the .dll that uses the PInvoke.

The same general pattern is used by other interop code generators that I have mentioned: take a concise high-level description of methods that need interop, generate a pile of low-level boilerplate code from it.

For your interop code generator, the generated code can be similar. E.g. the generated code for this specification:

[NativeLibrary(PlatformID.Win32NT, "msvcrt.dll", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
[NativeLibrary(PlatformID.Unix, "libc.so.6", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
extern static public void foo();

Can be something like - using delegates underneath:

public static void foo()
{
    if (p_foo == null)
        p_foo = init_foo();

     p_foo();
}

delegate void foo_delegate();

static foo_delegate p_foo = null;

private static foo_delegate init_foo()
{
    if (NativeLibraryHelpers.IsWin32NT)
    {
        IntPtr libraryHandle = NativeLibraryHelpers.LoadLibrary("msvcrt.dll");
        IntPtr p = NativeLibraryHelpers.GetProcAddress(libraryHandle, "foo");
        return Marshal.GetDelegateForFunctionPointer<foo_delegate>(p);
    }
    if (NativeLibraryHelpers.IsUnix)
    {
        IntPtr libraryHandle = NativeLibraryHelpers.LoadLibrary("libc.so.6");
        IntPtr p = NativeLibraryHelpers.GetProcAddress(libraryHandle, "foo");
        return Marshal.GetDelegateForFunctionPointer<foo_delegate>(p);
    }
    throw new PlatformNotSupportedException();
}

As you have noticed, the delegates have some overhead compared to raw PInvoke. This overhead can be avoided by using calli intrinsic like MCG generated code. Unfortunately, CoreCLR does not have the CalliIntrinsics readily available today. We can explore what it would take to add it. If it is done right, it should fix the problem with calling vararg functions as well.

@Alxandr The abstract/virtual isn't really needed. You could just replace the existing class with a rewritten one using Roslyn.

@Tragetaschen That (I think) would be problematic with regards to how you write it. Like, do you just stub the native methods? The way I'm doing it I don't have to alter anything, I just create new classes. Simpler IMHO.

I think that the hard part in both approaches (creating new overrides vs. filling in the implementation for existing methods) is the need to invoke Roslyn twice: first to identify methods that are annotated with the attributes, and a second time to compile it for real with the generated interop.

Ideally, Roslyn would support code producers/rewriters to make this simpler in future (similar to how it does code analyzers today).

Performance-wise, creating new overrides will be slower at runtime because of virtual call overhead.

This discussion, though educational, has somewhat lost its way. The basis of this issue is:

[DllImport("mydll.dll")]
public static extern void foo();

If that is compiled into a CoreCLR managed program called "Bar.dll", how do I make that work on a non-Windows platform?

An easy method for library authors to specify a non-Windows platform mapping is desired. Additionally, an easy way for end-users to specify non-Windows platform mappings is desired.

Any type of code generation makes this a more difficult problem to solve for both parties considering the following: 1) the author may not support the end-user's target platform. 2) The end-user may not have access to the author's source code. If there is code generation involved, it needs to be as transparent as [DllImport] is today.

The compromise proposed is:

[NativeLibrary(PlatformID.Win32NT, "mydll.dll")]
public static extern void foo();

This gives the library author the ability to explicitly support platforms. Additionally, it provides the information and behavior necessary for a mapping mechanism to allow the end user to specify the mapping. The justification for this is presented earlier in this lengthy discussion.

It's important that this be a standard and uniform part of the runtime. This is necessary to allow a 3rd-party ecosystem of libraries to thrive. Additionally, it simplifies parts of CoreCLR itself (search the codebase for [DllImport]).

While I am thinking about it ...

Some libraries include the bitness in the name:
"SomeLibrary32.dll" and "SomeLibrary64.dll".

Two ways to support this immediately come to mind:

PlatformID should be an independent enum specified and versioned for use with [NativeLibraryImport] only. This will allow for rapid adoption of new values to meet needs not anticipated by a more generic platform specifier.

We can have PlatformID.Win32NT and PlatformID.Win64NT.

or

It is undefined behavior to specify multiple attributes with the same PlatformID for a single import (a possible future use could be to allow for specifying alternate names on a single platform).

[NativeLibrary(PlatformID.WinNT, "SomeLibrary32.dll")]
[NativeLibrary(PlatformID.WinNT, "SomeLibrary64.dll")]
public static extern void foo();

I am not sure which would be preferable.

@jkotas DNU has this capability. I transform the Roslyn compilation object _during_ build. No msbuild magic.

I am not sure which would be preferable.

I would prefer the way that makes it unambiguous. WinNT 32-bit is a different OS than WinNT 64-bit, even though they're mostly compatible. The bitness of associated DLLs is one specific way in which they are _not_ compatible.

One simple solution would be to have PlatformID be a [Flags] enum, allowing you to specify the same name for multiple similar architectures. Certain groups could even be predefined for convenience, like WinNT = WinNT32 | WinNT64. This would, however, limit the number of supported platforms to 64. Any realistic estimates as to how likely that is to be a problem?
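A sketch of that idea (illustrative values only; this enum does not exist anywhere today):

    [Flags]
    public enum NativePlatform : long
    {
        WinNT32 = 1 << 0,
        WinNT64 = 1 << 1,
        WinNT = WinNT32 | WinNT64,
        Linux32 = 1 << 2,
        Linux64 = 1 << 3,
        Linux = Linux32 | Linux64,
        OSX64 = 1 << 4
    }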

@OtherCrashOverride @masonwheeler please note that PlatformID won't come to .NET Core, it was replaced with the OSName APIs: https://github.com/dotnet/corefx/pull/1494

You guys are just skimming now, aren't you? :wink:

PlatformID should be an independent enum specified and versioned for use with [NativeLibraryImport] only. This will allow for rapid adoption of new values to meet needs not anticipated by a more generic platform specifier.

@akoeplinger The intent of PlatformID was to be unique to [NativeLibrary]. It has the same name as System.PlatformID simply because that is what the prototype used. It should probably be renamed to distinguish it. In the prototype it served as a level of abstraction to allow translation from either the full .Net/Mono PlatformID or the CoreCLR OSName.

The reason it is independent was to account for unforeseen needs to distinguish a platform. A possible example is the various BSDs. The family may be BSD, but the flavor could be any one of many:
https://en.wikipedia.org/wiki/List_of_BSD_operating_systems

This answers @masonwheeler's question: no, 64 platforms (divided by 2 for 32/64-bit) is not a realistic limit.

We should probably just add an optional flag to the [NativeLibrary] attribute that distinguishes native word size if needed:

[Flags]
public enum NativeWordSize
{
  Bits32 = (1 << 0),
  Bits64 = (1 << 1),
  Any = (Bits32 | Bits64)
}

The default value would be NativeWordSize.Any. This coincides with managed code's flags that specify 32-bit, 64-bit, or AnyCPU.

Example usage:

[NativeLibrary(PlatformID.WinNT, "SomeLibrary32.dll", NativeWordSize = NativeWordSize.Bits32]
[NativeLibrary(PlatformID.WinNT, "SomeLibrary64.dll", NativeWordSize = NativeWordSize.Bits64]
public static extern void SomeLibraryExport(IntPtr thisChanges);

[NativeLibrary(PlatformID.WinNT, "SomeOtherLibrary.dll"]
public static extern void SameLibraryNameForAnyBitSizeExport();

@OtherCrashOverride ah yeah, I got confused by it having the same name, sorry. That said, I don't like having yet another enum that specifies OS/Platform, it's bad enough we have the split of PlatformID on Desktop.NET and OSName on Core. Your prototype is very interesting though, I'm eager to see what the end result will look like :smile:

@Alxandr If possible, could you please share links to code that show how you transform the Roslyn compilation object during the build?

@OtherCrashOverride I agree that this feature is very useful for implementation of certain cross-platform libraries. However, it is important that it is implemented as separate component - it does not prevent it from being commonly used by libraries and becoming de-facto standard.

Historically, the classic .NET Runtime was a huge monolithic box that did many different things. .NET Core is a fresh start on factoring .NET into a number of small separate components that can be evolved independently and shipped frequently. We are trying to be very intentional about the architecture boundaries in .NET Core.

This was a long thread to read, but I think (in my mind at least) there is a logical conclusion: this should not be part of the runtime, but instead a pattern or practice. I say this because all the permutations/combinations that would need to be put inside the runtime to cover all cross-plat cases amount to work that will most likely fall short in some bizarre situation anyway!

@OtherCrashOverride -- are you not satisfied with the solution you proposed? Or are you saying that it is burdensome? Can you elaborate on a scenario that will not be solved by having an external code generator, be it something based on Roslyn during build or generating a delegate?

FYI, someone seems to have already had a similar idea and written one over a year ago ... https://github.com/Giorgi/Dynamic-PInvoke

However, it is important that it is implemented as separate component

mscorlib uses [DllImport] itself. It's not really practical to make it a separate component when the runtime itself depends on it.

are you not satisfied with the solution you proposed?

As previously stated

The prototype using delegates exhibits three undesirable facets. 1) __arglist is impossible to use. 2) Delegates can not be overloaded. 3) Its slower than [DllImport].

Can you elaborate on a scenario that will not be solved by having an external code generator, be it something based on Roslyn during build or generating a delegate?

Having an external code generator makes the problem worse. In addition to having the source code, an end user is now also required to have the specific generator used. The difference is going from adding a simple DllMap file, as Mono does today, to having full build tools and source code for every library used. It's not unrealistic to expect that each library will require its own code generator in addition to its own library dependencies.


This issue also affects Microsoft, as is evident in dotnet/coreclr#1257. The difference is that Microsoft simply changes the behavior of [DllImport] to meet their needs. If each person maintaining an incompatible fork of CoreCLR with their desired behavior of [DllImport] is the solution, then let's go with that. All that is asked is that an OFFICIAL statement to that effect be given.

I think a substantial factor in this issue is that CoreCLR is seen as a "necessary evil" (something that you do not like but which you know must exist or happen) to get ASP.Net out the door. It would seem that CoreCLR's development is only seen through ASP.Net glasses. So let me be explicit about this: There is a desire to use CoreCLR where ASP.Net will not be present. There is a desire to use CoreCLR where DNX will not be present.

Having a standard, agreed upon, resolution to the issues facing [DllImport] is very important. As previously stated, not even mscorlib can exist without it. [DllImport] was designed and implemented when cross-platform meant "any Windows platform". CoreCLR moves outside that constraint and so must [DllImport].

I concur that whatever solution is presented needs to be 'built in'. It's important that it be deterministic and consistent for everyone. If everyone 'rolls their own as needed using hooks', then library providers will not be able to offer support for anything other than their own 'loader', which may not be compatible with or offer the features of a customer's proprietary 'loader'.

That is the scenario that we are trying to avoid by addressing this issue. Yes, everyone can make their own fork of CoreCLR, or their own code generator, or their own loader. The problem is that instead of one single way for an end-user to get COMPILED code working on their target platform, there are now many: instead of O(1), it's now an O(n) problem.

So once again I will ask: if a pull request for the [NativeLibrary] enhancement of [DllImport] proposed is submitted, will it be accepted (assuming it's a clean patch)? It's ok to say "No". It's ok to say "We will revisit this issue sometime after release." Anything other than silence is acceptable. This is not an attempt to force a position. It's simply that while this issue has been idle, the world continued to turn and there are other schedules that must be aligned with.

@jkotas on mobile, so getting and sharing links is somewhat more tricky than usual, but here is an implementation and usage.

I just wanted to second what @OtherCrashOverride said about how CoreCLR != ASP.NET. There are plenty of other reasons why someone would want an open-source, cross-platform CLR implementation even if they never do any web development work. So please don't make the mistake of thinking that CoreCLR is "an ASP.NET thing," when to many (most?) of the community, it isn't.

I don't think the suggestions pointed out by @jkotas or me imply that ASP.NET == CoreCLR; it is a well-identified shipping vehicle, though.

I think a substantial factor in this issue is that CoreCLR is seen as a "necessary evil" (something that you do not like but which you know must exist or happen) to get ASP.Net out the door. It would seem that CoreCLR's development is only seen through ASP.Net glasses. So let me be explicit about this: There is a desire to use CoreCLR where ASP.Net will not be present. There is a desire to use CoreCLR where DNX will not be present.

This is not true. Neither is CoreCLR considered a necessary evil, nor is this specific issue tied to any of that discussion at all.

Having an external code generator makes the problem worse. In addition to having the source code,
an end user is now also required to have the specific generator used. The difference is going from
adding a simple DllMap file, as Mono does today, to having full build tools and source code for
every library used. Its not unrealistic to expect that each library will require its own code generator in
addition to its own library dependencies.

It's quite clear in my mind that this can be implemented without the CoreCLR runtime's co-operation, and that is a good thing. Imagine the DllImport attribute feature didn't exist and you had to call into helper code that sits inside another library. One would do what you're suggesting. I mean, the runtime today does something quite similar for PInvoke stubs anyway.

As @jkotas has said, CoreCLR is a new beginning of sorts, and while DllImport is useful in helping the core framework porting effort, maybe for truly cross-plat libraries we can say one needs to implement a solution at the library author's end. And @jkotas is reiterating that any API that needs to be exposed to make this easier will be a candidate for inclusion in corefx. The pattern we set up here can be widely adopted and pushed for as "official" or as "really good guidance".

So once again I will ask: if a pull request for the [NativeLibrary] enhancement of [DllImport] proposed is submitted, will it be accepted (assuming it's a clean patch)? It's ok to say "No". It's ok to say "We will revisit this issue sometime after release." Anything other than silence is acceptable. This is not an attempt to force a position. It's simply that while this issue has been idle, the world continued to turn and there are other schedules that must be aligned with.

I understand your question, but I don't think we've settled on a path forward right? @jkotas can definitively answer, but it seems like this belongs to an external component and not inside CoreCLR.

Imagine the feature of DLLImport attribute didn't exist, and you have to call into helper code that sits inside another library.

Why? As he already pointed out, it _does_ exist, and it's even _used inside mscorlib itself_. In light of that, making it external would be crazy.

Btw, if anyone wants to take a look at my code generation for dealing with native imports (still just a WIP), it is tracked here: https://github.com/YoloDev/YoloDev.Dnx.Utils/pull/1

Neither is CoreCLR considered a necessary evil, nor is this specific issue tied to any of that discussion at all.

Earlier in this discussion it was stated that DNX and NuGet should handle this.

Imagine the feature of DLLImport attribute didn't exist, and you have to call into helper code that sits inside another library.

As part of the ECMA 335 specification

I.9.3 Unmanaged code
It is possible to pass data from CLI managed code to unmanaged code. This always involves a transition from managed to unmanaged code, which has some runtime cost, but data can often be transferred without copying. When data must be reformatted the VES provides a reasonable specification of default behavior, but it is possible to use metadata to explicitly require other forms of marshalling (i.e., reformatted copying). The metadata also allows access to unmanaged methods through implementation-specific pre-existing mechanisms.

II.23.1.10
Flags for methods [MethodAttributes]
PInvokeImpl 0x2000 Implementation is forwarded through PInvoke

This is a feature for the runtime as stated by the standard.

@OtherCrashOverride

if a pull request for the [NativeLibrary] enhancement of [DllImport] proposed is submitted, will it be accepted

Implementation built into the core runtime will not be accepted. We will be happy to support an implementation built as a separate component.

@masonwheeler

it's even used inside mscorlib itself. In light of that, making it external would be crazy

It is not that crazy. The expanded code for PInvoke can be generated using a separate tool, even for mscorlib. (BTW: It is what we are doing in .NET Native for all PInvoke marshalling.)

Implementation built into the core runtime will not be accepted. We will be happy to suppport implementation built as separate component.

Thank you.

@Alxandr

YoloDev/YoloDev.Dnx.Utils#1

I am looking forward to what you are going to come up with. It would be useful to try it out on some large existing piece of code, like what @OtherCrashOverride has done for his earlier prototype.

@jkotas the tools you talked about, that generate the marshaling code, is not open source (yet)?

Correct, the MCG tool is not open source (yet).

Are there any methods in the framework that do cross-platform loading of native libraries and getting pointer addresses for functions, or are all of these hidden and internal (and Windows-only)?

There are not right now. https://github.com/dotnet/coreclr/issues/937 is related.

Some concepts for cross-plat support can be borrowed/inspired from mono, Ruby's foreign function interface: https://github.com/ffi/ffi/wiki, node.js FFI proposal: https://github.com/nodejs/node/pull/1750 (they are currently dealing with it the hard way: https://github.com/nodejs/node-gyp/issues/629).

According to Wikipedia (_yes, Wikipedia is still a thing.._):

Java refers to its FFI as the JNI (Java Native Interface) or JNA (Java Native Access)

which may be another source of inspiration in bringing cross-platform P/Invoke to CoreCLR.

Seems like the new CLI tools bring back .config support for .NET Core. It can be used for dllmapping if that ever happens.

https://github.com/dotnet/cli/commit/91acc03a137c4dd6d44ca17aa7ce2078824cc672

This issue was raised May 5. It's now Dec 11 (7 months later). Pull requests are not accepted and no further guidance has been provided. I don't think it's unfair to say "this train will not be arriving at the station any time soon."

It's now simply a question of deciding whether to fork the release and modify it, or just move on to something else like Swift. Short term, developers will probably want a modified runtime that's useful outside of an ASP.net environment. Long term, it's probably a lot less work to migrate away.

@OtherCrashOverride Sorry for not replying sooner. I'd like to reiterate @jkotas's point that we are not going to accept pull requests into the runtime for this particular feature; we'd like to see if we can add this into MCG, which will be plugged into the CLI tools to provide seamless integration, as if it were a runtime feature. We are working hard to make sure MCG works on all the interesting platforms, such as .NET Native, CoreRT, and CoreCLR, and we are also planning to start the work on open-sourcing MCG (we have already made significant progress on open-sourcing part of the support helper library of MCG). MCG will be the future platform where new functionalities like this are to be added. I'll share more updates when we are ready to share them.

@OtherCrashOverride Have you looked at the CLI tools? It's no longer tied to web development; I really like it. In regards to dllmapping, I implemented a basic solution for my own cross-plat project like 5 months ago that works on all CLR implementations; it simply uses dlopen/LoadLibrary directly, then obtains function pointers (and converts them to managed delegate instances) using dlsym/GetProcAddress. I am still interested in this issue because it would still be nice if it were something that is part of the .NET Core development framework.
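For reference, the core of such a solution is quite small. A rough sketch (the dlopen/dlsym and LoadLibrary/GetProcAddress entry points are the real OS APIs; the class, the "libdl" import name, and the Bind helper are assumptions, and error handling is omitted):

    using System;
    using System.Runtime.InteropServices;

    static class NativeBinder
    {
        // "libdl" resolves to libdl.so on most Linux distros; adjust as needed
        [DllImport("libdl")]
        static extern IntPtr dlopen(string path, int flags); // RTLD_NOW == 2 on glibc

        [DllImport("libdl")]
        static extern IntPtr dlsym(IntPtr handle, string symbol);

        [DllImport("kernel32", EntryPoint = "LoadLibraryA", CharSet = CharSet.Ansi)]
        static extern IntPtr LoadLibrary(string path);

        [DllImport("kernel32", CharSet = CharSet.Ansi)]
        static extern IntPtr GetProcAddress(IntPtr handle, string name);

        public static T Bind<T>(string windowsLib, string unixLib, string symbol)
            where T : class
        {
            bool windows = RuntimeInformation.IsOSPlatform(OSPlatform.Windows);
            IntPtr handle = windows ? LoadLibrary(windowsLib) : dlopen(unixLib, 2);
            IntPtr p = windows ? GetProcAddress(handle, symbol) : dlsym(handle, symbol);
            return Marshal.GetDelegateForFunctionPointer(p, typeof(T)) as T;
        }
    }

Usage is then a one-liner per function, e.g. var memcpy = NativeBinder.Bind<MemcpyDelegate>("msvcrt.dll", "libc.so.6", "memcpy");, where MemcpyDelegate is a delegate type matching the native signature.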

In case people don't know what yizhang82 means by MCG: http://blogs.msdn.com/b/dotnet/archive/2014/06/13/net-native-deep-dive-debugging-into-interop-code.aspx

I implemented a basic solution for my own cross-plat project like 5 months ago that works on all CLR implementations; it simply uses dlopen/LoadLibrary directly, then obtains function pointers (and converts them to managed delegate instances) using dlsym/GetProcAddress.

As noted in this thread, I did the same thing. The issues with it are also noted: 1) no ability to overload a delegate, 2) a delegate is slower than P/Invoke, and 3) more code to implement than a simple [DllImport].

MCG does not solve this issue; it simply obfuscates it. How, using MCG, do I specify different library names for different platforms without the need to recompile/ifdef for each of them, as is currently necessary for [DllImport]?

[edit]
MCG (Marshalling Code Generator) appears to do just that: reduce managed types to primitive types. The end call still appears to be done with [DllImport]. As such, it inherits this issue.

@xanather re:

Seems like the new CLI tools bring back .config support for .NET Core. It can be used for dllmapping if that ever happens.

That's only for full .NET Framework, not .NET Core.

@OtherCrashOverride MCG is going to be the interop technology for all runtime interop implementations, such as CoreCLR, .NET Native, CoreRT, etc. CoreCLR today does implement interop (pinvoke, COM interop, WinRT) in the VM for historical reasons, but that's not where we would like to be in the future. You can think of MCG as a tool that magically understands interop constructs (any attributes, metadata flags/tables, interop methods, etc.) and provides the implementation for them (instead of the runtime) as part of the compilation (.NET Native toolchain, C# compilation, etc.). As a matter of fact, .NET Native does not implement any interop other than the most basic support for pinvoke/calli/[NativeCallable] for primitive types.

Yes, MCG doesn't do what you want today, but it provides the platform for such functionality to be built on, as it provides the real C# implementation for your pinvokes (as well as COM interop and WinRT methods), and that's the perfect opportunity to provide more customization in the LoadLibrary/dlopen policy, which can be customized either through attributes or config files (it's not immediately clear to me what's the best policy here. We need to do more investigation).

The biggest reason that we are not going to accept potential PRs into CoreCLR for this particular proposal is because we believe CoreCLR is not the right place to plug in such policies going forward - MCG is the preferred solution. The runtime should only support the most basic interop building blocks and the real interop features/policy should be left to MCG.

Hope this clarifies the direction that we are going. Feel free to let us know if you have more questions.

MCG doesn't do what you want today

provide more customization in the LoadLibrary/dlopen policy, which can be customized either through attributes or config files (it's not immediately clear to me what's the best policy here. We need to do more investigation)

That is the point my previous comment made. This issue has just moved from [DllImport] to MCG. It has not been resolved.

To clarify, I do not require any of the functionality that MCG provides. I am capable of marshaling managed types to primitive types with what is available today. Marshaling is not of any concern to this issue. Even if [DllImport] only supports primitive types as mentioned, that is fine. When this issue is solved for MCG, it is also solved for [DllImport], as it's the exact same issue. Which brings us to the heart of the matter: let's solve the issue and everyone is happy!

Again, for clarity:

which can be customized either through attributes or config files

That is all we need to know to make [DllImport] work. We don't need MCG. We don't need to modify Roslyn. We simply need to know which custom attributes or config files should be present and what their effect should be.

I understand the purpose of MCG and the need for it in AOT scenarios. There is no need to advocate it or explain it. However, MCG should not be put forth as a solution to this when, in fact, MCG also needs this issue solved.

The point of this issue is to define the following:

which can be customized either through attributes or config files

Should we open a new issue for MCG?

Instead of the current issue titled "Handling p/invokes for different platforms and discussions about dllmap", I can create an issue titled "Handling p/invokes for different platforms and discussions about MCG".

@OtherCrashOverride Thanks for the suggestion. Let's keep this thread active for now as this thread has a lot of valuable discussion and insights, and the issue is not resolved yet. Once MCG is open-sourced, I'll migrate all the related interop issues over to the new place.

It might be time to reconsider this since we have a unified host now.

In response to needs for the LLVMSharp project and this issue, I have written a PInvokeCompiler where we take an assembly as input that contains PInvokeImpls and generate an assembly that is expanded with marshalling information, and also make it aware of xplat situations via an attribute. The attribute is here: https://www.nuget.org/packages/NativeLibraryAttribute

The idea is that you can specify different module refs (dll name) & entry points (function name) based on a Platform Identifier and Pointer size.

The tool is in its infancy but does support the ideas brought up here, except Native Long (which I still think is something the native bindings should take care of or not expose to consumers); but if there is a genuine benefit, i.e. it does in fact ease the development of a wrapper, then it's possible to include it.

Also given the overlap that is likely going to exist with MCG, I'm going to work with @yizhang82 to see how we can rationalize all of this into a single great tool for .NET rather than multiple tools.

@mjsabby Absolutely. Let's chat offline and see how we can work together to move this forward.

The attribute is here : https://www.nuget.org/packages/NativeLibraryAttribute

Is there source code for it? NuGet is forbidden in many environments due to security constraints.

except Native Long (which I still think is something the native bindings should take care of or not expose to consumers)

Could you provide example code usage of how you envision this working? Or is the solution still to write two different bindings/assemblies for the same API?

I don't understand why it's possible to participate in managed loading via an event, but not unmanaged. An event similar to the Resolving event for managed assemblies would be great: let me know when the runtime is attempting to load an unmanaged library and allow me to (attempt to) provide my own handle to the loaded native library. If all of the event handlers return IntPtr.Zero (or indicate failure in some other way), fall back to the "native" built-in loading.

On Linux I found a practical workaround for this issue, which does not involve modifying the DllImports at all.

The idea is to generate a stub shared library that is named like the Windows DLL and contains a reference to the real library name for Linux. The DllImport in .NET code remains unchanged and uses the Windows DLL name (without .dll suffix). The .NET core native loader will then load the stub shared object. This action invokes the Linux dynamic loader (ld.so) which then resolves the dependency of the stub on the real library and automatically maps all symbols from the real library into the stub.

To generate a stub library do the following:

touch empty.c
gcc -shared -o libLinuxName.so empty.c    
gcc -Wl,--no-as-needed -shared -o libWindowsName.so -fPIC -L. -l:libLinuxName.so
rm -f libLinuxName.so

The result can be checked using the readelf command:

$ readelf -d libWindowsName.so
Dynamic section at offset 0xe38 contains 22 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libLinuxName.so]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000c (INIT)               0x4b8
 0x000000000000000d (FINI)               0x600
...

Each stub is very small (8 KB) and thus can easily be included in cross-platform NuGet packages.
It is probably possible to generate the stub library with a simple invocation of ld since no compilation is actually involved, but I was not able to figure out the correct arguments to do that.

A real world example of this technique can be seen at https://github.com/surban/managedCuda/tree/master/StubsForLinux

@surban +1, note that the library cannot have a dot in the name. The loader stops guessing at the first dot.
Another problem is that the names would conflict on *nix systems. You cannot place both Linux and OS X bits together because the loader guesses the same name on both platforms. A workaround is to dynamically extract the correct binary before the first P/Invoke call.

Please take a look here for an actual example:
https://github.com/Microsoft/GraphEngine/blob/master/src/Trinity.Core/Trinity/Runtime/TrinityC.cs
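The gist of that workaround, as a hedged sketch (file names are illustrative, not GraphEngine's actual layout): ship both payloads under distinct names and copy the right one to the probed name at startup.

using System;
using System.IO;
using System.Runtime.InteropServices;

static class StubDeployer
{
    public static void DeployForCurrentPlatform()
    {
        string baseDir = AppContext.BaseDirectory;

        // Linux and OSX would probe for the same file name, so the payloads
        // are shipped under distinct names and renamed into place at runtime.
        string payload = RuntimeInformation.IsOSPlatform(OSPlatform.OSX)
            ? "stub.osx.payload"
            : "stub.linux.payload";

        File.Copy(Path.Combine(baseDir, payload),
                  Path.Combine(baseDir, "libstub.so"), overwrite: true);
    }
}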

@yizhang82
Has MCG been open-sourced? If not, where is it and how can we use it?

Interop with native libraries on .NET Standard: a proposal.

We have C libraries provided by a third party on Linux and Windows platforms. The library names don't follow the usual naming conventions; say the library names are linuxlib.so and winLib.dll. We want to create a .NET Standard project that uses these libraries via the DllImport attribute. The challenge is how to load the correct library for the current OS platform, and .NET Standard 2.0 has no support for this. I want to propose a simple enhancement to .NET Standard that solves the problem.

The code below shows what user code would look like. The map between OS and library is added via the DllImportMap class, and the DllImport attribute now accepts not only a library name but also a map name.
/// <summary>
/// This class is a part of the assembly that uses the C libraries on Windows and Linux platforms.
/// The map should be added to the static constructor of the class that has declarations of the external C functions.
/// </summary>
public class ExternalFunctions
{
    static ExternalFunctions()
    {
        DllImportMap.AddMap("MapName", new Dictionary<OSPlatform, string>
        {
            { OSPlatform.Windows, "winlib.dll" },
            { OSPlatform.Linux, "linuxlib.so" }
        });
    }

    private const string LibName = "MapName";

    [DllImport(LibName)]
    public static extern int CFunc(string val);
}

The code below should be added by Microsoft to .NET Standard:

/// <summary>
/// This class should be implemented by Microsoft.
/// It should be added to the same assembly and namespace as the DllImportAttribute class.
/// It allows adding a map that describes which C library should be used on which platform.
/// Multiple maps can be added. The DllImportMap is a static class.
/// </summary>
public static class DllImportMap
{
    private static Dictionary<string, Dictionary<OSPlatform, string>> masterMap =
        new Dictionary<string, Dictionary<OSPlatform, string>>();

    public static void AddMap(string key, Dictionary<OSPlatform, string> map)
    {
        if (masterMap.ContainsKey(key))
            throw new ArgumentException($"Key {key} already present in the masterMap");
        masterMap.Add(key, map);
    }

    internal static Dictionary<OSPlatform, string> ByKey(string key)
    {
        Dictionary<OSPlatform, string> map;
        masterMap.TryGetValue(key, out map);
        return map;
    }
}

/// <summary>
/// It is assumed that Microsoft has a class that processes the DllImportAttribute.
/// A new method, DllNameAnalyzer, should be added. It processes the dllName parameter of the DllImportAttribute.
/// </summary>
public class DllImportProcessingClass
{
    /// <summary>
    /// The dllName parameter of the DllImportAttribute can now accept a map name.
    /// This method checks whether the libName parameter is a map key and, if so,
    /// retrieves the library name corresponding to the OS platform.
    /// </summary>
    /// <param name="libName">A name of the library, or a key in the map.</param>
    private string DllNameAnalyzer(string libName)
    {
        if (string.IsNullOrWhiteSpace(libName))
            return libName;

        // First check whether libName is a key in the masterMap.
        Dictionary<OSPlatform, string> map = DllImportMap.ByKey(libName);
        if (map == null) // not a key, so it is a library name
            return libName;

        // Which OS platform are we on?
        OSPlatform platform;
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            platform = OSPlatform.Windows;
        else if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
            platform = OSPlatform.Linux;
        else
            platform = OSPlatform.OSX;

        // Retrieve the library for the OS, or null if not specified.
        string foundLibName;
        map.TryGetValue(platform, out foundLibName);
        return foundLibName;
    }
}

<dllmap> in Mono is not implemented with a dependency on System.Configuration; it is implemented independently of that stack.

@efimackerman What you propose sounds like dotnet/corefx#17135; if that's the case, would you mind "upvoting" that issue (thumbs up icon below the issue description)?

@qmfrederik I don't see how my proposal is similar to the one you have mentioned in your comment.

I've recently released a library which solves most (if not all) of the aforementioned issues and bundles a cross-platform, delegate-based approach into a simple and easy-to-use API, which also supports Mono DllMaps as well as custom library search implementations. The backend is superficially similar to the proposed API in dotnet/corefx#17135, but is abstracted away for ease of use.

On top of that, it allows some more interesting extensions to the P/Invoke system, such as direct marshalling of T? and ref T?.

https://github.com/Firwood-Software/AdvanceDLSupport
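The consumer-facing shape looks roughly like this (a hypothetical sketch; the activator name below is illustrative, not the library's actual API):

// Declare the native surface as an ordinary interface...
public interface ILibC
{
    int getpid();
}

// ...and let an activator bind it to the right native library for the current
// platform at runtime. "SomeNativeActivator" is a placeholder name.
ILibC libc = SomeNativeActivator.ActivateInterface<ILibC>("c");
int pid = libc.getpid();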

I would like to see DllMap (or an equivalent) supported in addition to https://github.com/dotnet/corefx/issues/17135.

I voiced the 'issues' I have with the alternative here: https://github.com/dotnet/corefx/issues/17135#issuecomment-374248831

@jeffschwMSFT Look about 3 years up this issue's conversation thread for a bunch of discussion about trying to emulate Mono's XML DllMap system and why that's a very bad idea. Let's please not resurrect that now.

trying to emulate Mono's XML DllMap system and why that's a very bad idea

:+1:

@jeffschwMSFT Based on a quick read of early discussion, I would also recommend against DLLMap. It is reusing a design which was conceived for a different purpose.

So, how do I do this now in .NET Core?
I have SharpFont, which loads freetype via dllmap, and it doesn't work on .NET Core.
I have to manually alter the DllImport attribute to the file name (and it only seems to work with full path + file name?).

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <dllmap dll="freetype6" os="linux" target="libfreetype.so.6" />
    <dllmap dll="freetype6" os="osx" target="/Library/Frameworks/Mono.framework/Libraries/libfreetype.6.dylib" />
    <dllmap dll="freetype6" os="freebsd" target="libfreetype.so.6" />
</configuration>

And why does .NET bind native functions through compile-time attributes anyway, when it then goes on to actually dlsym them at runtime when invoked?
LoadLibrary and dlopen are written in C, and even they aren't that inflexible.

And why have that feature at all?
It would have been much better to have a cross-platform wrapper around LoadLibrary/dlopen and GetProcAddress/dlsym, so people couldn't do it this way in the first place.

Right now, as I see it, I need to create a static class with a lot of delegates, then do the dlopen and dlsym manually, to replace all the freetype internals; that is, if I don't want to create three versions of the assembly (one per platform) because the shared object name differs.
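A minimal sketch of that manual pattern, assuming a glibc-based Linux host (one freetype entry point shown; error handling kept minimal):

using System;
using System.Runtime.InteropServices;

internal static class FreeTypeNative
{
    [DllImport("libdl.so.2")]
    private static extern IntPtr dlopen(string path, int flags);

    [DllImport("libdl.so.2")]
    private static extern IntPtr dlsym(IntPtr handle, string symbol);

    private const int RTLD_NOW = 2;

    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    internal delegate int FT_Init_FreeType_t(out IntPtr library);

    internal static readonly FT_Init_FreeType_t FT_Init_FreeType;

    static FreeTypeNative()
    {
        // Load the platform-specific library and wire each export to a delegate.
        IntPtr handle = dlopen("libfreetype.so.6", RTLD_NOW);
        if (handle == IntPtr.Zero)
            throw new DllNotFoundException("libfreetype.so.6");

        IntPtr sym = dlsym(handle, "FT_Init_FreeType");
        FT_Init_FreeType = Marshal.GetDelegateForFunctionPointer<FT_Init_FreeType_t>(sym);
    }
}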

And why can't DllImport take a static function as its attribute argument, in addition to a const string?
Name-picking could then be done in a user-defined fashion, this issue would be solved, and all old code would still work.

Since attributes don't exactly allow for a callback, here's an example of what I mean:

namespace NetStandardReporting
{


    // [AttributeUsage(AttributeTargets.Class | AttributeTargets.Interface, AllowMultiple = false, Inherited = true)]
    [System.AttributeUsage(System.AttributeTargets.Method, Inherited = false)]
    public class DynamicDllImportAttribute
        : System.Attribute
    {
        protected string m_dllName;


        public string Value
        {
            get
            {
                return this.m_dllName;
            }
        }

        public string EntryPoint;
        public System.Runtime.InteropServices.CharSet CharSet;
        public bool SetLastError;
        public bool ExactSpelling;
        public System.Runtime.InteropServices.CallingConvention CallingConvention;
        public bool BestFitMapping;
        public bool PreserveSig;
        public bool ThrowOnUnmappableChar;


        public DynamicDllImportAttribute(string dllName)
            : base()
        {
            this.m_dllName = dllName;
        }


        private static System.Type CreateDelegateType(System.Reflection.MethodInfo methodInfo)
        {
            System.Func<System.Type[], System.Type> getType;
            bool isAction = methodInfo.ReturnType == typeof(void);

            System.Reflection.ParameterInfo[] pis = methodInfo.GetParameters();
            System.Type[] types = new System.Type[pis.Length + (isAction ? 0 : 1)];

            for (int i = 0; i < pis.Length; ++i)
            {
                types[i] = pis[i].ParameterType;
            }

            if (isAction)
            {
                getType = System.Linq.Expressions.Expression.GetActionType;
            }
            else
            {
                getType = System.Linq.Expressions.Expression.GetFuncType;
                types[pis.Length] = methodInfo.ReturnType;
            }

            return getType(types);
        }


        private static System.Delegate CreateDelegate(System.Reflection.MethodInfo methodInfo, object target)
        {
            System.Type tDelegate = CreateDelegateType(methodInfo);

            if(target != null)
                return System.Delegate.CreateDelegate(tDelegate, target, methodInfo.Name);

            return System.Delegate.CreateDelegate(tDelegate, methodInfo);
        }


        protected delegate string getName_t();

        public DynamicDllImportAttribute(System.Type classType, string delegateName)
            : base()
        {
            System.Reflection.MethodInfo mi = classType.GetMethod(delegateName,
                  System.Reflection.BindingFlags.Static
                | System.Reflection.BindingFlags.Public
                | System.Reflection.BindingFlags.NonPublic
            );

            // System.Delegate getName = CreateDelegate(mi, null);
            // object name = getName.DynamicInvoke(null);
            // this.m_dllName = System.Convert.ToString(name);

            // System.Func<string> getName = (System.Func<string>)CreateDelegate(mi, null);
            // this.m_dllName = getName();

            getName_t getName = (getName_t)System.Delegate.CreateDelegate(typeof(getName_t), mi);
            this.m_dllName = getName();
        }


    } // End Class DynamicDllImportAttribute 


    public static class DynamicDllImportTest 
    {

        private static string GetFreetypeName()
        {
            if (System.Environment.OSVersion.Platform == System.PlatformID.Unix)
                return "libfreetype.so.6";

            return "freetype6.dll";
        }


        // [DynamicDllImport("freetype6")]
        // [DynamicDllImport(typeof(DynamicDllImportTest), nameof(GetFreetypeName))]
        // [DynamicDllImport("foo", CallingConvention = System.Runtime.InteropServices.CallingConvention.Cdecl)]
        [DynamicDllImport(typeof(DynamicDllImportTest), nameof(GetFreetypeName), CallingConvention = System.Runtime.InteropServices.CallingConvention.Cdecl)]
        public static string bar()
        {
            return "foobar";
        }


        // NetStandardReporting.DynamicDllImportTest.Test();
        public static void Test()
        {
            System.Reflection.MethodInfo mi = typeof(DynamicDllImportTest).GetMethod("bar",
                  System.Reflection.BindingFlags.Static
                | System.Reflection.BindingFlags.Public
                | System.Reflection.BindingFlags.NonPublic);

            object[] attrs = mi.GetCustomAttributes(true);
            foreach (object attr in attrs)
            {
                DynamicDllImportAttribute importAttr = attr as DynamicDllImportAttribute;
                if (importAttr != null)
                {
                    System.Console.WriteLine(importAttr.Value);
                }
            } // Next attr 

        } // End Sub Test 


    } // End Class 


} // End Namespace 

@ststeiger You're SOL with normal .NET Core. However, I've made a library that solves practically all of the issues mentioned in this thread. Might be worth checking out: https://github.com/Firwood-Software/AdvanceDLSupport

@Nihlus:
Nice; and by using interfaces, one could use dependency injection on native libraries.
Definitely worth a look.

I already made a similar thing long ago (loading Oracle native DLLs in ASP.NET and Mono).
However, changing the current source code of DllImportAttribute to that of DynamicDllImportAttribute would be far easier for porting old code.

No backwards-compatibility problems whatsoever.
With this revision, no LINQ is required either, so it's even compatible with .NET 2.0.

You could now even fetch the DLL name from a database.
It would be fun to store the delegate in a member variable and change Value to:

    public string Value
    {
        get
        {
            return getName();
        }
    }

Then you could theoretically even use a different freetype version per tenant, depending on the HttpContext (domain), if available ;)
(assuming the signature and everything else stays the same)
It would be a bit hacky, but it would work.
But I guess that would make a mess with the hmodule, assuming it loads the delegate every time, which it probably doesn't.

What we would need now are extension attributes with extension properties, to retroactively add this to the full .NET Framework without changing its source.

It would have been much better to have a cross-platform wrapper around LoadLibrary/dlopen and GetProcAddress/dlsym

Yes, this is exactly what we plan to do. Tracked by https://github.com/dotnet/corefx/issues/32015.
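(For reference: that work shipped as System.Runtime.InteropServices.NativeLibrary in .NET Core 3.0. A minimal usage sketch, with illustrative library names:)

using System;
using System.Runtime.InteropServices;

// Explicit load and export lookup, no DllImport involved.
IntPtr lib = NativeLibrary.Load(
    RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "freetype6.dll" : "libfreetype.so.6");
IntPtr export = NativeLibrary.GetExport(lib, "FT_Init_FreeType");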

Closing this issue, the matter is tracked by dotnet/corefx#32015.
