Runtime: Feature Request: Variable-size "long" for simplified p/invoke

Created on 3 Oct 2018 · 58 Comments · Source: dotnet/runtime

To greatly simplify p/invoke code (particularly struct definitions, but also method signatures), it would be amazing if there was a "long" type that matched the OS's compiler definition of long. This would mean no more preprocessor definitions, and a single assembly that works with native libraries compiled for multiple OSes.

Here's an example of what you have to do in .NET right now to p/invoke ogg_packet, defined here:

C code:

```c
typedef struct {
  unsigned char *packet;
  long  bytes;
  long  b_o_s;
  long  e_o_s;
  ogg_int64_t  granulepos;
  ogg_int64_t  packetno;
} ogg_packet;
```

C# code:
```c#
[StructLayout(LayoutKind.Sequential)]
struct OggPacket
{
    IntPtr Packet;

#if WINDOWS // long is always 4 bytes on Windows
    int Bytes;
    int BeginningOfStream;
    int EndOfStream;
#else // long matches the word length on Linux/OSX
    IntPtr Bytes;
    IntPtr BeginningOfStream;
    IntPtr EndOfStream;
#endif

    long GranulePosition;
    long PacketNumber;
}
```

Ugly, and I need to ship separate binaries for Windows and OSX/Linux.

To be clear, this is partly about word length and partly about what "long" means in compiled C code on Windows (vs basically every other OS):

Type | Windows 32-bit | Windows 64-bit | *nix 32-bit | *nix 64-bit
--- | --- | --- | --- | ---
C# IntPtr | 4 | 8 | 4 | 8
C# int (Int32) | 4 | 4 | 4 | 4
C# long (Int64) | 8 | 8 | 8 | 8
C long (no managed equivalent) | 4 | 4 | 4 | 8

https://software.intel.com/en-us/articles/size-of-long-integer-type-on-different-architecture-and-os/

I get that this is technically up to the compiler, not the OS, but I believe the dust has settled on these sizes since the shift from 16->32->64 bits.

api-suggestion area-System.Runtime.InteropServices

Most helpful comment

The (internal) attributes are not free to lookup. It is cheaper to just check the name if the number of affected types is small.

All 58 comments

Thinking some more about implementation, it might make the most sense to just go with something like [MarshalAs(UnmanagedType.CLong)] applied to a regular Int32 (or UInt32), like so:

```c#
[StructLayout(LayoutKind.Sequential)]
struct OggPacket
{
    IntPtr Packet;

    [MarshalAs(UnmanagedType.CLong)] int Bytes;
    [MarshalAs(UnmanagedType.CLong)] int BeginningOfStream;
    [MarshalAs(UnmanagedType.CLong)] int EndOfStream;

    long GranulePosition;
    long PacketNumber;
}
```

@EgorBo That refers to types that vary in size by word length (32-bit, 64-bit etc.). Different request but easy to confuse (I originally used the term "native long" but changed it to try and avoid this confusion).

....so why isn't the native code considered "buggy", in that it's using a type with a variable size definition? Especially since the next set of fields has an explicit size?

(Frankly, this is one of my least favorite C/C++ "features", for mostly this reason - that, and in almost all cases the far more interesting information is the range of the domain type, not the size of the physical type)

The merits of using long isn't really the point... my example is a pre-existing API already present on most Linux distributions. And it isn't particularly obscure - I can provide other examples.

[MarshalAs(UnmanagedType.CLong)] int EndOfStream;

This isn't sufficient. This would not allow the upper 32-bits to be used on 64-bit Linux.
Likewise, using long could cause a loss of data on Windows, if managed code were to set the upper 32-bits.

If this were to be handled, it would likely need to be done by an explicit type that the runtime will dynamically size as needed (much like IntPtr or Vector<T>)

I suppose that's potentially true, although in many (most?) cases the upper 32 bits are not particularly useful (considering that the code is assumed to work on platforms with 4-byte longs). I'm curious if there is any new code being written with longs. My assumption is that their occurrence in code is a legacy holdover from when 64-bit architectures were a pipe dream.

Totally not against a "CLong" type, just playing this through. Any solution would be super helpful.

I would like to hope that most "modern/new" C/C++ code is being written to be cross-platform and cross-architecture aware and that they will use explicitly sized types (int32_t, int64_t, intptr_t, etc), rather than relying on the C/C++ language keywords (int, long, long long, etc). And looking around at various "newer" codebases that are explicitly meant to be cross-platform/architecture, they tend to do this (clang, llvm, vulkan, among others).

In an ideal world, many of these codebases (such as Ogg) would have been updated to use intptr_t instead of long when 64-bit computers first started coming out (and the difference between Unix and other platforms was realized). This would have ensured that both Unix and other platforms (such as Windows) had consistently sized structures. But that didn't happen, and left us with the current mess. Due to the differences in size, and common existence for older code-bases, I think its something that still needs some sort of answer, and having a specially recognized type that is the appropriate size on each platform seems like the best solution. This ensures that you have no potential for data loss (when dealt with properly) and gives you a single entry-point for the function interop, rather than needing to cross-compile everything.

[MarshalAs(UnmanagedType.CLong)] int EndOfStream;
This isn't sufficient. This would not allow the upper 32-bits to be used on 64-bit Linux.

I think [MarshalAs(UnmanagedType.CLong)] on int or long would be sufficient. It would work just like any other interop marshaling and throw exceptions when the data does not fit.

If this were to be handled, it would likely need to be done by an explicit type that the runtime will dynamically size as needed

I do not think a special casing like this would be worth it in this case.

I do not think a special casing like this would be worth it in this case.

This keeps coming up (the most notable likely being from @migueldeicaza: https://github.com/dotnet/coreclr/issues/963), and the only "correct" solution, today, is to cross-compile.

It would work just like any other interop marshaling and throw exceptions when the data does not fit.

Which requires you to actually marshal the data, rather than allowing pinning or blitting of the value on 64-bit. If you expect to support 64-bit, you have to use long (or intptr), because you can't necessarily guarantee that the input (from either side) is always less than 32-bits on non-Windows. This also means that Windows is less efficient/etc.

We could potentially just handle this at the framework level, with #ifdef and a Unix vs Windows specific implementation (much like we do for IntPtr).

dotnet/coreclr#963 is a different issue from this one. I think that paying the price to marshal the data is perfectly fine for CLong.

What kind of performance overhead are we talking about? Just the overhead of copying arrays of these things vs pinning, or would even my example be affected somewhat? I'd be reluctant to implement anything that slowed my solution down noticeably. Would probably stick with cross compiling in performance critical code if that is the case.

would even my example be affected somewhat?

Your example would get some extra instructions as well, in some cases at least.

MarshalAs(UnmanagedType.CLong) would have similar performance characteristics and limitations as passing bool over managed/unmanaged boundary. C# bool has to be converted to C++ bool, similar to how C# int or long has to be converted to C++ long here.

I think the best argument for not introducing a new type is that this issue relates to legacy / awkward code interop only. First-class support seems like overkill.

On the other hand, if adding the type isn't too hard, the performance advantages might be nice. And it's probably cleaner.

I will leave it up to you experts. Both solutions are acceptable and I'd just like to see something as soon as possible!

Just want to make sure the attribute would deal with sign extension.

I think the best argument for not introducing a new type is that this issue relates to legacy / awkward code interop only

+1

adding the type isn't too hard, the performance advantages might be nice. And it's probably cleaner.

Adding a first-class type for this is complex. Many different parts of the system would have to know about it, depending on how first-class this type would be. I do not think it is cleaner from the architecture point of view at least. The interop specific constructs should stay within the interop subsystem.

Let's assume a second-class type then, living in InteropServices. I'm okay having to cast to/from Int32 or Int64 to do anything useful on the managed side. Often these are just opaque fields on a struct, and their job is to simply be the correct size. Or they get passed from one unmanaged method to another without touching the value.

It's only when casting to a managed type that the developer needs to worry about range, which is why it feels cleaner to me.

I'm wondering whether a correct port to 64-bit should have simply converted those fields to int (or better, int32_t); as it stands, on 64-bit OSes the struct is bigger for no reason.

I don't know if:
[MarshalAs(UnmanagedType.CLong)] long EndOfStream;

would be clearer in this case.

We also have some native library which we needed to interface.

It's written in C++, and exposes both a C and a C++ interface, and has a history of 20+ years. When porting to 64 bit, the developers of the library said that they could not retroactively change the long and unsigned long types into something like int32_t as it would break binary compatibility for C++ clients, so they would rather live with the fact that they have different sizes and ranges between 64-bit platforms. (They support a vast range of platforms including "bare metal", VxWorks, etc... on a good dozen CPU architectures.)

So we had to bite the bullet and duplicate about 80% of all p/invoke calls and structs we use for interop, guarding all this with if (L64) in wrapper methods...

So, types or a MarshalAs option to marshal integers as native long sized would be very useful in our case...

I am working on cross platform interop for an older library as well and am running into this same issue. My preference would also be for the introduction of MarshalAs(UnmanagedType.CLong) as this would cleanly solve the issue.

@markusschaber are you able to point me to an example of how you are handling this at runtime with your if(L64) guards or provide a little more detail? I am interested in what it would take to support this in our solution.

@jkotas, would it be feasible for us to provide the following library only types?

```c#
namespace System.Runtime.InteropServices
{
    public struct CLong
    {
#if Windows_NT
        private int _value;
#else
        private nint _value;
#endif

        // Similar members to System.IntPtr
    }

    public struct CULong
    {
#if Windows_NT
        private uint _value;
#else
        private nuint _value;
#endif

        // Similar members to System.UIntPtr
    }
}
```

There would be no marshalling or runtime support just simple framework types (and utilizing the existing Condition="'$(TargetsWindows)' == 'true'" support for inclusion with CLong.Windows.cs and CLong.Unix.cs rather than the #if I used above).

The goal here would be to solve a few issues:

  • Being able to define and pass along a C long equivalent even when it is part of another structure

    • Clang, which is a relatively modern API, has this problem for some types like CXUnsavedFile: http://clang.llvm.org/doxygen/structCXUnsavedFile.html

    • This can be particularly troublesome with deeply nested structs or if you are given back a pointer to a struct/array that contains a C long

  • Having a "correctly" sized type that is blittable so you don't need to convert data, you just need to take size into account during reads/writes to that field
  • Having a common interchange type so each interop library doesn't need to define their own at the lowest level
  • Defining it as part of the core library where we already have the infrastructure to ship platform specific components

    • RID specific packages (using runtime.json in a NuGet) aren't officially supported

    • Having multiple RIDs in a single package is supported, but difficult to get right and isn't automatable in the SDK today (afaik)

There would be no marshalling or runtime support just simple framework types

I do not think you can get away without marshalling or runtime support if you want this to represent C long/ulong for interop faithfully. A struct wrapping an integer is not the same as the unwrapped integer in all ABIs.

Ah right, there are cases like COM where it isn't equivalent.
If there was a community contribution here do you think it would be something that would be considered?

I think it might be something worth some effort here, as I don't think the difference will go away: even modern APIs use the type rather than something like intptr_t (Clang and LLVM are one example, and there are other common ones like FreeType as well).
This isn't something that is easy to get right especially when factoring in packaging and subtle cases like wrapper structs not being equivalent.

For example, I've hit issues with trying to manually manage the difference in things like ClangSharp by defining CXUnsavedFile.Windows.cs and CXUnsavedFile.Unix.cs, because it becomes pervasive in the worst cases.
Without something that says "this particular type has a dynamic size like IntPtr" (and is therefore blittable), you end up having to allocate and convert entire structs or arrays (even if only a single field in the struct is long), and then do it again when getting data back.

I think not being able to define a struct wrapper is particularly problematic as it means you can't define some general way to handle this. It forces you to redefine the same logic and forces you to incur the cost every time the native library uses long with no viable workaround outside not using the normal tooling (you have to cross-compile and manually package multiple assemblies which we don't have tooling support for).

That sounds reasonable to me.

Your suggestion says "Similar members to System.IntPtr". I do not think that is desirable - it would inherit the problems of IntPtr not really being a first-class integer type. I would make these types as simple as possible: only methods to convert to/from a primitive integer and the few other "mandatory" struct methods, something like:

```c#
readonly struct CLong : IEquatable<CLong>
{
    public CLong(nint value);
    public nint Value { get { return _value; } }

    public override string ToString();
    public override int GetHashCode();
    public override bool Equals(object o);
    public bool Equals(CLong other);
}
```

I am wondering whether this should be done together with #13788. The places you need to check for CLong/CULong and for #13788 in the runtime are exactly the same for both.

I would make these types as simple as possible: only methods to convert to/from a primitive integer and the few other "mandatory" struct methods, something like

That sounds reasonable to me.

I am wondering whether this should be done together with #13788. The places you need to check for CLong/CULong and for #13788 in the runtime are exactly the same for both.

That's actually what I was going to suggest. The feature work should be roughly the same, that is needing to interpret a user-defined struct as "transparent", but the latter will be more reusable and will unblock any similar scenarios in the future.

I did a rough prototype over UnmanagedType.TransparentStruct here: https://github.com/tannergooding/runtime/tree/fix-13788

The prototype basically treats TransparentStruct the same as Struct but only on blittable types. It then checks a flag in the relevant EmitMarshalReturnValue path such that it behaves identically to MARSHAL_FLAG_BLITTABLEVALUECLASS unless it is a type that would qualify to be returned as a U1/U2/U4/U8.

The new value and a corresponding CLong/CULong type would need to go to API review before being actually implemented/exposed. I could likely update #13788 to also include the following:

```c#
namespace System.Runtime.InteropServices
{
    public readonly struct CLong : IEquatable<CLong>
    {
        public CLong(nint value);

        public nint Value { get; }

        public override bool Equals(object o);
        public bool Equals(CLong other);

        public override int GetHashCode();

        public override string ToString();
    }

    public readonly struct CULong : IEquatable<CULong>
    {
        public CULong(nuint value);

        public nuint Value { get; }

        public override bool Equals(object o);
        public bool Equals(CULong other);

        public override int GetHashCode();

        public override string ToString();
    }
}
```

@tannergooding forgive me if I am not understanding, but isn't nint just an IntPtr-sized value? If that is the case, how does the CLong struct you describe work on Windows x64, where long is 4 bytes but IntPtr is 8?

The underlying field would vary and be either int or nint depending on whether the platform is Windows or Unix, respectively.

The constructor and return value would always be nint and the constructor would need to throw on 64-bit Windows if the input is out of range.

Hmmm, does this mean that you would always have to use MarshalAs on CLong/CULong arguments? That feels clunky. Would it be better to make TransparentStruct a property of the type, not a property of the argument?

Would it be better to make TransparentStruct a property of the type, not a property of the argument?

I had offered that as an alternative proposal on 13788. I can see benefits to both (UnmanagedType being for types you don't own) but think that the attribute on the type would be the more used of the two. I hadn't chosen it as the main proposal since it isn't how other marshaling works today; it's easy enough to switch over though.

Edit: Fixed the sentence to indicate only one of the two would be implemented. I don't think we need both.

I would like any new interop marshaling features to work well with marshaling generated by source generators. Handling CLong/CULong via MarshalAs does not work well with that model.

I updated https://github.com/dotnet/runtime/issues/13788. The new proposal is:

```c#
namespace System.Runtime.InteropServices
{
    [AttributeUsage(AttributeTargets.Struct, Inherited = false)]
    public sealed class TransparentStructAttribute : Attribute
    {
    }

    [TransparentStruct]
    public readonly struct CLong : IEquatable<CLong>
    {
        public CLong(nint value);

        public nint Value { get; }

        public override bool Equals(object o);
        public bool Equals(CLong other);

        public override int GetHashCode();

        public override string ToString();
    }

    [TransparentStruct]
    public readonly struct CULong : IEquatable<CULong>
    {
        public CULong(nuint value);

        public nuint Value { get; }

        public override bool Equals(object o);
        public bool Equals(CULong other);

        public override int GetHashCode();

        public override string ToString();
    }
}
```

My initial suggestion is that the marshaler will recognize the type and will respect it if it is applied to a struct which is blittable and contains a single field.

If it contains more than one field or is not blittable the marshaler can either error or ignore it. I would imagine that ignoring it is easier and we could flag invalid usages with an analyzer, but I'm happy to have it error as well if we think that is better.

Can this be done directly without involving marshalers - make the JIT to use the right convention for these?

If there are special runtime-provided marshalers involved, it does not work for the interop marshaling generated by source generators.

@jkoritzinsky or @AaronRobinsonMSFT may know better, but AFAIK the decision for COM and DllImport is currently made in ILMarshaler::EmitMarshalReturnValue which is called via the NDirect::CreateCLRToNativeILStub paths we have.

For calli, I'm not familiar with where that decision is made.

Can this be done directly without involving marshalers - make the JIT to use the right convention for these?

I am not sure putting this special knowledge into the JIT is much better than embedding it in the marshaler. Source generators here seem like the win since users would be able to define their own semantics in any way they desire.

There is also the possibility that in a source generator world types could declare their own marshaling semantics. Consider the CLong case above. If a source generator existed that ran at compile time and searched for an attribute on types used in P/Invokes (e.g. MarshalInAttribute and MarshalOutAttribute), these attributes could contain tokens to methods to which the marshaling semantics would be deferred during the P/Invoke.

Example:

```c#
[MarshalInAttribute(CLong.MarshalIn)]
[MarshalOutAttribute(CLong.MarshalOut)]
public readonly struct CLong : IEquatable<CLong>
{
    ...

    public static IntPtr MarshalIn(ref CLong l) { ... }
    public static CLong MarshalOut(IntPtr i) { ... }
}
```

@tannergooding Yes that is the location for returning, but there is also in and byref. Is the return location of special interest here? I just discovered this issue and quickly read it so I could be missing something.

For calli, I'm not familiar with where that decision is made.

I just realized this is for the C# function pointer proposal. I believe that decision is in the JIT.

Source generators here seem like the win since users would be able to define their own semantics in any way they desire

With types like HRESULT, having source generators work is trivial. You define a single signature that takes/returns int and a wrapper signature that takes/returns a HRESULT wrapper. Likewise, if you have an array or the type nested in another, you can simply reinterpret cast the data and it should "just work".

However, for types like C/C++ long/unsigned long, it is not the same as the type is variable sized. That is, on Windows you need a signature that returns int and you need a signature that returns nint on Unix.

This means you need to define two native signatures (one taking int and the other taking nint) and that you have to find a way to support dispatching both. You can use if (IsOSPlatform) checks but those aren't currently JIT time constants and the alternative of cross-compiling isn't well supported. If you do happen to cross-compile, then you have to figure out how to ship it, which either involves shipping both in the same NuGet package and hoping NuGet resolves them correctly or trying to do what the runtime does with runtime.json, which isn't officially supported and has its own limitations.

The C/C++ long/unsigned long difference has been around for 17 years now and it's still being introduced into new cross-platform APIs (including Clang/LLVM), so I don't think it will be going away.

Having a standard interchange type in the runtime at least solves the problem of shipping the type to users, since we already build and ship platform/architecture specific libraries. The remaining problem ends up being that for instance methods on Windows, returning T and returning struct S { T value; } isn't the same, which leads users back to needing to produce and ship their own platform/architecture specific packages. Having TransparentStructAttribute helps solve that with very minimal changes required to the JIT/VM. I think it will build on the source generator experience and help plug a hole that will otherwise exist.

@tannergooding My point here is that no changes are needed in the runtime at all to support this with the source generator proposal. The TransparentStructAttribute isn't needed and the types are what anyone wants them to be. Not against having the CLong or CULong types, but I don't see a really compelling reason to add them to the library when they are only used in interop; if the source generator were the only thing that used them, I'd rather just let the source generator implementation own them.

Example of source generator approach:

```c#
// Defined in Source generator assembly
namespace SrcGenPInvokeLibrary
{
    [MarshalInAttribute(CLong.MarshalIn)]
    [MarshalOutAttribute(CLong.MarshalOut)]
    public readonly struct CLong : IEquatable<CLong>
    {
        ...

        public static IntPtr MarshalIn(ref CLong l) { ... }
        public static CLong MarshalOut(IntPtr i) { ... }
    }
}

// Using Source generator in user application
namespace User
{
    [GeneratedDllImportAttribute("NativeLib")]
    static partial SrcGenPInvoke.CLong DoubleMe(SrcGenPInvoke.CLong value);
}
```

In this scenario the esoteric types can be defined and owned by the tooling that will be dealing with the interop and kept out of the runtime. Of course there is the shared types issue so having them defined officially has a real compelling argument. However I would argue that P/Invokes having these argument types should be kept outside of public APIs which means they could be embedded into the assembly that is using the source generator.

static partial SrcGenPInvoke.CLong DoubleMe(SrcGenPInvoke.CLong value);

What is the source generator generated method going to look like for this? Is it going to work correctly on both Windows and Linux?

What is the source generator generated method going to look like for this? Is it going to work correctly on both Windows and Linux?

I don't see why it couldn't handle that kind of checking in the implementation of DoubleMe or even in a module initializer that is being proposed.

I'm not against the new types and honestly the attribute isn't really all that bad. My perspective here comes down to some of the reasons for the ComWrappers API. That was designed because there was literally no other way to accomplish the performance characteristics for object identity and to run managed code during the GC. Therefore we added a basic building block that would let users stay in managed code and accomplish what they wanted without adding a lot of new interop surface area. This is also similar to SuppressGCTransitionAttribute - no other way to accomplish the goal.

In this case it appears possible to address this issue with the source generators feature and that seems to be our desired path anyways. I just don't see the need and why source generators aren't sufficient at the moment - that is really why I am not embracing the proposal.

it couldn't handle that kind of checking in the implementation of DoubleMe

Yes, you could. @tannergooding's point is that it is not pretty - even for the simple example. It would need to look like this:

```c#
[DllImport("NativeLib", EntryPoint="DoubleMe")]
static extern int DoubleMe_Windows(int value);

[DllImport("NativeLib", EntryPoint="DoubleMe")]
static extern nint DoubleMe_Unix(nint value);

SrcGenPInvoke.CLong DoubleMe(SrcGenPInvoke.CLong value)
{
    if (IsOSWindows)
    {
        int arg0 = (int)value;
        int retVal = DoubleMe_Windows(arg0);
        return new CLong(retVal);
    }
    else
    {
        nint arg0 = (nint)value;
        nint retVal = DoubleMe_Unix(arg0);
        return new CLong(retVal);
    }
}
```

Maybe it is ok since you will rarely look at this code. Making sure that the IsOSWindows check can be elided by the JIT - which is required for this to be as efficient as possible - would be more generally useful.

However I would argue that P/Invokes having these argument types should be kept outside of public APIs which means they could be embedded into the assembly that is using the source generator.

That entirely depends on what the aim of the library is. There are some interop libraries who are merely providing what are essentially raw bindings with a few helper methods to handle array/string types. They are designed around exposing the raw API surface so that other libraries can use it to create their own managed wrapper or to provide barebones/no overhead access to the API.

// Defined in Source generator assembly

I wouldn't expect CLong to be in the source generator assembly either. The source generator is basically an analyzer, you shouldn't be taking a runtime dependency on it. It would need to be in some assembly that could be referenced by both.
However, in my own binding projects, I wouldn't want to take a dependency on some additional assembly as my interop bindings are meant to be near the bottom of the stack. I also wouldn't want to worry about having more than one CLong/CULong type to deal with or have to worry about packaging and distributing it (which I've tried and not been able to succeed at due to numerous other issues with producing RID specific packages).

My expectation with source generators is that, especially with interop bindings and the sheer number of attributes you may need to correctly annotate various bits of data (like whether a T* is a pointer to T, an input or an output, or an array), users are likely to define and conditionalize the appropriate attributes locally so they don't show up in the "Release" builds of their library. They are only important information to the tooling and would ideally be stripped in the final output (but given that source generators can't modify existing sources, it will have to be conditional attributes instead).

This would mean rather than taking a dependency on some SourceGenerator.Attributes.dll, the source generator nuget package would include some automatically included Compile items that define those attributes and are [Conditional("DEBUG")] (or similar).

I don't see why it couldn't handle that kind of checking in the implementation of DoubleMe

The problem with this checking is that IsOSPlatform is not a JIT time constant. This means you will incur the actual check (which is currently a string comparison) every time you need to go down this path.
We could (and possibly should) improve that, but it doesn't solve the issue with arrays or nested structs, as you can't simply take in user input, reinterpret cast, and pass it along. You have to actually marshal/convert the data (which can get very complicated) all for a primitive data type of the main interop language which should be blittable.

Yes, you could. @tannergooding's point is that it is not pretty - even for the simple example. It would need to look like this:

Yes, and it gets worse when you have structs or arrays of structs containing long/unsigned long. For example:

```c#
struct CXUnsavedFile_Windows
{
    public sbyte* Filename;
    public sbyte* Contents;
    public uint Length;
}

struct CXUnsavedFile_Unix
{
    public sbyte* Filename;
    public sbyte* Contents;
    public nuint Length;
}

[DllImport("libClang", EntryPoint="clang_createTranslationUnitFromSourceFile")]
public static extern CXTranslationUnitImpl* clang_createTranslationUnitFromSourceFile_Windows(void* CIdx, sbyte* source_filename, int num_clang_command_line_args, sbyte** clang_command_line_args, uint num_unsaved_files, CXUnsavedFile_Windows* unsaved_files);

[DllImport("libClang", EntryPoint="clang_createTranslationUnitFromSourceFile")]
public static extern CXTranslationUnitImpl* clang_createTranslationUnitFromSourceFile_Unix(void* CIdx, sbyte* source_filename, int num_clang_command_line_args, sbyte** clang_command_line_args, uint num_unsaved_files, CXUnsavedFile_Unix* unsaved_files);

CXTranslationUnitImpl* clang_createTranslationUnitFromSourceFile(void* CIdx, sbyte* source_filename, int num_clang_command_line_args, sbyte** clang_command_line_args, uint num_unsaved_files, ????* unsaved_files)
{
    // ....
}
```

I don't have a common type I can use in the above. I have to either define a CLong/CULong type myself and deal with creating multi-RID package(s), or I have to incur marshaling overhead by making one of the paths non-blittable.

Okay. Having reread this thread and https://github.com/dotnet/runtime/issues/13788, I see where you are coming from. To be honest, the compelling argument for me is the fields in types. That is hard, to say the least, and I can't come up with a solution that doesn't involve swathes of code, which feels like it defeats the desire to make interop fast.

I am not a fan of the attribute concept since it appears to have a single use case in practice, even though in theory it could be employed for other scenarios. At this point, I would prefer special-casing the CLong and CULong types to behave as desired. It is unclear whether that preference will persist as the way to solve this issue, but I have been convinced that source generators aren't going to be the best solution for the problem.

We can start small and just introduce the CLong types, without the general attribute. It sounds reasonable to me.

> the decision for COM and DllImport is currently made in ILMarshaler::EmitMarshalReturnValue

It is where it is today, but it is not pretty and we keep finding new bugs in it. Related: https://github.com/dotnet/runtime/issues/12375 and https://github.com/dotnet/runtime/issues/33129

Once #12375 is fixed, there would be no changes needed in the IL stubs to support a "treat a struct as if it was a primitive for ABI reasons" type flag or attribute.

> We can start small and just introduce the CLong types, without the general attribute. It sounds reasonable to me.

I think that is also reasonable. But I think the attribute is a bit more generally usable, and it is likely to be more widely used than just CLong/CULong.
The immediately obvious and "required" scenario is indeed CLong/CULong, since we can't easily define a new primitive type (and we probably don't want to either, seeing as it's really for interop).

The more general use case is to provide a mechanism for the same in the future so that we don't need to go modifying the IL again. For example, the iOS/macOS environment also has nfloat, which is float on 32-bit and double on 64-bit.
However, outside of a few scenarios (Apple Watch is one, iirc), Apple has largely dropped support for 32-bit, so the scenarios where you have to support it as 32 bits will decrease over time (unlike CLong, which applies to 64-bit and is a general C primitive that keeps getting used).
But such an attribute would allow Xamarin to provide a type that is handled correctly without us needing to support it directly in S.P.Corelib.

Likewise, it would be usable outside the strictly "required" scenarios for things like HRESULT or other handles/typedefs where devs frequently define a wrapper type. WPF is one example, and is why we need special handling for COM that ignores the correct ABI; such an attribute would allow us to make a "breaking change" where they can trivially attribute their types and maintain the same behavior.

It's also worth noting that we still generate significantly different codegen for returning int vs returning an HRESULT struct. It looks like we don't quite optimize the latter cleanly (likely just part of the first-class struct work): https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABABgAJiBGAOgCUBXAOwwEt8YaBhCfAB1YAbGFADKIgG6swMXAG4AsACgylWoxbtOASRYiIfcVCkz5y5cQDM5WNgAmEJoICe5XBigMwGcgAk6AKKiAKoAMgAq5ADeyuRxlNb+QWHhABSsLOQS2IIMMACUsfExSvFl5ABqOXnkALxZ1TCKpfEAvkVxHQnkGT5VuTDR5ADmMBhy5O1KUxbW1EiUAEzkXNFdANoAIoKC2vzQGKkARJsAGgDi2kdo5AEsUM4AChC9tUdcthgwZ5cAYtjeaDOa63BAAjCiPgwHYZYZ1cgePI3R6wXCSGCiVhw+qImA3cQYUJ4DABKBQaDwgBmOTR+QAul0+FBWNkvmoFjAEF8oEwepkAPofGDYL4/bT/QEPKipfLNMobACyYwAFhA7Hs+IJUkqMKr1fxBAB5PhsRy4GgAOQgukEGVh9MZzNZg3mfJ8gs+3wu4vBQMWMrqAD4EcryQB3chMBg7OXxDbbXb7KCHE7ekF3DxPF4sN5CkVev6+h7psHeSHQ21MbEIzx48go2TozHV3H4sZE9yk8lQKk0goMlpxJks/Ps8ic7m8pIhCLkD3C0XeiUYIGWGWxzqD8jrHV6jVa3dq/fG01Mc1Wm12qsOrfD51j6cpOd5xeFyXOFAB2rB3XhyPRwQN3ILorAfQIZ0iF8Cx9d9pUKLcSnKeJiAAdkjGAI0fCJUnnfMxWXIE4NlLopnlLdQNdLDIM9fCi2cf14LKRCkMoNCmAwvxwJSHCoNo98GOIrdSLjci5ioBYqJWGilzotdGOKLoylQ59pLfFcHjkoDhM3JSxIkrjZ14mT30/eS4mYpDlNw18YPUj91xI5QpiAA==

  • I'm also not quite sure why we are saving all the non-volatile registers when they aren't live, which seems like a separate issue

> nfloat

I think it would be fine to add nfloat along the same lines as CLong. It needs to be bitness-specific anyway. We have a number of Windows-specific core interop types, so having one Apple-specific core interop type would be reasonable.

The "not strictly required" scenarios can be handled by source generators. I would wait for where the source generators will bring us and then we check again whether the general attribute understood by the runtime is needed.

> we still generate significantly different codegen for returning int vs returning a HRESULT struct.

I am not sure which part you mean. The only int vs. HRESULT inefficiency that I can see in your snippet is the unnecessary RBP frame for HRESULT in CreateDXGIFactory3.

> I'm also not quite sure why we are saving all the non-volatile registers when they aren't live, which seems a separate issue

This is required for the P/Invoke transition (digressing from what this issue is about).

> Likewise, it would be useable outside the strictly "required" scenario for things like HRESULT or other handles/typedefs where devs frequently define a wrapper type. WPF is one example and is why we need special handling for COM that ignores the correct ABI, such an attribute would allow us to make a "breaking change" where they can trivially attribute their types and maintain the same behavior.

Unfortunately, it wasn't just WPF. This had external uses as well. I don't think we can ever fix that decision. However, if we could fix it by adopting the attribute, I would be, after you, its biggest champion.

> I think it would be fine to add nfloat along the same lines CLong. It needs to be bitness-specific anyway

Do you have a concern with implementing CLong/CULong (and possibly NFloat) using an internal TransparentStructAttribute, rather than hardcoding those types explicitly? It seems like it's the more straightforward approach and allows one path to support both types.

I would like the special handling for these to not require a marshaling stub, i.e., tell the JIT directly what to do when it matters; do not create a new type of marshaler for this. I do not think the internal TransparentStructAttribute would help you much with doing it this way. I would just check for the types by name in MethodTableBuilder::CheckForSystemTypes.

> Do you have a concern with implementing CLong/CULong (and possibly NFloat) using an internal TransparentStructAttribute, rather than hardcoding those types explicitly?

I personally just don't like the idea of a public attribute modifying a struct in this way. It seems like a mechanism that can be abused, and then we wind up owning the semantics - similar to the struct HRESULT issue above.

However, starting out with a private attribute as an implementation detail of this feature with the option to make it public later is something I could get behind.

> tell the JIT directly what to do when it matters; do not create a new type of marshaler for this.

I am reading @tannergooding's suggestion as wanting the trigger to be the TransparentStructAttribute name instead of any in the set { CLong, CULong, NFloat }. I agree we shouldn't have a new marshaler.

Right, my suggestion wasn't to add a new marshaler but was instead to keep TransparentStructAttribute as the "marker" for this support and to make it internal only for the time being.

@tannergooding I personally am okay with it being internal. I think @jkotas's point is that:

`else if (x || y || z) { ... }`

isn't much different than:

`else if (a) { ... }`

Regardless of how it is done, this conversation has convinced me it has value. I look forward to the PR šŸ˜‰

The (internal) attributes are not free to lookup. It is cheaper to just check the name if the number of affected types is small.

I think @tannergooding captured this conversation very well and has folded all the feedback into https://github.com/dotnet/runtime/issues/13788.

Closing as duplicate.
