As many of you are aware, the CLR / JIT / MSIL has supported the definition and use of native (unsigned) integers since inception.
This "feature" of the runtime was never exposed in any way to C# users, who instead have to use IntPtr / UIntPtr as close, but incomplete, proxies.
Since the .NET 4.0 CLR, it has been possible to add / subtract an integer from a (U)IntPtr, and to do == / != comparisons between (U)IntPtrs, but any other comparison operation is prohibited: they cannot be compared with >, >=, etc., so (U)IntPtrs remain very limited in the amount of pointer arithmetic they allow for...
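To make that surface area concrete, here is a minimal sketch of what IntPtr did and did not allow at the time of this discussion (on newer runtimes IntPtr has since gained more operators, so treat this as illustrating the pre-`nint` state):

``` c#
using System;

class IntPtrArithmeticDemo
{
    static void Main()
    {
        var p = new IntPtr(0x1000);

        // Since .NET 4.0 you can add / subtract a 32-bit offset:
        var q = IntPtr.Add(p, 16);   // p + 16
        var r = q - 16;              // operator- with an int offset also exists

        // ...and compare for (in)equality:
        Console.WriteLine(p == r);   // True
        Console.WriteLine(p != q);   // True

        // But ordering comparisons did not compile:
        // bool b = p < q;   // error CS0019: Operator '<' cannot be applied
        //                   // to operands of type 'IntPtr' and 'IntPtr'
    }
}
```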
These pointer-sized integers come in handy when writing interop / low-level unsafe code that needs to do pointer arithmetic...
Mono's runtime supports a hack where two structs named nint and nuint are defined and, through the Mono runtime, are JIT'd into something similar to the basic CLR types.
It would be very nice if C# could expose those types for interop library writers, and perhaps even follow Mono's naming of nint for native int and nuint for native unsigned int.
If you want to use pointer arithmetic, can't you just use actual pointers instead of IntPtr/nint?
Related to the following issue in CoreCLR, submitted by Miguel de Icaza:
Naming the struct nint might be nice for brevity but for consistency I would prefer that the struct be named something like System.NativeInt and C# provide a keyword alias.
The CLR also already supports a native float type: http://stackoverflow.com/questions/13961205/native-float-type-usage-in-clr. It's called F, although I'm not sure what it actually produces when you compile it.
Sounds like IL has the metadata to support a native float but the CLR/JIT doesn't actually support using it.
@svick I will answer your comment in the https://github.com/dotnet/coreclr/issues/963 issue opened by @migueldeicaza
@mburbea - F is a transient type in the verifier type system. It is not a real type in metadata sense, since no variable or location can have such type. Only in-flight results of computation can be of type F, which generally means "double or higher precision".
nint can be supported directly as a language primitive that maps to "native int". I wonder, though, what would be gained compared to a struct wrapper as in Mono. Pretty clever hack, actually.
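For readers unfamiliar with the struct-wrapper approach being referenced, here is an illustrative sketch of what such a wrapper looks like. This is NOT Mono's actual implementation; it is a hypothetical `NInt` showing the operators that users want and that the runtime could, in principle, recognize and JIT as a raw native int:

``` c#
using System;

// Hypothetical sketch of a Mono-style wrapper struct (not Mono's real code):
// wraps IntPtr and supplies the arithmetic/comparison operators IntPtr lacks.
public struct NInt : IEquatable<NInt>, IComparable<NInt>
{
    private readonly IntPtr v;
    public NInt(IntPtr value) { v = value; }

    public static implicit operator NInt(int value) => new NInt(new IntPtr(value));
    public static explicit operator long(NInt value) => value.v.ToInt64();

    public static NInt operator +(NInt a, NInt b)
        => new NInt(new IntPtr(a.v.ToInt64() + b.v.ToInt64()));
    public static NInt operator -(NInt a, NInt b)
        => new NInt(new IntPtr(a.v.ToInt64() - b.v.ToInt64()));
    public static bool operator <(NInt a, NInt b) => a.v.ToInt64() < b.v.ToInt64();
    public static bool operator >(NInt a, NInt b) => a.v.ToInt64() > b.v.ToInt64();
    public static bool operator ==(NInt a, NInt b) => a.v == b.v;
    public static bool operator !=(NInt a, NInt b) => a.v != b.v;

    public bool Equals(NInt other) => v == other.v;
    public override bool Equals(object obj) => obj is NInt n && Equals(n);
    public override int GetHashCode() => v.GetHashCode();
    public int CompareTo(NInt other) => v.ToInt64().CompareTo(other.v.ToInt64());
    public override string ToString() => v.ToString();
}

class Demo
{
    static void Main()
    {
        NInt a = 10, b = 3;
        Console.WriteLine((long)(a + b));   // 13
        Console.WriteLine(a > b);           // True
    }
}
```

Without runtime cooperation this wrapper pays the cost of going through 64-bit arithmetic on 32-bit processes, which is exactly the gap a recognized intrinsic would close.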
nfloat, however, would run into problems with support on CLR level.
I've decided to reopen the issue because I think this is actually the right place to address this.
The CIL / CLR already supports these types, and the discussion in https://github.com/dotnet/coreclr/issues/963 clearly addresses the two main points.
The main problem I have with all that has been said thus far is that it makes no sense for users to go and write their own implementations of struct nint / struct nuint.
@svick asked a few comments ago why this is required, given that the user can simply use some sort of native pointer type to mimic these native int types.
@svick is partially right in his answer and partially wrong (or optimistic):
There are plenty of examples of native-int-like code that could be written this way...
Here's an example of a fully unrolled memcmp function in C# that accomplishes this by using a `long *` where a more correct native int should probably have been used:
``` c#
using System;
using System.Diagnostics.Contracts;

public class Mem
{
    private static readonly int STRIDE = IntPtr.Size;
    private static readonly int HALF_UNROLL_STRIDE = STRIDE * 2;
    private static readonly int FULL_UNROLL_STRIDE = STRIDE * 4;

    public static unsafe bool Cmp(byte* src, byte* dst, int len)
    {
        Contract.Requires(len >= 0, "Negative length in memcmp!");
        // long* stands in for the missing native int; note that this only
        // matches the stride math above in a 64-bit process, where
        // sizeof(long) == IntPtr.Size -- exactly the kind of subtlety a
        // real native int would avoid.
        var srcl = (long*) src;
        var dstl = (long*) dst;
        if (len >= FULL_UNROLL_STRIDE) {
            do {
                if (dstl[0] != srcl[0]) return false;
                if (dstl[1] != srcl[1]) return false;
                if (dstl[2] != srcl[2]) return false;
                if (dstl[3] != srcl[3]) return false;
                dstl += 4;
                srcl += 4;
            } while ((len -= FULL_UNROLL_STRIDE) >= FULL_UNROLL_STRIDE);
        }
        if (len <= 0)
            return true;
        if ((len & HALF_UNROLL_STRIDE) != 0) {
            if (dstl[0] != srcl[0]) return false;
            if (dstl[1] != srcl[1]) return false;
            dstl += 2;
            srcl += 2;
            len -= HALF_UNROLL_STRIDE;
        }
        if ((len & STRIDE) != 0) {
            if (dstl[0] != srcl[0]) return false;
            dstl++;
            srcl++;
            len -= STRIDE;
        }
        src = (byte*) srcl;
        dst = (byte*) dstl;
        while (len-- > 0)
            if (*dst++ != *src++) return false;
        return true;
    }
}
```
There's nothing wrong with this approach, as @svick pointed out, and while I personally don't like that it looks and reads like a hack, it really does work the same way it would if `native int` were available. Do note, though, that if there were a `native int` type, the code could have used `sizeof(nativeint)` (or whatever the C# name would be), and the JIT would probably enjoy the ability to fold those constants (instead of `static readonly` variables) into the machine code...
The larger problem with this approach shows up when pointers are used in arithmetic operations.
Here's a very short example:
``` c#
static unsafe void Main(string[] args)
{
    var p1 = (byte*)0x12345678;
    var p2 = (byte*)0x87654321;
    var x = p2 - p1;
    Console.WriteLine("Pointer-Size: {0}, DiffSize: {1}", IntPtr.Size, x.GetType());
}
```
Now, guess what this program prints as a 32-bit vs. a 64-bit process?

32-bit process: `Pointer-Size: 4, DiffSize: System.Int64`
64-bit process: `Pointer-Size: 8, DiffSize: System.Int64`
Now, what is the C# compiler supposed to do with the type of the variable x?
The C# compiler decided, quite conservatively if I might add, that since it's not clear at compile time what the pointer size will be at runtime, x needs to be a long, ALWAYS.
This means that any sort of interop code that needs to do size_t / ptrdiff_t style calculations will automatically be "upcast" to 64-bit arithmetic, which is both slower in 32-bit processes and not exactly what the user might expect it to do...
This is the sort of nuanced difference that, when copy-pasting / porting code from C/C++ to C#, gets the developer into a world of pain, since it has the tendency of introducing very (read: VERY) subtle bugs...
This, in short (well, it wasn't short in reality), is the reason I think C# needs clearer and BETTER semantics for pointer arithmetic, and the best way to get there, I think, is to introduce a native type that is ALREADY there in the CLR/CIL into the language itself, AND make C# actually use it by default for pointer arithmetic, and generally speaking whenever the user asks for it...
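To make the porting hazard concrete, here is a minimal sketch (plain long arithmetic stands in for the pointer subtraction so it compiles without /unsafe; the addresses are made up) of how the always-long difference in C# diverges from a 32-bit ptrdiff_t:

``` c#
using System;

class DiffDemo
{
    static void Main()
    {
        // Simulating (byte*)0x90000010 - (byte*)0x10.
        // In C# the pointer difference is ALWAYS computed as a long:
        long diff = 0x90000010L - 0x10L;
        Console.WriteLine(diff);        // 2415919104

        // ...whereas 32-bit C/C++ does the subtraction in a signed 32-bit
        // ptrdiff_t, where the same value wraps to a negative number:
        Console.WriteLine((int)diff);   // -1879048192
    }
}
```

Code ported from C that implicitly relied on the 32-bit wrapping (or on the comparison going negative) silently changes behavior under C#'s always-64-bit result.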
Have I understood correctly that the C# compiler does not permit arithmetic operations on IntPtr?
F# has nativeint, but I don't know whether additions, subtractions, and so on are possible on it...
This is being done in C# 9.0. You can follow along at https://github.com/dotnet/csharplang/issues/435