When writing a framework that dispatches to user methods, it's quite common to reflect over all methods of a particular shape on an object and store a `MethodInfo` for later use. The scan-and-store is typically a one-time cost (done up front), but these methods are invoked many times (one or more times per HTTP request, for example). Historically, to make invocation fast, you would generate a `DynamicMethod` via reflection emit, or compile an expression tree, to invoke the method instead of using `MethodInfo.Invoke`. ASP.NET does this all the time, so much so that we now have a shared component for it (as some of the code can be a bit tricky).
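As a rough sketch, the compiled-expression-tree variant of that trick (essentially what ASP.NET's `ObjectMethodExecutor` does) looks something like this. It's a minimal version, assuming instance methods with no ref/out parameters and no argument coercion:

```C#
using System;
using System.Linq.Expressions;
using System.Reflection;

static class ThunkCompiler
{
    // Builds a Func<object, object[], object> that calls the given instance
    // method, unboxing each argument and boxing the return value.
    public static Func<object, object[], object> Compile(MethodInfo method)
    {
        var target = Expression.Parameter(typeof(object), "target");
        var args = Expression.Parameter(typeof(object[]), "args");

        var parameters = method.GetParameters();
        var callArgs = new Expression[parameters.Length];
        for (int i = 0; i < parameters.Length; i++)
        {
            // args[i] cast/unboxed to the declared parameter type
            callArgs[i] = Expression.Convert(
                Expression.ArrayIndex(args, Expression.Constant(i)),
                parameters[i].ParameterType);
        }

        Expression call = Expression.Call(
            Expression.Convert(target, method.DeclaringType), method, callArgs);

        // void methods return null; value-type returns get boxed by the Convert
        Expression body = method.ReturnType == typeof(void)
            ? Expression.Block(call, Expression.Constant(null, typeof(object)))
            : Expression.Convert(call, typeof(object));

        return Expression.Lambda<Func<object, object[], object>>(body, target, args)
            .Compile();
    }
}
```

You pay the `Compile()` cost once per method and every subsequent call runs compiled code instead of going through `MethodInfo.Invoke`.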
With the advent of newer runtimes like .NET Native (Project N) and CoreRT, it would be great if there were first-class support for creating a thunk to a method. This is what I think it could look like:
```C#
class Program
{
    // First-class delegate type in the CLR used to express the stub that calls
    // into the underlying method
    public delegate object Thunk(object target, object[] arguments);

    static void Main(string[] args)
    {
        var methods = typeof(Controller).GetMethods();
        var map = new Dictionary<string, Thunk>();
        foreach (var methodInfo in methods)
        {
            map[methodInfo.Name] = methodInfo.CreateThunk();
        }

        // Invoke the Index method (result is null)
        var result = Invoke(map, typeof(Controller), "Index");

        // Result is 3 (yes, things get boxed)
        result = Invoke(map, typeof(Controller), "Add", 1, 2);
    }

    private static object Invoke(Dictionary<string, Thunk> map, Type type, string method, params object[] arguments)
    {
        var target = Activator.CreateInstance(type);
        return map[method](target, arguments);
    }
}

public class Controller
{
    public void Index()
    {
    }

    public int Add(int a, int b)
    {
        return a + b;
    }
}
```
A few notes:
You are really just asking for a faster `MethodInfo.Invoke` - to make it precompiled, not interpreted. It does not need a new API; this performance improvement can be done without introducing new APIs.
Sounds good. That would solve world hunger. Can you elaborate on why `MethodInfo.Invoke` is slow today? What's the difference between `Delegate.CreateDelegate()` -> `delegate.Invoke` vs `MethodInfo.Invoke`?
`MethodInfo.Invoke` has to marshal the argument values from the boxed objects and put them into registers for the method call. Today it is done via an interpreter. You can speed it up by precompiling the marshalling. It is essentially the same thing as what you are doing with expression trees today, except that it would be done in corelib using reflection.emit or something similar.
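As a sketch, the precompiled marshalling stub described here can be built today with the public `DynamicMethod` API. This is a minimal version, assuming instance methods and ignoring ref/out parameters, nulls, and widening:

```C#
using System;
using System.Reflection;
using System.Reflection.Emit;

static class EmitThunk
{
    // Emits a DynamicMethod with signature object (object target, object[] args)
    // that unboxes each argument and calls the target method directly.
    public static Func<object, object[], object> Create(MethodInfo method)
    {
        var dm = new DynamicMethod(
            "Thunk_" + method.Name,
            typeof(object),
            new[] { typeof(object), typeof(object[]) },
            method.DeclaringType.Module,
            skipVisibility: true);

        var il = dm.GetILGenerator();

        // Cast the receiver to the declaring type
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Castclass, method.DeclaringType);

        // Load and unbox/cast each argument from the object[]
        var parameters = method.GetParameters();
        for (int i = 0; i < parameters.Length; i++)
        {
            il.Emit(OpCodes.Ldarg_1);
            il.Emit(OpCodes.Ldc_I4, i);
            il.Emit(OpCodes.Ldelem_Ref);
            var pt = parameters[i].ParameterType;
            il.Emit(pt.IsValueType ? OpCodes.Unbox_Any : OpCodes.Castclass, pt);
        }

        il.Emit(OpCodes.Callvirt, method);

        // Box value-type returns; void returns null
        if (method.ReturnType == typeof(void))
            il.Emit(OpCodes.Ldnull);
        else if (method.ReturnType.IsValueType)
            il.Emit(OpCodes.Box, method.ReturnType);

        il.Emit(OpCodes.Ret);
        return (Func<object, object[], object>)dm.CreateDelegate(
            typeof(Func<object, object[], object>));
    }
}
```

The JIT compiles the dynamic method once, so repeated calls avoid the per-call interpretation cost entirely.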
@jkotas Would it make sense to change the existing `MethodInfo.Invoke`? Compatibility concerns, etc.?
> You can speed it up by precompiling the marshalling
Is there any concern with storing the "precompiled" state?
I'd like to play around with this 😄
> Would it make sense to change the existing methodInfo.Invoke?
It does not make sense to introduce new APIs to fix perf issues in the existing APIs. Otherwise, we would end up over time with Invoke, InvokeFast, InvokeFaster, InvokeFastest, ... .
> Compatibility concerns etc?
Yes, it takes extra care to ensure that you do not break anything.
> Is there any concern with storing the "precompiled" state?
I did not mean to AOT compile, just compile it in memory on demand. It can be regular DynamicMethod.
You can take a look how it is done in CoreRT: https://github.com/dotnet/corert/blob/master/src/Common/src/TypeSystem/IL/Stubs/DynamicInvokeMethodThunk.cs
> I did not mean to AOT compile, just compile it in memory on demand. It can be regular DynamicMethod.
I wasn't talking about AOT compilation; I was just asking if storing a dynamic method (or whatever the extra state is) on the MethodInfo would be a problem.
> You can take a look how it is done in CoreRT: https://github.com/dotnet/corert/blob/master/src/Common/src/TypeSystem/IL/Stubs/DynamicInvokeMethodThunk.cs
Thanks! Now I just need to grok it and turn it into C++.
BTW, are you suggesting we generate a dynamic method (via whatever mechanism is available) when creating MethodInfos, or would it be a first-time-you-invoke thing? (Probably the latter.)
> What's the difference between Delegate.CreateDelegate() -> delegate.Invoke vs methodInfo.Invoke?
<shameless plug> This intrigued me as well, so I wrote a few blog posts about it - see "Why is reflection slow?" (section 'How does Reflection work?'); "How do .NET delegates work?" also has some related info.
I saw that `MethodInfo.Invoke` also has to do security checks and parameter validation every time, which I guess adds to the cost.
> Thanks! Now I just need to grok it and turn it into C++.
You do not need to. It can be in C# just fine.
> whatever the extra state was on the MethodInfo would be a problem
RuntimeMethodInfo has 12 fields already. Adding a 13th one should not be a problem; and if it is, some of the existing 12 fields can be folded together.
Would this be something that is opt-in, or the default behaviour?
There may be some scenarios where the extra time needed to create the delegate, plus the space to store it, could outweigh the cost of just using methodInfo.Invoke().
Also something similar was asked before, see dotnet/runtime#6968
Right, it would need to be done only once the runtime sees the method invoked multiple times to be beneficial.
> There may be some scenarios where the extra overhead of time needed to create the delegate and then space to store it, could outweigh the cost of just using methodInfo.Invoke()
That's why I prefer the two-stage approach: get the MethodInfo, then prepare it to get something back that is optimized for invocation. You pay that cost explicitly and up front. It also gets around any compatibility concerns around MethodInfo.Invoke.
> Right, it would need to be done only once the runtime sees the method invoked multiple times to be beneficial.
We do something similar in our dependency injection system. After 2 invocations, we fire off a background thread for compilation so the next time it's faster.
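A minimal sketch of that invoke-count-then-compile pattern (all names here are hypothetical, and `Compile` is a stand-in for whatever codegen strategy is used - a DynamicMethod or compiled expression tree as discussed above):

```C#
using System;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;

// Tiered invoker: starts on the slow MethodInfo.Invoke path and, after a
// couple of calls, kicks off background compilation so later calls are fast.
sealed class TieredInvoker
{
    private readonly MethodInfo _method;
    private Func<object, object[], object> _fast; // null until compiled
    private int _calls;

    public TieredInvoker(MethodInfo method) => _method = method;

    public object Invoke(object target, object[] args)
    {
        var fast = Volatile.Read(ref _fast);
        if (fast != null)
            return fast(target, args);

        if (Interlocked.Increment(ref _calls) == 2)
        {
            // Compile off the hot path so no caller pays the codegen cost.
            Task.Run(() => Volatile.Write(ref _fast, Compile(_method)));
        }
        return _method.Invoke(target, args);
    }

    // Placeholder: substitute a real codegen strategy here.
    private static Func<object, object[], object> Compile(MethodInfo m)
        => (target, args) => m.Invoke(target, args);
}
```

The trade-off: methods invoked only once or twice never pay for codegen, while hot methods converge on the compiled path.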
> Thats why I prefer the 2 stage approach
Then this does not need to be built into the runtime. It can be a regular upstack NuGet package; e.g., the Thunk method can be an extension method.
Part of the reason for baking this into the runtime was so that the implementation could adapt on AOT platforms...
A NuGet package can adapt on AOT platforms too.
I think the hard work in https://github.com/aspnet/Common/blob/rel/2.0.0/shared/Microsoft.Extensions.ObjectMethodExecutor.Sources/ObjectMethodExecutor.cs could be turned into a NuGet package in the meantime.
Especially since it is dreaded `internal`, which one can't use without copy-and-pasting.
@Ciantic Wouldn't Reflection.Emit yield better performance than Expression, the latter of which is used internally in ObjectMethodExecutor?
@danielcrenna I am not the right person to evaluate the performance. However, since I wrote that, I was given advice that those source packages are usable; one just has to include them in one's own project. I think there is even a tool or command in dotnet to include those source packages in your own project.
There's a lot to unpack here, so I'll do my best to try. :)
First, if you know the signature of the method you're trying to invoke, the easiest codegen-free way to get a thunk to it would be via MethodInfo.CreateDelegate. So let's for now assume that this issue handles only the case where you need to handle methods of unknown signature.
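For example (reusing the `Controller` type from the snippet above), the known-signature case needs no codegen at all - an open instance delegate gives a strongly typed invoker with no boxing:

```C#
using System;

public class Controller
{
    public int Add(int a, int b) => a + b;
}

class Demo
{
    static void Main()
    {
        var addMethod = typeof(Controller).GetMethod(nameof(Controller.Add));

        // Open instance delegate: the first parameter becomes the receiver.
        var add = (Func<Controller, int, int, int>)addMethod.CreateDelegate(
            typeof(Func<Controller, int, int, int>));

        int sum = add(new Controller(), 1, 2); // 3
        Console.WriteLine(sum);
    }
}
```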
The runtime offers multiple features now that weren't available when aspnet first started using ref emit back in the MVC 1.0 days. Back then, one of our primary reasons for using ref emit was to avoid having exceptions wrapped in TargetInvocationException, as would occur during a normal MethodInfo.Invoke call. However, the runtime now offers BindingFlags.DoNotWrapExceptions to provide more granular control over this behavior. This removes the need for ref emit as a vehicle for preserving the original stack trace.
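A sketch of that flag in use (`Sample` and `Boom` are hypothetical; the flag itself is available since .NET Core 2.1):

```C#
using System;
using System.Reflection;

public class Sample
{
    public void Boom() => throw new InvalidOperationException("original");
}

class Demo
{
    static void Main()
    {
        MethodInfo boom = typeof(Sample).GetMethod(nameof(Sample.Boom));
        try
        {
            // Without DoNotWrapExceptions this would surface as a
            // TargetInvocationException wrapping the original exception.
            boom.Invoke(new Sample(), BindingFlags.DoNotWrapExceptions,
                binder: null, parameters: null, culture: null);
        }
        catch (InvalidOperationException ex)
        {
            // The original exception type and stack trace are preserved.
            Console.WriteLine(ex.Message);
        }
    }
}
```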
For environments without access to ref emit or where ref emit is interpreted (thus potentially slow), the APIs described at https://github.com/dotnet/runtime/issues/25959 can be used to check for this condition. This allows callers to know ahead of time whether using ref emit at all would result in reduced performance.
There are other outstanding issues to improve the performance of MethodInfo.Invoke. See for instance https://github.com/dotnet/runtime/issues/12832. When that PR comes through, all consumers of reflection should benefit regardless of which entry point they used.
Finally, it appears the thunk mechanism described here is opinionated. At an initial glance it seems the desire is for this thunk _not_ to provide support for in / ref / out parameters or type coercion. It's also not really defined how overload resolution would work. This would ultimately have the effect of creating a parallel reflection stack whose behaviors don't necessarily match the existing stack's behaviors. We would not be able to guarantee that the behaviors of this parallel stack will match the behaviors people will want 5 years down the road as new programming paradigms are introduced. I don't think this is a direction we want to go in the runtime.
My recommendation would be to continue to use ref emit if you're in a suitable environment and if it meets your performance needs. If ref emit is unavailable, try using the standard reflection APIs and passing whatever flags are appropriate for your scenario. If you need _different_ behaviors than typical reflection allows, that's a good candidate for creating a standalone package which implements your desired custom behaviors.
I mean, we may as well close this issue if the answer is to keep using ref emit 😄 .
Well, ref emit isn't the only option. If you're performing AOT compilation, you could emit the thunk directly into the compilation unit.
```C#
public class MyController
{
    public int Add(int a, int b)
    {
        /* user-written code */
    }

    internal static object <>k_CompilerGeneratedThunk_Add(object @this, object[] parameters)
    {
        return ((MyController)@this).Add((int)parameters[0], (int)parameters[1]);
    }
}
```
The runtime could use standard reflection to discover these thunks and link to them via MethodInfo.CreateDelegate with the common signature (object, object[]) -> object.
> Well, ref emit isn't the only option. If you're performing AOT compilation, you could emit the thunk directly into the compilation unit.
FWIW, .NET Native/CoreRT does pretty much that. Last time I was looking at it, reflection invoke in .NET Native was about 4x faster than CoreCLR.
The AOT generated stubs are a bit more complex than what's suggested here because they handle the annoying things like automatic widening, Type.Missing, and the like. It can be faster if we can avoid these conveniences.
I think the general idea behind this (but I could be wrong) was that MVC's MethodExecutor isn't using ref emit; it's using expressions. I replaced my use of it with ref emit and it is faster (on my machine, on my benchmarks, etc.). The question is whether it's worth applying.

My feeling is yes, because it's an area where it's not really possible to replace it externally without reinventing the wheel on a large portion of the model binding, and because benefits here apply to everyone who uses MVC. It's just as valid to say no, because it's fast enough already.
@MichalStrehovsky how are those stubs discovered at runtime?
Via the CoreRT-specific native AOT metadata.
Assuming https://github.com/dotnet/runtime/issues/45152 supersedes this.