I'm new here.
I've envisioned a SQL processing pipeline. It is roughly as follows:
SQL Request -> Get Response -> Store Buffer -> Convert to TClass -> Return TClass
```C#
SqlCommand cmd = new SqlCommand(sql, conn, typeMappings); // typeMappings: SqlType -> EntityType
TClass entity = cmd.EntityReader();
```
```C#
public interface IDbCommand<T>
{
    T EntityReader();
}
```
Is this proposal feasible?
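To make the proposal concrete, here is a minimal sketch of how such reader-to-entity mapping could be prototyped today on top of the existing DbDataReader API using reflection. The ToEntities extension method is a hypothetical name and not part of ADO.NET:
```C#
// Hypothetical sketch only: maps each row to a new T by matching column names
// to writable properties (case-insensitive). Not an existing ADO.NET API.
using System;
using System.Collections.Generic;
using System.Data.Common;
using System.Reflection;

public static class DbDataReaderExtensions
{
    public static IEnumerable<T> ToEntities<T>(this DbDataReader reader) where T : new()
    {
        // Index the writable properties of T by name once, up front.
        var properties = new Dictionary<string, PropertyInfo>(StringComparer.OrdinalIgnoreCase);
        foreach (PropertyInfo property in typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (property.CanWrite)
                properties[property.Name] = property;
        }

        while (reader.Read())
        {
            var entity = new T();
            for (int i = 0; i < reader.FieldCount; i++)
            {
                // Copy each non-null column into the matching property, if any.
                if (!reader.IsDBNull(i) && properties.TryGetValue(reader.GetName(i), out PropertyInfo prop))
                {
                    prop.SetValue(entity, reader.GetValue(i));
                }
            }
            yield return entity;
        }
    }
}
```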
This is the sort of task that you would usually use something like EF for. Is there a particular drive to have it at the lower level?
@Wraith2 Yes. EF and Dapper use Emit to handle the mapping of entity classes. If ADO.NET could handle it directly, performance would be improved. On that basis, programmers could focus more on developing other features.
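For context, this is roughly what the Dapper mapping mentioned above looks like from the caller's side; the Blog class, table, and connection string are made-up examples:
```C#
// Dapper's Query<T> materializes each row into a Blog by matching column names
// to property names (using emitted mapping code internally).
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class DapperExample
{
    public static List<Blog> LoadBlogs(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Dapper opens the connection if needed and maps the result set for us.
            return connection.Query<Blog>("SELECT Id, Name FROM Blogs").ToList();
        }
    }
}
```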
I'm not sure that's true. Those libraries, and others, go to a lot of trouble to emit good code to give you fast mapping and at any other level the same work would have to be done to achieve the same performance. There is no secret sauce in the lower level ADO providers that would give them better performance doing the same work.
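To illustrate the point about emitting good mapping code, here is a simplified sketch of the technique using expression trees rather than raw IL: a mapping delegate is compiled once for a given reader schema so that each row is materialized without per-row reflection. CompiledMapper and its Create method are hypothetical names, not an API of any of these libraries:
```C#
// Simplified stand-in for what Dapper/EF do with IL emit or expression trees:
// build the row-to-entity delegate once, then reuse it for every row.
using System;
using System.Collections.Generic;
using System.Data.Common;
using System.Linq.Expressions;
using System.Reflection;

public static class CompiledMapper
{
    public static Func<DbDataReader, T> Create<T>(DbDataReader reader) where T : new()
    {
        ParameterExpression readerParam = Expression.Parameter(typeof(DbDataReader), "reader");
        ParameterExpression entityVar = Expression.Variable(typeof(T), "entity");
        var body = new List<Expression> { Expression.Assign(entityVar, Expression.New(typeof(T))) };

        MethodInfo getValue = typeof(DbDataReader).GetMethod(nameof(DbDataReader.GetValue), new[] { typeof(int) });

        for (int i = 0; i < reader.FieldCount; i++)
        {
            PropertyInfo property = typeof(T).GetProperty(
                reader.GetName(i),
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
            if (property == null || !property.CanWrite)
                continue;

            // entity.Property = (PropertyType)reader.GetValue(i);
            // (null/DBNull handling omitted for brevity)
            Expression value = Expression.Convert(
                Expression.Call(readerParam, getValue, Expression.Constant(i)),
                property.PropertyType);
            body.Add(Expression.Assign(Expression.Property(entityVar, property), value));
        }

        body.Add(entityVar); // the block's last expression is the return value
        BlockExpression block = Expression.Block(new[] { entityVar }, body);
        return Expression.Lambda<Func<DbDataReader, T>>(block, readerParam).Compile();
    }
}
```
A caller would build the delegate once (for example, `var map = CompiledMapper.Create<Blog>(reader);`) and then invoke `map(reader)` for each row, paying the reflection cost only once per query shape.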
This is a great idea. With native reader-to-entity support, what would Dapper still be needed for? I support this.
@guodf thank you.
Support, support.
I completely agree with what @Wraith2 said.
We actually spent some time experimenting in https://github.com/aspnet/dataaccessperformance with alternative data access APIs in the hope that we could identify alternative interfaces that could enable building faster object materialization code in .NET, but we didn't find anything in the current ADO.NET APIs that fundamentally limits the performance of such code.
Aside from a few improvements we did identify, which we are now tracking in this repo under the area-System.Data label, our current thinking is that continuing to optimize the existing providers and APIs is a better investment than trying to come up with completely new APIs for database access.
In fact, learning from those experiments and focusing on optimizing the existing providers has helped us significantly improve database access performance. This is reflected in the TechEmpower benchmarks: in round 17, ASP.NET Core using Npgsql is in 6th position, compared to position 124 in round 15, about 8 months before.
Of course, anyone in the community can create their own experiments, and if any new information is discovered through them, we can revisit this.
On the other hand, there are alternative motivations for adding object materialization capabilities to ADO.NET, such as improving the usability of ADO.NET without requiring additional components like Dapper, EF Core, or another O/RM. There are of course pros and cons to that, and it is a completely different discussion.
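To illustrate the usability argument, this is the kind of hand-written materialization code that a built-in feature would save callers from writing; the Blog class and query are made-up examples:
```C#
// Manual ADO.NET materialization: every column has to be read and assigned by hand.
using System.Collections.Generic;
using System.Data.SqlClient;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ManualMapping
{
    public static List<Blog> LoadBlogs(string connectionString)
    {
        var blogs = new List<Blog>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Blogs", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Column ordinals and types are wired up manually per query.
                    blogs.Add(new Blog
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return blogs;
    }
}
```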
cc @roji, @ajcvickers, @sebastienros
Possible dupe of dotnet/runtime#25739, and I agree very much with all that @Wraith2 and @divega wrote above. I do think there's motivation for looking into materialization capabilities inside ADO.NET, but not really for perf.
I understand that performance is not a compelling argument here. At present, solving the object-relational impedance mismatch is the more valuable goal.
Closing as this is not something we plan to implement.