@ajcvickers personally, I'm less frustrated than I am amused. Judging by the duration, depth, and repetitiveness of these conversations over the years, it appears that debating customers over the expected behavior is more the goal than meeting customer (developer) expectations and/or satisfaction.
One can't help but wonder whether the cumulative time spent debating developers on the behavior could have resolved this issue 2 or 3 times over at this point.
I've recognized that the industry as a whole has a long way to go when it comes to properly asserting against these kinds of qualitative measures; hence my creating my business to aid in this process.
I see what you were saying when you pointed to references within the object model. You're saying that Gen 2 GCs are not caused by the fixup behavior; that's a separate issue that should be addressed, as @AndriySvyryd mentioned.

At the end of last year I worked with the .NET Core team to resolve LOH allocations within String.Split. That issue was quickly addressed once its severity was determined.
Operations like this that cause multiple LOH allocations are high severity. Are you saying that EF Core causing LOH allocations is of no consequence?
Again: Why isn't your existing performance testing process picking up these obvious GC issues?
_Originally posted by @windhandel in https://github.com/dotnet/efcore/issues/11564#issuecomment-663270816_
> Are you saying that EF Core causing LOH allocations is of no consequence?
@windhandel EF Core has no control over what gets allocated into the LOH. If you think the behavior you are seeing is incorrect, open an issue on https://github.com/dotnet/runtime
@AndriySvyryd without symbols loaded into PerfView for the necessary DLLs, you can't tell the source of the allocations.
Without identifying the source, you can't say whether this issue is or isn't caused by EF Core.
As a developer, you have complete control over whether things like byte arrays are allocated into the LOH or whether they use a more relevant data type such as `Span<T>`.
> Without symbols loaded into PerfView for the necessary DLLs, you can't tell the source of the allocations.
Correct; as I've said before, we would need a repro project to investigate this.
> As a developer, you have complete control over whether things like byte arrays are allocated into the LOH or whether they use a more relevant data type such as `Span<T>`.
The data structures used when reading the data returned by a query are managed by the ADO.NET providers; if you are using SQL Server, this would be Microsoft.Data.SqlClient. That's why I suggested creating a repro without EF Core.
> As a developer, you have complete control over whether things like byte arrays are allocated into the LOH or whether they use a more relevant data type such as `Span<T>`.
No, you don't have control over where things are allocated; the runtime has rules that it follows. If you create a large object then it'll be put on the LOH, end of story. Spans are pretty irrelevant here because they'd have to point at something, and that something would still be a heap-allocated byte array unless you go really hardcore and write your own MemoryManager implementation.
Reading through the original thread, it seems you were unable to explain the discrepancy between the allocated bytes reported in Management Studio and the PerfView trace you saw. I probably can, if you can provide an isolated reproduction that I can profile and investigate. It is highly unlikely that EF is doing anything with large objects, because that just isn't the sort of thing it deals with. As @AndriySvyryd points out, that's much more likely to be the SqlClient driver. I've made numerous performance enhancements to that library, and if, as already stated, you can provide me with an isolated reproduction of your problem, I can investigate and possibly identify further improvements.
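For illustration, the LOH behavior described above can be observed directly. A minimal sketch (the array sizes are arbitrary; 85,000 bytes is the documented LOH threshold, and LOH objects report as generation 2):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Arrays below the 85,000-byte threshold go on the small object heap.
        var small = new byte[1_000];
        // Arrays at or above the threshold go straight to the LOH,
        // which the GC reports as generation 2.
        var large = new byte[85_000];

        Console.WriteLine(GC.GetGeneration(small)); // 0 — small object heap
        Console.WriteLine(GC.GetGeneration(large)); // 2 — LOH

        // A Span<byte> is only a view over existing storage; slicing it
        // neither copies the data nor moves the backing array off the LOH.
        Span<byte> view = large.AsSpan(0, 100);
        Console.WriteLine(view.Length); // 100
    }
}
```

This is what "the runtime has rules that it follows" means in practice: the allocation site, not the consuming code, decides which heap the object lands on.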
Thanks, @Wraith2, my point on the LOH was that the developer has complete control over whether the objects they create are large and what the duration/durability of such an object is (whether it's pooled or not). I'm aware of the 85k threshold; that's what we must work around to create optimizations that don't drastically impact performance.
I'll try to create a repro with one of the pre-existing samples for the previous issue and get back to you.
Kind regards,
Jeff Fischer
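The pooling mentioned above is the standard way to blunt repeated LOH allocations: rather than allocating a fresh large buffer per operation, rent one from `ArrayPool<T>`. A minimal sketch (the 1 MB size is illustrative):

```csharp
using System;
using System.Buffers;

class PoolDemo
{
    static void Main()
    {
        // Rent reuses a cached buffer where possible instead of allocating
        // a new LOH object on every call; it may return a larger array
        // than requested.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1_000_000);
        try
        {
            Console.WriteLine(buffer.Length >= 1_000_000); // True
            // ... fill and use buffer[0..1_000_000) here ...
        }
        finally
        {
            // Returning the buffer lets subsequent Rent calls reuse
            // the same storage.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```

Note this reduces allocation churn on the LOH; the pooled buffers themselves still live there once they exceed the threshold.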
> my point on the LOH was that the developer has complete control over whether the objects they create are large and what the duration/durability of such an object is
This is not true. If you ask the SQL driver for a 2 MB chunk of text or binary, then it has to be a contiguous allocation to fulfil your request. That's going to go on the LOH.
Now, if you want to use the driver in a way that avoids those allocations, use streams in sequential mode, call GetChars(), etc. You can avoid a lot of them that way, but that's the user code's responsibility.
> Now if you want to use the driver in a way that avoids those allocations then use streams in sequential mode, call GetChars() etc then you can avoid a lot of them but that's the user code's responsibility.
Using SequentialAccess is tracked by https://github.com/dotnet/efcore/issues/885
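For reference, the streaming approach @Wraith2 describes looks roughly like the following with Microsoft.Data.SqlClient. This is a sketch, not a definitive implementation: the connection string, table, and column names are placeholders, and an 8 KB chunk size is assumed.

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

// Read a large varbinary column in small chunks instead of letting the
// driver materialize it as one contiguous (LOH-sized) byte[].
using var conn = new SqlConnection("<connection string>");
conn.Open();

using var cmd = new SqlCommand("SELECT Payload FROM Blobs WHERE Id = @id", conn);
cmd.Parameters.AddWithValue("@id", 1);

// SequentialAccess tells the driver to stream column data forward-only
// rather than buffering the whole row in memory.
using var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
if (reader.Read())
{
    var buffer = new byte[8192]; // stays well under the 85,000-byte LOH threshold
    long offset = 0;
    long read;
    while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
    {
        // process buffer[0..read) here
        offset += read;
    }
}
```

The same pattern applies to large text columns via GetChars(), or via GetStream()/GetTextReader() for a stream-based API.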
@AndriySvyryd, @ajcvickers, @roji: I apologize for my tone on the prior thread. I wasn't intending to be argumentative and confrontational, although, in hindsight, I can completely see how it came off that way.
@Wraith2 what you're saying about the driver does not negate my statement. Both can be true; my statement is always true. Developers always have control over their allocations.
However, if what you're saying is true, then possibly what we're discussing is a deficit in the driver itself, for allocating memory in a way that is non-performant and does not necessarily meet performance requirements (assuming we define them as such).
It's best to delineate between how difficult an issue is to resolve and the truth of its cause.
I think what you're saying is: "If this issue goes back to the driver, it will be difficult to resolve." That makes a lot of sense, but doesn't change the cause.
Let's get that test case and see what the numbers are like; then we can talk specifics wherever is most appropriate.
@Wraith2 - that makes sense. I'll try to have something to you by EOW. Thanks for the help.
EF Team Triage: Closing this issue as the requested additional details have not been provided and we have been unable to reproduce it.
BTW this is a canned response and may have info or details that do not directly apply to this particular issue. While we'd like to spend the time to uniquely address every incoming issue, we get a lot of traffic on the EF projects and that is not practical. To ensure we maximize the time we have to work on fixing bugs, implementing new features, etc., we use canned responses for common triage decisions.