Such as arenas, user-defined heaps/regions, reference counting, manual memory management, newer GC technologies like Java's ZGC or Azul's C4 (Zing), and so on. Are any of these in the GC team's plan?
/cc @Maoni0 @VSadov
we have already tried some of these and they did not show proven benefits in our environment, which is why we did not actually ship them. for example, it was difficult to make arenas work because, due to the way our framework is written, there was simply too much "leaking" from the arenas (too many references into the arenas from the GC heap).
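To make the "leaking" concrete, here is a minimal sketch using a hypothetical `Arena` type (an illustrative API, not anything CoreCLR shipped): an object allocated in an arena ends up referenced from ordinary GC-heap state, so the arena's memory can no longer be thrown away wholesale when the arena is disposed.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical Arena type, purely illustrative -- CoreCLR never shipped one.
class Arena : IDisposable
{
    readonly List<object> _objects = new List<object>();

    public T Allocate<T>() where T : new()
    {
        var obj = new T();
        _objects.Add(obj);
        return obj;
    }

    // In a real arena this would release the whole region in one shot.
    public void Dispose() => _objects.Clear();
}

class Request { public string Url; }

static class ArenaLeakSketch
{
    // Ordinary GC-heap state that outlives any single arena.
    static readonly List<Request> Cache = new List<Request>();

    static void Main()
    {
        using (var arena = new Arena())
        {
            var req = arena.Allocate<Request>();
            req.Url = "/index";

            // The "leak": a long-lived GC-heap object now references an
            // arena-allocated object, so the arena's memory cannot safely be
            // reclaimed when the arena is disposed.
            Cache.Add(req);
        }

        Console.WriteLine(Cache[0].Url);
    }
}
```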
and yes, we do have a concurrent compacting GC on our roadmap which will be used for very low latency scenarios. obviously we will continue to improve the current GCs we have as they are applicable in a wide range of scenarios.
@Maoni0
I have contributed some ideas about improving the GC or reducing heap allocations. Please take a further look and consider them:
AWE (Address Windowing Extensions) provides a very fast remapping capability: remapping is done by manipulating virtual memory tables, not by moving data in physical memory.
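As an illustration of that claim, the sketch below calls the Windows AWE APIs (`AllocateUserPhysicalPages`, `MapUserPhysicalPages`) from C# to move a set of physical pages from one virtual address to another without copying any bytes; the page count, page size, and error handling are simplified for brevity.

```csharp
using System;
using System.Runtime.InteropServices;

// The same 16 physical pages are mapped at virtual address A, written through A,
// then remapped at virtual address B by updating page tables only -- no data copy.
// Requires Windows and the "Lock pages in memory" (SeLockMemoryPrivilege)
// privilege; 4 KB pages are assumed.
static class AweRemapSketch
{
    const uint MEM_RESERVE = 0x2000, MEM_PHYSICAL = 0x00400000, PAGE_READWRITE = 0x04;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize, uint flAllocationType, uint flProtect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AllocateUserPhysicalPages(IntPtr hProcess, ref UIntPtr NumberOfPages, UIntPtr[] PageArray);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool MapUserPhysicalPages(IntPtr VirtualAddress, UIntPtr NumberOfPages, UIntPtr[] PageArray);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentProcess();

    static void Main()
    {
        const int pages = 16;
        var frames = new UIntPtr[pages];
        var count = (UIntPtr)(uint)pages;
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), ref count, frames))
            throw new InvalidOperationException("AWE allocation failed (missing SeLockMemoryPrivilege?)");

        var bytes = (UIntPtr)(uint)(pages * 4096);
        IntPtr regionA = VirtualAlloc(IntPtr.Zero, bytes, MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        IntPtr regionB = VirtualAlloc(IntPtr.Zero, bytes, MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

        MapUserPhysicalPages(regionA, count, frames);   // map the physical pages at A
        Marshal.WriteInt32(regionA, 42);                // write through A

        MapUserPhysicalPages(regionA, count, null);     // unmap from A
        MapUserPhysicalPages(regionB, count, frames);   // remap the same pages at B

        Console.WriteLine(Marshal.ReadInt32(regionB));  // prints 42; nothing was copied
    }
}
```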
Escape analysis: https://github.com/dotnet/runtime/issues/4584. I think the C# compiler should do simple escape analysis, so that the generated IL is already optimized and the JIT need not do this expensive work. In addition, an AOT compiler should do full escape analysis, because it can see all the code it compiles.
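A small example of the kind of allocation such an analysis could elide; the type and method names are made up for illustration:

```csharp
using System;

class Counter { public int Value; }

static class EscapeAnalysisSketch
{
    // 'c' never leaves this method: an escape-analysis pass could allocate it on
    // the stack (or replace it with a plain local int) instead of the GC heap.
    static int CountEvens(int[] data)
    {
        var c = new Counter();
        foreach (var x in data)
            if ((x & 1) == 0) c.Value++;
        return c.Value;
    }

    // Here the Counter is returned to the caller, so it escapes and must stay
    // on the GC heap.
    static Counter MakeCounter() => new Counter();

    static void Main()
    {
        Console.WriteLine(CountEvens(new[] { 1, 2, 3, 4 }));  // 2
        Console.WriteLine(MakeCounter().Value);               // 0
    }
}
```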
Optimize the reclaim time of objects with finalizers: https://github.com/dotnet/runtime/issues/4613. I think the current behavior is close to a bug, because it delays the reuse of freed memory for no reason. What I proposed should be considered a way to fix the "bug", rather than an optimization.
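For reference, a sketch of the current two-collection behavior the issue is about: memory held by a finalizable object (including everything it references) is not reclaimed by the collection that finds it dead, but only after its finalizer has run and a later collection occurs.

```csharp
using System;

class WithFinalizer
{
    // Memory that stays alive until after finalization, even though the object
    // is already unreachable after the first collection.
    public byte[] Payload = new byte[1_000_000];

    ~WithFinalizer() { }  // an empty finalizer is enough to delay reclamation
}

static class FinalizerReclaimSketch
{
    static void CreateGarbage() => new WithFinalizer();

    static void Main()
    {
        CreateGarbage();

        GC.Collect();                   // 1st GC: the object is dead, but it is only
                                        // queued for finalization; the 1 MB payload survives
        GC.WaitForPendingFinalizers();  // the finalizer thread runs ~WithFinalizer
        GC.Collect();                   // 2nd GC: only now is the memory reclaimed
    }
}
```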
My newest proposal: https://github.com/dotnet/runtime/issues/33960. Allow stackalloc of object arrays.
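A sketch of what the language allows today versus what the proposal asks for; the commented-out line is the part that currently does not compile:

```csharp
using System;

static class StackallocSketch
{
    static void Main()
    {
        // Allowed today: stackalloc with an unmanaged element type.
        Span<int> numbers = stackalloc int[8];
        numbers[0] = 42;
        Console.WriteLine(numbers[0]);

        // Not allowed today: an element type that contains object references.
        // The line below does not compile because 'string' is a managed type;
        // dotnet/runtime#33960 asks for a way to express this kind of allocation.
        // Span<string> names = stackalloc string[8];
    }
}
```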
@ygc369 To make a particular proposal actionable, I would suggest studying the scenario in more detail and providing examples and measurements that demonstrate its viability and the improvements.
Ideally it would be a PR with a prototype of the proposed changes. A smaller-scale mock-up could be convincing too.
It is very common that an idea is not practical when more details are considered.
Without evidence that a feature has good potential for the CLR, listing ideas as issues does not add a lot to what is already known from books and research papers.
Any news on CoreCLR's local GC? This talk by @kkokosa about custom GCs states that the whole topic is still very premature. For example, the abstracted interface each GC has to implement is pretty tied to the current default GC. Also, there's not really much documentation about how to implement such a custom GC.
I imagine that local GCs can indirectly solve some of the issues @ygc369 is concerned about. What is the status on that, @Maoni0, @VSadov?
I wouldn't say it's "pretty tied to the current default GC"; rather, the VM side has certain expectations of the GC simply because the GC has existed in the runtime for many years. there's definitely merit in making this easier. I think there need to be specific requirements coming from folks who are seriously thinking of using coreclr to experiment with different GC techniques, so we can work with them to make our runtime a friendlier environment for such experiments. I know research folks who are interested and have pointed them to LocalGC; my expectation is they will let me know what more needs to be abstracted. meanwhile, if you are interested, please file a separate issue and we can work with you on specific issues. as far as documentation goes, I think @kkokosa did a pretty good job explaining this in his blog; we could look into adopting some of that into our docs.
I would love to participate in such a LocalGC "work group", and indeed I was thinking about writing/improving some docs; maybe it's high time I did. If only the day had more hours...
From my perspective, the most important missing part right now is the 'object scanning API' I mentioned in https://github.com/dotnet/runtime/issues/12809, and I have a PR to prepare for it.
great! let me sync up with the research folks and see where they are at and I will keep you posted.