Version: 2.0.0-dev.65.0
The following script consumes ~200MB of system memory during runtime:
import 'dart:async';
Future main() async {
  await new Future.delayed(Duration(seconds: 6000));
}
With the following import of 'package:analyzer' added, Observatory reports that the script peaks at ~800MB of memory consumption and settles down to ~700MB:
import 'dart:async';
import 'package:analyzer/analyzer.dart';
Future main() async {
  await new Future.delayed(Duration(seconds: 6000));
}
Triggering a manual garbage collection event has no impact.
cc @mraleph @dgrove
In Dart 2 mode the VM runs the front end in an isolate called the 'kernel isolate'. This isolate runs the front end code and compiles sources into the kernel format. When you add package:analyzer, all of that code is compiled by the front end, which keeps some internal state around to support incremental compilation (used for hot reload). The Dart heap of this kernel isolate accounts for the big jump in the memory footprint of the process that you are seeing.
You can use 'dart --print-metrics test.dart' with and without package:analyzer to see the Dart heap sizes of the kernel isolate.
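For example (test.dart and test_analyzer.dart here are just illustrative names for the two variants of the script above):
dart --print-metrics test.dart
dart --print-metrics test_analyzer.dart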
https://github.com/dart-lang/sdk/issues/32953#issuecomment-384006430
currently the VM completely discards CFE state between URI loading requests because it has no way of knowing whether there will ever be another URI loading request that might benefit from the cached state - and discarding the state frees a significant chunk of memory.
Has this changed? What benefit are non-flutter users seeing from this memory overhead?
I am not sure I understand the relevance of that comment in this context. The comment talks about spawning another isolate, i.e., the current isolate dies and another isolate is spawned. Yes, in that case the state in the front end for the first isolate is discarded completely.
Flutter users do not see this overhead, as the front end does not run in the VM for Flutter.
The benefits for command line Dart VM users are:
- ability to run from source
- ability to hot-reload app
If the command line offering has a wrapper like 'flutter tools', then the CFE could be decoupled from the VM and this overhead would not exist. It would also mean the URI specified for spawnUri should point to a 'kernel file' and not to the sources.
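As a rough sketch of what that would look like from the user's side (app.dill is only an illustrative name for a kernel file produced ahead of time):
import 'dart:isolate';

Future<void> main() async {
  // Spawn the isolate from a precompiled kernel file rather than from
  // Dart sources, so the running VM needs no front end (CFE) state.
  final exit = ReceivePort();
  await Isolate.spawnUri(
    Uri.file('app.dill'), // kernel file, not a .dart source file
    <String>[],
    null,
    onExit: exit.sendPort,
  );
  await exit.first; // wait for the spawned isolate to finish
}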
I am not sure I understand the relevance of that comment in this context.
The context is that my understanding was that we _weren't_ holding memory for this type of thing, which made the feature request intractable. Now I'm learning that we _are_ holding memory, but it can't be used across isolates, only within an isolate?
ability to run from source
Why does the memory need to be held after parsing has completed and we're running for this use case?
ability to hot-reload app
Do we have documentation on how to do this with the command line VM outside of Flutter? Would it be possible to disable this feature and buy back some of the memory?
The context is that my understanding was that we weren't holding memory for this type of thing, which made the feature request intractable. Now I'm learning that we are holding memory, but it can't be used across isolates, only within an isolate?
Yes, it is not currently used across isolates. Each isolate gets its own independent state.
Why does the memory need to be held after parsing has completed and we're running for this use case?
It is held so that we can do an incremental compilation of just the changes when doing hot reloads.
Do we have documentation on how to do this with the command line VM outside of Flutter?
The Observatory debugger has a 'reload' command, and similarly IntelliJ and VSCode have reload commands that allow you to use this feature. In addition, the 'expression evaluation' inside the debugger is also done against this state.
Would it be possible to disable this feature and buy back some of the memory?
We have a build option that produces a version of the VM without any of the front end code in it. This version of the VM can only accept 'kernel files' as input. It can be built using these steps:
tools/gn.py --exclude-kernel-service -mproduct -ax64
tools/build.py -mproduct -ax64 runtime
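To actually use such a build, you first compile the program to a kernel file and then pass that file to the kernel-only VM. Something along these lines should work, though the exact flag names and output directory depend on the SDK version and build configuration:
dart --snapshot=app.dill --snapshot-kind=kernel app.dart
out/ProductX64/dart app.dill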
If I understand correctly, there is nothing special about package:analyzer, other than that it is large?
@a-siva To me it seems that most users of the command line VM actually don't do hot reloads. It seems that only our tests do that currently - the only external way to initiate hot reload is to send a vmservice request, which is not wired to any IDE outside of Flutter. So it seems we are just wasting memory if you are running from source. The original implementation of kernel-service did only one-shot compilations and completely discarded compiler state between runs, I think we should go back to this. We should be able to create an incremental compiler lazily when we need hot reload.
We also seem to start kernel-service very eagerly - even when running from the snapshot, which seems like a waste too.
Additionally, we should probably shut down kernel-service after some timeout once it has been used - because most command line usages from source only need it once.
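To illustrate the lazy approach proposed above, here is a minimal sketch; all class and method names are hypothetical and do not correspond to the actual kernel-service code:
import 'dart:typed_data';

/// Hypothetical sketch only - not the real kernel-service API.
/// The idea: do one-shot compilations by default and keep no state,
/// creating (and retaining) incremental-compiler state lazily the
/// first time a hot reload is actually requested.
class LazyKernelService {
  _IncrementalState? _incremental; // absent until hot reload is used

  Uint8List compileOnce(Uri entryPoint) {
    // One-shot compilation: produce kernel bytes, retain nothing.
    return _compileToKernel(entryPoint);
  }

  Uint8List hotReload(Uri entryPoint) {
    // Pay the memory cost of incremental state only on the first reload.
    _incremental ??= _IncrementalState(entryPoint);
    return _incremental!.recompileChanged();
  }

  Uint8List _compileToKernel(Uri entryPoint) {
    // Stand-in for a full front-end (CFE) invocation.
    return Uint8List(0);
  }
}

class _IncrementalState {
  _IncrementalState(this.entryPoint);

  final Uri entryPoint;

  Uint8List recompileChanged() {
    // Stand-in for incremental recompilation of only the changed files.
    return Uint8List(0);
  }
}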
@natebosch The comment you are quoting was based on my knowledge of how kernel-service was implemented originally. I completely missed that at some point we started keeping the whole compiler instance around (which does seem wasteful, resource consumption-wise).
Agreed with @mraleph that compiler state should be discarded immediately once the first run is done. The memory overhead is not justifiable given that hot reload isn't supported in practice outside of Flutter.
Is it possible to simply hide the current functionality behind a flag, --enable-hot-reload or something similar?
I like the flag approach, and that it is off by default. I think an argument can be made that hot reload is a 'development'-only option. This is opaque to Flutter users because the rest of the tooling manages the flag. VM developers can opt in, but the majority of use cases are going to be 'production' command line invocations. Requiring a user of a Dart CLI to both know about and use an opt-out flag would be hard to communicate to users.
I went and checked - starting a sample Aqueduct app now takes 2 GB of memory.
In Observatory I can see 12 application isolates with heaps of around 10 MB each, and a kernel isolate with 1.4 GB of old-space.
We definitely need to reconsider the current setup with the kernel isolate, because all batch mode applications will severely suffer from this, including our own batch mode tools like analyzer, build_runner, DDC workers, etc.
Marking this P0 and assigning Dart 2 milestone.
I am tentatively assigning to @a-siva because it is P0 for triage.
@a-siva To me it seems that most users of the command line VM actually don't do hot reloads. It seems that only our tests do that currently - the only external way to initiate hot reload is to send a vmservice request, which is not wired to any IDE outside of Flutter. So it seems we are just wasting memory if you are running from source. The original implementation of kernel-service did only one-shot compilations and completely discarded compiler state between runs, I think we should go back to this. We should be able to create an incremental compiler lazily when we need hot reload.
That is not true; the Observatory debugger has the mechanism to do hot reloads and it works today. I am surprised that IntelliJ does not have this wired up.
We also get a fairly good amount of hot reload scenario testing this way, which we don't get in other cases.
I like the flag approach, and that it is off by default. I think an argument can be made that hot reload is a 'development'-only option. This is opaque to Flutter users because the rest of the tooling manages the flag. VM developers can opt in, but the majority of use cases are going to be 'production' command line invocations. Requiring a user of a Dart CLI to both know about and use an opt-out flag would be hard to communicate to users.
I like this suggestion, which specifies that the VM is running in 'production' mode and no 'development' support is needed.
Lowering priority to P1 as I don't think this is a P0.
Is this really required for Dart2Stable, or could it wait for a patch release?
Is this really required for Dart2Stable, or could it wait for a patch release?
The memory usage impact is definitely felt by our users and could block some from upgrading.
Also, if we move functionality behind a flag it would be technically breaking.
I think this could wait for post-Dart2Stable (I don't think it changes the timeline for the flag, just whether or not it's blocking the release).
@dgrove in its current form it increases the memory consumption of apps that use isolates spawned from source _by hundreds of megabytes per isolate_ - and this memory cost is completely opaque to the user. Given that the fix is _very_ straightforward and that the impact is severe, I think it is fair to do this before Dart 2 stable.
+1 to the proposed solution of having an off-by-default flag for enabling the hot reload behavior.
+1 to keeping this issue in stable given the huge regression in memory usage.
OK, let's leave this in Dart2Stable.
@a-siva are you the right assignee for this?
Can this be closed now that _<wrong link>_ has landed?