In a nutshell: since application loading time is one of the areas that needs improvement on Android,
we've made a POC implementation that takes advantage of V8's startup snapshots feature to see what can be achieved with it. The results are quite promising: we may gain literally more than a second of improvement by snapshotting all the modules.
Due to the specifics of the V8 API, we need to bundle all of the modules' JavaScript into one single file and pass it to the V8::CreateSnapshotDataBlob method. When making the snapshot, V8 parses, compiles and runs this script in a new context and then saves the state of the heap into a binary representation. On subsequent application runs, this binary file can be used to load the whole representation of the modules directly into memory.
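For illustration, here is a minimal, hedged sketch of what such a snapshot-generation step could look like against the V8 5.x embedder API. The file names (`bundle.js`, `snapshot.blob`) and the standalone `main()` are assumptions made for the example, not the actual {N} tooling:

```cpp
// Minimal sketch: produce a startup snapshot from a single bundled script.
// Assumes a V8 5.x-era build where V8::CreateSnapshotDataBlob() is available.
#include <fstream>
#include <iterator>
#include <string>

#include <libplatform/libplatform.h>
#include <v8.h>

int main() {
  v8::V8::InitializeICU();
  v8::Platform* platform = v8::platform::CreateDefaultPlatform();
  v8::V8::InitializePlatform(platform);
  v8::V8::Initialize();

  // "bundle.js" stands in for the single file containing all bundled modules.
  std::ifstream in("bundle.js");
  std::string script((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

  // V8 parses, compiles and runs the script in a fresh context and then
  // serializes the resulting heap into a binary blob.
  v8::StartupData blob = v8::V8::CreateSnapshotDataBlob(script.c_str());

  bool ok = blob.data != NULL;
  if (ok) {
    std::ofstream out("snapshot.blob", std::ios::binary);
    out.write(blob.data, blob.raw_size);
    delete[] blob.data;  // the caller owns the returned buffer
  }

  v8::V8::Dispose();
  v8::V8::ShutdownPlatform();
  delete platform;
  return ok ? 0 : 1;
}
```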
Here is my proposal for taking advantage of this feature:

- The snapshot is CPU-architecture dependent. Hence, if we want to distribute a pre-generated version of the snapshot, we will need to package three files, built for the three available architectures: armeabi-v7a, x86 and arm64-v8a. The average size of one file is ~3 MB, but it compresses very well and an archived version is about 400 KB. This is the most efficient way performance-wise, and it adds a further optimization by skipping the initial extraction of numerous JS files. The modules could be shipped as two packages, tns-core-modules and tns-core-modules-snapshot, or we may package everything into one package.
- [x] Enable the Android Runtime to consume such a BLOB directly, depending on the current CPU architecture (a rough consumption sketch follows the layout below). We may use the same convention as for the native part of the Runtime itself, for example:
    └── snapshot
        ├── armeabi-v7a
        │   └── snapshot.blob
        ├── arm64-v8a
        │   └── snapshot.blob
        └── x86
            └── snapshot.blob
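A rough sketch of how the runtime side could consume such a per-ABI blob is shown below. The directory layout matches the proposal above, while the helper name, the static buffer and the use of `NewDefaultAllocator()` are assumptions for the example (older V8 versions require a custom `ArrayBuffer::Allocator`):

```cpp
// Hedged sketch: pick the snapshot blob for the current ABI and hand it to V8.
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

#include <v8.h>

v8::Isolate* CreateIsolateFromSnapshot(const std::string& appDir,
                                       const std::string& abi) {
  // Follows the proposed layout: snapshot/<abi>/snapshot.blob,
  // where abi is e.g. "armeabi-v7a", "arm64-v8a" or "x86".
  std::ifstream in(appDir + "/snapshot/" + abi + "/snapshot.blob",
                   std::ios::binary);

  // The blob must outlive the isolate; a static buffer keeps the sketch short.
  static std::vector<char> buffer((std::istreambuf_iterator<char>(in)),
                                  std::istreambuf_iterator<char>());
  static v8::StartupData blob = {buffer.data(),
                                 static_cast<int>(buffer.size())};

  v8::Isolate::CreateParams params;
  // V8 deserializes the heap from the blob instead of re-parsing the bundle.
  params.snapshot_blob = &blob;
  // Available in newer V8 versions; older ones need a custom allocator.
  params.array_buffer_allocator =
      v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  return v8::Isolate::New(params);
}
```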
Please also allow the CLI to (optionally?) generate the BLOB. If I am bundling a dozen plugins, there is no reason why I shouldn't be able to get the speed enhancement by creating my own BLOBs with all my modules on Android.
Can the snapshot be generated on a PC, or does it have to be generated on the device? If it has to be on the device, then for a start maybe the developer could manually run a command, or call some kind of function to generate it and then send it back for deployment.
If this greatly improves things, I will still be happy even if this is the only way :)
Can this be extended to put the actual app code inside the blob?
I would suggest investigating whether an alternative to v8 with multiple execution tiers, such as JavaScriptCore, SpiderMonkey or ChakraCore, would obviate the need to jump through heap-snapshotting hoops. Over in iOS land we are very happy with our startup times, since bootstrapping the JavaScriptCore interpreter is way faster than bringing up the optimizing JIT tier. Since v8 lacks an interpreter, it takes longer before it can execute JavaScript code because it has to JIT-compile it, right?
@fealebenpae - v8 has multiple JIT tiers and an interpreter. I don't know for sure, but from what I've seen the issue is not so much bringing up v8 as the android/metadata side of things. This would just add an additional speed enhancement. Can @atanasovg or someone post where the startup time currently stands?
Using JSC "might" (and that is a big might!) be worthwhile, because it would mean both platforms would be using the same JS engine. However, you would have to re-tool several things: for example, the developer tools would have to be re-worked to run against JSC rather than v8. Safari isn't supported on Linux or Windows, so you would have to have something on both of those platforms to debug with.
@atanasovg Do you use the same approach as Atom's https://github.com/atom/node-mksnapshot?
Can you provide a code sample of using this method with an NS project?
Hello, guys. I'm going to update the issue with the results of more recent research we did on V8 heap snapshots. Feel free to ask questions if something is missing or not quite clear to you.
Here are the startup times of {N} Angular on a Nexus 5 device. Only second runs are included, in release configuration.
| Run | Startup time |
| --- | --: |
| Non-bundled | 3400ms |
| Bundled | 3000ms |
| Bundled and snapshotted | 2800ms |
| Bundled and snapshotted and evaluated | 1550ms |
| Bundled and snapshotted and evaluated and static bindings | 1450ms |
| Run | Startup time |
| --- | --: |
| Non-bundled | 4000ms |
| Bundled | 3600ms |
| Bundled and snapshotted | 3400ms |
| Bundled and snapshotted and evaluated | 2300ms |
| Bundled and snapshotted and evaluated and static bindings | 2100ms |
Snapshot size is about 12-16MB (4-6MB when the script is minified) per architecture.
- `require` calls and `JavaScriptImplementation` annotations from `snapshot:<embedded script>` (`<embedded>` in V8 5.1), and these classes are not found at runtime, leaving it to dynamic binding generation.
- `CPUFeatures::Probe(bool cross_compile)` with the cross-compile flag enabled. We should export this class.
- Returns `NULL` on error.
- (`V8_ANDROID_LOG_STDOUT`).
- The `android` and `java` namespaces when minifying.
- How the `WarmUpSnapshotDataBlob()` API is behaving (see the sketch after this list). It seems that it is introducing `enum FunctionCodeHandling { CLEAR_FUNCTION_CODE, KEEP_FUNCTION_CODE }` in the serializer options. This should probably now include compiled code and would make the snapshots even faster. There is some progress here.
- The `nativescript-angular-snapshot` prototype with the existing `nativescript-angular` plugin.
- The `tns-core-modules` package.
- The `require` override. The modules included in the bundle resolve to the bundled one, but other external modules resolve the module on the file system. This occurs with the `reflect-metadata` polyfill when running the ng-todo sample.
- `require('../../')` calls that should be normalized in the `require` override and resolved from the bundle. This occurs with the extension methods of rxjs when running the groceries sample.

Ping @atanasovg, @KristinaKoeva.
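To make the warm-up item above concrete, here is a hedged sketch of how a cold blob might be re-serialized with `WarmUpSnapshotDataBlob()`, assuming a V8 version that exposes that API. The warm-up source shown is purely illustrative and assumes the bundle exposes a global `require`:

```cpp
// Hedged sketch of the warm-up step: run a warm-up script against a cold
// snapshot and serialize again so compiled code for the exercised paths
// can be kept (FunctionCodeHandling in the serializer options).
#include <v8.h>

v8::StartupData CreateWarmSnapshot(const char* bundle_source) {
  // Ordinary (cold) snapshot from the bundled modules.
  v8::StartupData cold = v8::V8::CreateSnapshotDataBlob(bundle_source);

  // Illustrative warm-up script; assumes the bundle defines a global require.
  const char* warmup_source = "require('ui/frame');";
  v8::StartupData warm = v8::V8::WarmUpSnapshotDataBlob(cold, warmup_source);

  delete[] cold.data;  // the cold blob is owned by the caller
  return warm;
}
```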
P.S. Regarding encryption, the heap snapshot turns out to be a poor fit, because all of the JavaScript source appears to be included inside the snapshot data blob.
@jasssonpet Just an FYI: the actual NS runtime (not just Angular) has some issues with minification in the styling system. I ran into this with my NativeScript-Protect, which is one of the reasons I disable minification by default in my encryption system. I haven't traced it fully down yet, since that isn't the purpose of NS-Protect, but because I was curious I ran some initial tests, and it looks like the issue is that the typeName gets set to the wrong name by types.getClass because minification changes the class names... This might be the same issue you are seeing with Angular...
Good to know that SnapShots won't eliminate the need for my NS-Protect encryption. ;-)
The repo can be found here: https://github.com/NativeScript/android-snapshot
Is the loading performance improvement already in 2.0?
@x4080 This is not enabled by default for now, but we are looking for ways to do so in the next release.
I see, so the NS demo app is still not using it yet, I guess?
@x4080 Not yet.
Once we use it there, you can expect the app to start a whole lot faster.
Alright then
I can confirm that the mksnapshot tool from V8 when cross-compiled for ARM successfully generates ARM snapshots with the V8 ARM simulator from the host machine, without the need for any ARM devices or emulators :fireworks:
@jasssonpet Do we have instructions anywhere on how to use this now and/or any gotchas?
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.