It would be useful for some work I'm doing to be able to inspect CPU and memory usage for a vm script execution. A common use case would be using vm to micro-benchmark code and to spot leaks during testing.
An example could be:
const script = vm.runInNewContext('count += 1; name = "kitty"', sandbox);
script.memoryUsage(); // => similar to process.memoryUsage()
script.cpuUsage(); // => similar to process.cpuUsage()
My questions are:
1) Is it possible?
2) If yes, what would be the best way to do it? Could it be an extension to the current vm Node API (as suggested by the example)?
3) If yes, would a PR be considered?
4) If yes, would I be able to help with that? I am not familiar with the Node code base (especially the V8 part), but if we agree that the feature is desirable and feasible, I would be happy to help if needed, perhaps with some mentoring and/or guidance.
Thanks.
I don't think this is possible. You're probably better off using a separate process.
This is indeed impossible with the current design. Even if you use a new context, the code is still executed on the same thread, making its CPU and memory usage indistinguishable from the rest of the process. However, this does let you use process.cpuUsage() for synchronous code:
var cpuBefore = process.cpuUsage();
vm.runInNewContext('var now = +new Date(); while(+new Date() < now + 500);');
var cpuAfter = process.cpuUsage(cpuBefore); // about 500ms
As long as your CPU is mostly idle, synchronous operations will most probably use an entire logical core. If the code is asynchronous, CPU usage will usually drop to 0% as the processor will be idle most of the time, waiting for some event to happen.
Roughly the same applies to memory usage, but keep in mind that garbage collection makes heap measurements noisy.
Unfortunately, most of the use cases I need to handle at the moment involve asynchronous code. That is also why spinning up a process for each vm execution is expensive if I want to guarantee the expected behaviour in highly asynchronous scenarios, though it is possibly the only way.
If we are sure about this, I think we can close.
Thanks for your answers and your time.
I’m closing this, but one last thing: V8 actually has an API that lets you estimate how much memory a vm context retains, but that API is in the process of being removed right now.
So, that part would actually be doable, just not in an even remotely future-proof way.
That's very interesting and useful to know @addaleax. I would like to learn more about V8 and perhaps about this specific issue; would you be able to point me to some resources about this specific API?
Thanks
@matteofigus Grep the v8.h file for EstimatedSize() – but seriously, in V8 5.9 it will just always return 0 and soon enough it will be gone completely, so please don’t start using it. ;)