I'm pretty sure this was just an oversight, but when you run _Measure-Command_, it measures its own overhead plus the time it takes to run the command inside its script block. I don't think this was intentional.
An example:
Measure-Command {$null = $(1..10000)}
TotalMilliseconds : 7.0217
but if you look at how long Measure-Command takes to run by itself:
Measure-Command {}
TotalMilliseconds : 6.8096
Which would mean the number the user actually wants comes from the following command:
(Measure-Command {$null = $(1..10000)}) - (Measure-Command{})
TotalMilliseconds : 0.461
As can be seen from the pastebin link: Measure-Command powershell, it doesn't always have the same run time - which, in case you're wondering, is why the TotalMilliseconds above isn't exactly 7.0217 - 6.8096.
Very simple solution: calculate the timing differently, so that you know how long your command takes to run - not how long your command takes plus the Measure-Command cmdlet itself, because that isn't what the programmer using PowerShell wants.
Implement something like (Measure-Command {$null = $(1..10000)}) - (Measure-Command {}) in the PowerShell source, because you can't be sure how long Measure-Command itself will run for.
In my runs it took anywhere between 5.9224 and 8.2262 milliseconds, so something that accurately shows how long your code runs would be good.
To check this, run Measure-Command {} 8 or 10 times with nothing in the braces to be measured: it measures itself.
No, I've not checked whether this is how Measure-Command works in PowerShell 7; I've only checked in PowerShell 6.
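The baseline-subtraction idea above is language-agnostic: time an empty workload to estimate the harness's own overhead, then subtract it from the real measurement. A minimal sketch of the same technique - in Python rather than PowerShell, purely as an illustration (the helper names here are made up, not part of any PowerShell API):

```python
import time

def measure(fn, runs=10):
    """Return the average wall-clock seconds fn() takes over several runs."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / runs

def noop():
    pass

# Estimate the harness's own overhead by timing an empty callable,
# then subtract that baseline from the measurement of the real workload.
baseline = measure(noop)
workload = measure(lambda: sum(range(10_000)))
corrected = workload - baseline

print(f"baseline={baseline:.6f}s workload={workload:.6f}s corrected={corrected:.6f}s")
```

As with Measure-Command {}, the baseline itself fluctuates between runs, so the corrected number is an estimate, not an exact figure.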
See the cmdlet implementation:
https://github.com/PowerShell/PowerShell/blob/12425f24c0c896661a715445647e26e74050d01d/src/Microsoft.PowerShell.Commands.Utility/commands/utility/TimeExpressionCommand.cs#L61-L71
As you can see, the cmdlet doesn't measure itself.
@whitequill, in PowerShell there is inherent, non-trivial overhead in invoking even just {} (an empty script block), and that's what you see.
Generally, Measure-Command measures _wall-clock_ time of executions, which means that it is susceptible to how busy your system is overall.
It is _not_ a high-resolution profiling tool that measures just the given code's CPU time.
Pragmatically speaking, it makes sense to ensure that your system is not too busy with other tasks and to average _multiple_ runs of your code to get a more realistic sense of execution time, but note that caching and JIT compilation can then distort the results.
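The "average multiple runs" advice can be sketched concretely. As an illustration in Python (not PowerShell - the technique, not the tool, is the point), the standard timeit module repeats a statement many times per sample and takes several samples; using the minimum of the samples is a common way to reduce the influence of background system activity on wall-clock measurements:

```python
import timeit

# Each repetition runs the statement `number` times, averaging out per-call
# noise; taking the minimum of several repetitions reduces the influence of
# other processes competing for the CPU during any single repetition.
reps = timeit.repeat(stmt="sum(range(10_000))", repeat=5, number=100)
per_call = min(reps) / 100  # best-case seconds per call

print(f"best per-call time: {per_call:.9f}s")
```

Note the caveat from above still applies: caching and JIT warm-up mean the first repetition is often slower than the rest, which is one reason the minimum (rather than the mean including the first run) is often reported.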
In short (but perhaps people better versed in performance measurement and with a deeper knowledge of PowerShell can chime in; @SeeminglyScience?):
Don't look to a scripting language for consistent, predictable performance; in the context of PowerShell, Measure-Command's current behavior seems adequate.
However, there is one notable pitfall that predictably makes Measure-Command report _longer_ execution times:
- It executes the given script block in the _caller's_ scope rather than in a _child_ scope, which slows down execution - see #8911; you can work around this by using a nested script-block call with & { ... } [update: it takes many local-variable accesses for this to make a difference, though - see next comment]
More specifically a lot of PowerShell is dynamically resolved (commands, methods, types, basically everything) so the engine has a ton of different caches for these results. Often the first run of anything will be slower than subsequent runs. On top of that, in most cases every script block is interpreted for a certain number of runs before it's compiled into CIL byte code. That portion of code will be slower until it is finally compiled. I don't know off the top of my head if the new tiered JIT compilation applies to dynamic methods, but if it does that's another layer of eventual optimization.
Measure-Command can give you a very rough idea, but that's about it. It's not impossible for the engine to support more reliable performance profiling, but even then it won't be as simple as Measure-Command. Understanding a snippet's real performance at various points in its lifetime will likely be a rather involved task, even more so than something like BenchmarkDotNet.
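A common mitigation for the interpret-then-compile and cache-warming effects described above - in any runtime, not just PowerShell - is to discard a few warm-up iterations before measuring. A hedged sketch in Python (illustrative only; the `bench` helper is invented for this example):

```python
import time

def bench(fn, warmup=3, runs=10):
    # Discard warm-up iterations so first-run costs (cache filling, and JIT
    # compilation in runtimes that have one) don't skew the measured average.
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return sum(samples) / len(samples)

avg = bench(lambda: sorted(range(1000, 0, -1)))
print(f"avg={avg:.9f}s")
```

Dedicated benchmarking tools like BenchmarkDotNet automate exactly this kind of warm-up and statistical treatment, which is why they're preferred over a single stopwatch-style measurement for anything beyond a rough idea.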
However, there is one notable pitfall that predictably makes Measure-Command report _longer_ execution times:
- It executes the given script block in the _caller's_ scope rather than in a _child_ scope, which slows down execution - see #8911; you can work around this by using a nested script-block call with
& { ... }
Yeah. Though it's worth mentioning that it's unlikely to manifest in OP's example. Typically the difference is around local variable access, and even then it needs quite a few accesses (like 10k+) to even be measurable, iirc. It mostly affects folks trying to measure micro-benchmarks at an artificially inflated scale. It's really easy to accidentally leave some locals in one test that completely throw off the results.
This issue has been marked as answered and has not had any activity for 1 day. It has been closed for housekeeping purposes.