I don't think doing that by default is a good idea. Writing objects to the pipe while measuring would change how that script block performs.
You might think we could turn off the stopwatch while writing objects to downstream pipes, but downstream code that isn't measured could still have a significant impact on the code being measured.
Formatting and output are not always efficient. They may allocate a lot of memory, and they could trigger garbage collection in ways that make measuring the performance of the script block harder.
Given this, I think it's reasonable to require `Out-Host` if you really want to measure the impact of formatting.
All of that is true, and it's why we wouldn't want to change the default behavior, but it seems reasonable to have a switch parameter that would enable this.
I've wanted to do this myself a number of times.
@be5invis it would be great if you could provide more details, i.e. a motivating example.
My best guess is that request is not about a downstream pipeline, but about native commands.
```powershell
Measure-Command { ping ya.ru }
```

Currently, the output of `ping` is collected and not passed down.
The reason I think that's the request is because the original title said STDOUT and STDERR, so I immediately think about native executables.
UPD: note that stderr already behaves as requested, and the behavior for stdout can be achieved with:

```powershell
Measure-Command { ping ya.ru | Write-Host }
```
So something like a `-PassThru` parameter? In that case it's probably beneficial to enable some way of writing the measurement results to a variable, lest they appear mixed in with the output of the measured command (which would kind of defeat the purpose).
`Measure-Command` takes `ValueFromPipeline`. So the general approach is:

```powershell
1,2,3,4,5 | where { largescriptblock } | foreach {...}
1,2,3,4,5 | measure-command -MeasureVariable outArray -PipeExpression { where {largescriptblock} } | foreach {...}
```
A vote for including the output as a standard operation. I'd suggest we're over-focusing on the timing and losing the context of what's being attempted. I've got a series of commands I'd like to time because "something isn't looking right". If I don't have the output, I can't tell what's happening that might have caused a delay. While I appreciate that measuring might further add to the amount of time, it's what we need for this to be useful. Please don't make me do some fancy stream output. Really, I just want my command to be wrapped with a start time and an end time, along with the execution time. Perhaps put the timings in a green color.
```powershell
measure-command {docker build} -o true -h "docker build"
```

`-o true` meaning enable the output
`-h` is a header that gets included in the output
The other SteveL (SteveLas-MSFT)
Guys, you must be joking. Without showing the stdout and stderr while measuring, this command is largely useless. For a moment I rejoiced I finally found a way to measure a command execution time native to Windows, only to find that I cannot use it. I need to see the output of a command. I can hardly think of use cases where I would want to suppress stdout and stderr. This is just madness. Please, bring the output back to screen.
@dobsa: Not that I disagree, but `Measure-Command { <command> | Write-Host }` as suggested by @vors is IMHO good enough (but really needs to be documented).
A better idea is `| Out-Default`, as it will not affect Unicode symbols, unlike `| Write-Host`. And it's already in the docs, apparently! I've sent a PR to make it easier to find there: https://github.com/MicrosoftDocs/PowerShell-Docs/pull/4783
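To make the difference between the two workarounds concrete, here's a minimal sketch. Both show the command's output while `Measure-Command` still returns its `TimeSpan`; the difference is how the objects are rendered:

```powershell
# Write-Host stringifies each object as it arrives; Out-Default runs the
# normal formatting/output path, so default table views survive intact.
$t1 = Measure-Command { Get-Date | Write-Host }
$t2 = Measure-Command { Get-Date | Out-Default }
"{0:n2} ms vs {1:n2} ms" -f $t1.TotalMilliseconds, $t2.TotalMilliseconds
```

Either way, the timing result is unaffected by which sink you choose; only the rendering of the measured command's output differs.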
A better default would be to display output on screen without the need to specify `| Out-Default` or `| Write-Host`. In most cases you want to see the output, not suppress it.
Unfortunately, piping the output strips color :(
This is not just about interactive use cases; there's a use case for using `Measure-Command` in scripts too. There, `Out-Default` won't help you because it doesn't let you further use the pipeline output for processing. I guess you could save it in a variable, but then it's not streamed anymore, which can totally skew the numbers (e.g. imagine the pipeline is limited with `Select-Object -First` further down).
Can we just have a `-PassThru` switch?
I vote for a _new switch_ that would result in getting the exact same output (color included ;-) ) as the raw command being timed.
Changing the default behavior is probably a bad idea, for backwards-compatibility's sake.
That's a fairly sensible suggestion. I don't see why we can't have `Measure-Command -PassThru`.
Just to be clear, `Measure-Command -PassThru` would emit the objects that are output from the scriptblock AND write the measurements to the host?
Hmm. I think it'd be best if it either:

- emitted a single result object with `Data` and `Duration` properties, or
- passed the pipeline data through and reported only the `Duration` separately,

or something along those lines.

@vexx32 that all seems pretty complicated compared to just saving the output to a variable, no?
```powershell
Measure-Command { $data = Invoke-MyTest }
```

vs

```powershell
$result = Measure-Command -PassThru { Invoke-MyTest }
$data = $result.Data
$result.Duration
```
@felixfbecker
> I guess you could save it in a variable but then it's not streamed anymore, which can totally skew the numbers (e.g. imagine the pipeline is limited with `Select-Object -First` further down).
Why wouldn't the `Select-Object -First` be inside the `Measure-Command` scriptblock though? If you're trying to determine how long a code snippet takes to run, do you really want code downstream to alter it? Also, as @SteveL-MSFT brought up, if you did have `Select-Object -First` downstream, then you wouldn't get the execution length.
> Just to be clear, `Measure-Command -PassThru` would emit the objects that are output from the scriptblock AND write the measurements to the host?
This is an exhaustive answer as to why we should not implement this request. We can always get the result with:

```powershell
Measure-Command { $data = Invoke-MyTest }
```
"Getting the result" (pipeline data) is not the same as "getting the same results" (display).
> This is an exhaustive answer as to why we should not implement this request. We can always get the result with:
> `Measure-Command { $data = Invoke-MyTest }`
No, we can't always do that. This may be possible in most scripts, and maybe even every script you wrote personally, but it's definitely not possible in all scripts. I very recently ran a complex script that processed GB of data through the pipeline. It is simply not possible to save the pipeline output in a variable first, because it would exceed the machine's memory. That is what actually led me to this issue.
A minor downside is also that it's hard to statically analyze; e.g. VS Code will complain that `$data` is not used, because it can't know that the script block is running in the parent's scope.
The argument that downstream consumers can slow down the pipeline is a valid one of course, and it can mean that the measured time does not represent _only_ time spent in the script block, but also time spent by the script block waiting because of back pressure. But you don't know if that is actually going to have an impact. For example, in most cases, the "downstream consumer" will just be the host formatting + output, or maybe some light processing of the data. If what we are measuring involves IO, that time will be negligible, and as a user I would like to have the option to ignore it and tell PowerShell to just pipe it through anyway instead of requiring me to rewrite the whole script to not use pipelines (which again, may not even be possible due to memory constraints).
_Ideally_, of course, there would be a way to get the real time spent by the script block. I don't know if this is technically possible, but maybe it is: if the `Measure-Command` cmdlet had a steppable pipeline of the script block, maybe it could run the timer only while the script block is running, and pause it while downstream consumers are processing the emitted items. It could report both times, similar to how Unix's `time` reports multiple times. But even if this is not technically possible, it would still be useful to have `-PassThru` for the majority of the use cases.
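A rough sketch of that "pause the timer while downstream runs" idea, using a hypothetical `Measure-Streaming` function (not an existing cmdlet) built on a stopwatch:

```powershell
function Measure-Streaming {
    # Hypothetical sketch: charge only the script block's own work to the
    # timer, pausing the stopwatch while each item is handed downstream.
    param([scriptblock] $ScriptBlock)

    $sw = [System.Diagnostics.Stopwatch]::StartNew()
    & $ScriptBlock | ForEach-Object {
        $sw.Stop()    # don't charge downstream processing to the script block
        $_            # stream the object onward (the proposed -PassThru behavior)
        $sw.Start()   # resume when control returns to us
    }
    $sw.Stop()
    Write-Verbose "Script block time: $($sw.Elapsed)"
}
```

This can't fully separate time the script block spends blocked on back pressure from its own work, but it illustrates how "report both times", `time`-style, might be approximated while still streaming the output.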
Since I mentioned Unix's `time`: `time` forwards all output too and doesn't swallow it. This means this is also yet another thing that's surprising for people using PowerShell on platforms other than Windows.
Answering more questions from above:
> Why wouldn't the `Select-Object -First` be inside the `Measure-Command` scriptblock though? If you're trying to determine how long a code snippet takes to run, do you really want code downstream to alter it?
Yes, in some cases that's exactly what I want to do. Imagine there's a cmdlet that does REST API calls with `-FollowRelLink`, i.e. it will paginate potentially forever. Limiting it at some point is _essential_, and as a user I am aware that it _would_ take longer if I didn't limit it. I am fine with the limiting being taken into account for the execution time. I am asking to give the user the _option_ to make that work. But also keep in mind that, as said, in most cases the "downstream consumer" will simply be light processing + host output.
> Just to be clear, `Measure-Command -PassThru` would emit the objects that are output from the scriptblock AND write the measurements to the host?
I think that would be sensible, similar to how `-IncludeTotalCount` adds a TotalCount object to the output. It could be easily filtered out and redirected to e.g. the information stream in a `ForEach-Object`. But adding a `-MeasurementVariable` parameter also sounds like a good idea. There is definitely a use case for processing the measurement further as a `TimeSpan` object instead of writing it to the host, e.g. finding out what percentage of a total time a step took.
> Yes, in some cases that's exactly what I want to do. Imagine there's a cmdlet that does REST API calls with `-FollowRelLink`, i.e. it will paginate potentially forever. Limiting it at some point is _essential_, and as a user I am aware that it _would_ take longer if I didn't limit it. I am fine with the limiting being taken into account for the execution time. I am asking to give the user the _option_ to make that work. But also keep in mind that, as said, in most cases the "downstream consumer" will simply be light processing + host output.
But then that's part of the test, so why wouldn't that be included in the script block? Why is it necessary to do this:

```powershell
Measure-Command { Invoke-LongCommand } | Select-Object -First 1
```

Instead of:

```powershell
Measure-Command { Invoke-LongCommand | Select-Object -First 1 }
```
You can easily do that in an example code snippet on GitHub, but that doesn't mean it's easily possible in a complex script. We could be talking about dozens of layers of function calls across multiple files, where somewhere there's the `Select-Object`, and somewhere else you want to `Measure`. You can't easily restructure all the scripts and functions just to move that in there. The user should have the option to avoid that.
You wouldn't need to restructure anything, just wrap it in a script block:

```powershell
Measure-Command {
    & {
        # complicated code here
    } | Select-Object -First 1
}
```
`Measure-Command` shouldn't measure things outside of what you tell it to. If you want to change how it's measured, change the test, not the circumstances around it, IMO.
@felixfbecker the `Measure-Command` cmdlet is not designed for the scenarios you describe. The cmdlet is commonly used to measure a simple script block in an interactive session.
For more complex scenarios we could discuss injecting performance counters. They can provide more reliable information about script execution, and more importantly, for hosted applications too.
Just going to add my voice to the conversation: I would also love a `-PassThru` flag on this command. It would indeed emit both the script's output and its own measurement output.
> Since I mentioned Unix's `time`: `time` forwards all output too and doesn't swallow it. This means this is also yet another thing that's surprising for people using PowerShell on platforms other than Windows.
I was surprised it didn't already have this functionality; I guess I am still more familiar with `time` than with this command. Also, I agree with the previous remarks: the common use case requires devs to see the output of the command they're measuring, and it's far more rare that I don't care about the output (at least in my usage of this command).
If the comparison is to the `time` command, which writes its results to stderr, then perhaps `-PassThru` simply writes its measurements as verbose messages? If you need to capture the measurement object for use later, then perhaps we need a `-MeasurementVariable` parameter?
yes / yes ;-)
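Putting the two proposals together, usage might look like the following sketch (hypothetical syntax: neither `-PassThru` nor `-MeasurementVariable` exists on `Measure-Command` today):

```powershell
# Hypothetical: stream the script block's output while capturing the timing.
$data = Measure-Command -PassThru -MeasurementVariable elapsed { Invoke-MyTest }
# $data would hold the script block's output; $elapsed the TimeSpan.
```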