Error handling in PowerShell is extremely messy; it is by far the primary complaint I have heard about the language, and it has even been cited as a reason not to use PowerShell. The two major complaints I've heard about error handling are:
When running a binary command in PowerShell (e.g. `git`) and the binary errors, the command does not always output an error (neither terminating nor non-terminating). PowerShell clearly detects that the command failed and correctly populates `$?`, but it does not integrate with the normal error handling mechanisms. The result is this cumbersome syntax:
```powershell
git checkout -b master
if (-not $?) {
    throw "Command failed"
}
```
If a command sets `$?` to `$false`, then the command should write an error to the error stream.
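To illustrate the payoff, here is a minimal sketch of the calling pattern the proposal would enable - assuming (which is not the case today) that a failing native command surfaced a PowerShell error:

```powershell
# Sketch of the *proposed* behavior, not current PowerShell semantics:
# if a non-zero exit code produced an error record, native commands would
# participate in ordinary PowerShell error handling.
$ErrorActionPreference = 'Stop'
try {
    git checkout -b master   # under the proposal, failure would raise an error here
}
catch {
    Write-Warning "git checkout failed: $_"
    exit 1
}
```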
I had to do this hack last week, because DevOps agents seem to have inconsistent behavior when interacting with PowerShell.
$process = Start-Process "cmake" -ArgumentList "-G `"Visual Studio 15 2017`" -T `"LLVM`" -A $Architecture -S `"../..`"" -Wait -NoNewWindow -RedirectStandardError "cmake.log"
if ($process.ExitCode -ne 0) {
# Do it again to get past openSSL errors.
$process = Start-Process "cmake" -ArgumentList "-G `"Visual Studio 15 2017`" -T `"LLVM`" -A $Architecture -S `"../..`"" -Wait -NoNewWindow
Exit 0
}
Binary failures should throw exceptions just like cmdlets so that they can be handled uniformly.
> PowerShell clearly detects that the command fails and correctly populates `$?`
It makes a guess based on the return value, but the return value can only be a number from 0 to 255; it is a convention that 0 means "success" and 1-255 mean "failure", but it's not universal or mandatory.
Robocopy, for one example, reports many different codes for different kinds of "success"; you can run `robocopy dir1 dir2 /mir`, it copies two new files with no problems and returns 1, and that means `$?` is `$false`.
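For reference, a sketch of what callers end up writing today to honor robocopy's convention (its documentation treats exit codes below 8 as success):

```powershell
robocopy dir1 dir2 /mir
# Exit codes 0-7 indicate varying degrees of success (e.g. 1 = files were copied);
# 8 and above indicate failure, so neither $? nor a blanket non-zero check is reliable here.
if ($LASTEXITCODE -ge 8) {
    throw "robocopy failed with exit code $LASTEXITCODE"
}
```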
Going the other way, PowerShell takes a command writing to stderr to be a problem and throws a NativeCommandException - but programs like to write license or informational messages to stderr (psexec and curl do it), and then people get a NativeCommandException when nothing was wrong. This is also dependent on the PowerShell host; it's inconsistent.
I'm not saying it shouldn't happen, just that it will still be a bit of a guess, and sometimes wrong (and a breaking change for every script which runs a native command).
The convention is returning 0 for success, but this is not a standard. The No. 1 principle is to not guess. The variable `$?` is giving you whether the last command returned 0, not whether the last command has an error. Aside from `robocopy`, `fc` uses the return value to indicate whether the files are different, in which case automatically writing an error record is an error. The user might want to, and could, inspect `$LASTEXITCODE` for error handling.
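For instance, a sketch of a caller that legitimately branches on `fc.exe`'s exit code (commonly documented as 0 = identical, 1 = different, 2 = error) instead of wanting an automatic error record:

```powershell
fc.exe file1.txt file2.txt > $null
switch ($LASTEXITCODE) {
    0       { 'Files are identical' }
    1       { 'Files differ' }   # not an error from this caller's point of view
    default { throw "fc.exe failed with exit code $LASTEXITCODE" }
}
```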
Returning 0 is a POSIX standard. I think PowerShell should be aligned with the rules, not the exceptions to the rules; aligning to the rules means the command behaves generally according to intuition, whereas aligning to the exceptions means adding an `if` statement to every shell command, which I think is very cumbersome, especially considering one of PowerShell's differentiating features is error handling.
Common utilities that need to play better with PowerShell already have aliases set up (including `fc`). The PowerShell team could very feasibly provide a similar solution for `robocopy`, especially considering `robocopy` is not cross-platform.
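A sketch of what such a shim could look like - `Invoke-Robocopy` is a hypothetical function name, not an existing alias, and the exit-code threshold follows robocopy's documented convention:

```powershell
# Hypothetical wrapper: translate robocopy's exit-code convention into
# PowerShell's error model so callers get normal error-handling behavior.
function Invoke-Robocopy {
    param([Parameter(ValueFromRemainingArguments)][string[]] $ArgumentList)
    robocopy @ArgumentList
    if ($LASTEXITCODE -ge 8) {
        Write-Error "robocopy failed with exit code $LASTEXITCODE"
    }
}

# Usage:
Invoke-Robocopy dir1 dir2 /mir
```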
@HumanEquivalentUnit, regarding the inconsistent handling of stderr output, see:
@zachChilders, as an aside: there is no need to use `Start-Process` to invoke _console/terminal programs_ such as `cmake` in a _synchronous_ manner - that's what direct invocation is for. See this Stack Overflow answer for more information.
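For the cmake case above, a direct-invocation sketch might look like this (flags carried over from the earlier snippet; the retry-on-OpenSSL-errors workaround is omitted):

```powershell
# Direct, synchronous invocation: output stays attached to the console and
# the exit code is reflected in $LASTEXITCODE - no Start-Process needed.
cmake -G 'Visual Studio 15 2017' -T LLVM -A $Architecture -S ../..
if ($LASTEXITCODE -ne 0) {
    Write-Error "cmake failed with exit code $LASTEXITCODE"
}
```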
@chriskuech `fc` (as well as `robocopy`) is just an example, and it has no relation whatsoever to `Format-Custom`. It is also not possible to change the behavior of `fc.exe` unless you want to break batch scripts on Windows.
The Wikipedia article you cited is about `errno`, not exit status, which is a different concept. I found the relevant text here and here (page 352).
> The value of status may be 0, `EXIT_SUCCESS`, `EXIT_FAILURE`, [CX] or any other value, though only the least significant 8 bits (that is, `status & 0377`) shall be available from `wait()` and `waitpid()`; the full value shall be available from `waitid()` and in the `siginfo_t` passed to a signal handler for `SIGCHLD`.
and
> Finally, control is returned to the host environment. If the value of status is zero or `EXIT_SUCCESS`, an implementation-defined form of the status successful termination is returned. If the value of status is `EXIT_FAILURE`, an implementation-defined form of the status unsuccessful termination is returned. Otherwise the status returned is implementation-defined.
In particular, the POSIX standard doesn't say whether exit status 123 means success or failure. Also, the implementation-defined sense of failure of a native command might not be worth an `ErrorRecord` for the caller in PowerShell.
Aside from that not being a POSIX standard, the methodology of sticking to the POSIX standard is good as long as you're only playing with POSIX. However, not all platforms use POSIX. It is much more important to keep consistency among the platforms PowerShell would like to support. Moreover, automatically writing error records after native command invocation could create a whole lot of problems for existing scripts that didn't expect them and handled the exit code manually.
Even if we do guess based on the exit status and write an error record, designing such a feature is difficult and may lead to more trouble. For example, are you throwing a terminating error or writing a non-terminating error? What if you want to inspect and wrap the error in a more object-oriented way before it is finally written to the error stream? Do you have to set `$ErrorActionPreference = 'Stop'` and use `try`/`catch` for every native command invocation?
In general, I suggest wrapping native commands in a separate PowerShell cmdlet/function, which handles all the conversion between the native, untyped, binary, process-level isolated world and PowerShell.
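A minimal sketch of that wrapping idea, using a hypothetical `Invoke-Git` function (the name and the error policy are illustrative; whether the wrapper invokes the binary directly or via `Start-Process` is a separate question, debated just below):

```powershell
# Hypothetical wrapper: one place that converts a specific tool's exit-code
# convention and text output into PowerShell errors and values.
function Invoke-Git {
    param([Parameter(ValueFromRemainingArguments)][string[]] $ArgumentList)
    $output = git @ArgumentList 2>&1
    if ($LASTEXITCODE -ne 0) {
        throw "git $($ArgumentList -join ' ') failed ($LASTEXITCODE): $output"
    }
    $output
}

# Callers now get normal PowerShell error semantics:
$branch = Invoke-Git symbolic-ref '--short' HEAD
```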
@mklement0 It is a very bad idea to invoke native commands directly, given PowerShell's way of handling native pipes. Using `Start-Process` avoids object-oriented pipe pollution while allowing the native command to print output to the screen.
I don't think there is any dispute that, even if not official, an exit status of 0 conventionally and in most cases means success and non-0 means error. I understand that there are some corner cases, but ultimately the current behavior is counter-intuitive in the vast majority of cases. If you look at the "Background" section of the issue, I mention another counter-intuitive (and coincidentally improperly documented) error handling behavior (non-terminating errors) that contextualizes the justification for this change.
> are you throwing a terminating error or writing a non-terminating error?

You should always write a non-terminating error.
> What if you want to inspect and wrap the error in a more object-oriented way before it is finally written to the error stream?

Use normal PowerShell (`$Error`, or `$ErrorActionPreference = "Stop"` with `try`/`catch`, or, if implemented, `-ErrorVariable`). Whatever the programmer chooses - but the programmer should be able to assume that normal PowerShell will apply.
> Do you have to `$ErrorActionPreference='Stop'` and `try-catch` for every native command invocation?

If the programmer chooses so. This would mean that if the programmer wants to see _why_ the command failed, they have to add boilerplate. That means fewer people have to add boilerplate than in the current scenario, where the programmer has to add boilerplate just to see _if_ the command failed.
If people in this discussion agree to break backcompat, I definitely think this issue should be addressed.
> I don't think there is any dispute that, even if not official, an exit status of 0 conventionally and in most cases means success and non-0 means error. I understand that there are some corner cases, but ultimately this behavior is counter-intuitive in the vast majority of cases.
I think you need to do some research on how many current scripts would break (or show otherwise undesired error messages) if we automatically wrote an error whenever a native command returns a non-zero value.
I agree that 0 usually means success, but it is quite debatable whether non-0 means error. For example, GNU `diff` returns 1 when a difference is found and greater than 1 for errors.
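A sketch of the caller-side handling that GNU diff's convention calls for - handling that an automatic "non-zero writes an error" rule would interfere with:

```powershell
# Note: in Windows PowerShell, `diff` is an alias for Compare-Object; this assumes
# the GNU diff binary is what actually runs (e.g. PowerShell on Linux).
diff file1.txt file2.txt > $null
if ($LASTEXITCODE -gt 1) { throw "diff failed with exit code $LASTEXITCODE" }
$filesDiffer = ($LASTEXITCODE -eq 1)   # exit code 1 is a result, not an error
```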
W.r.t. #7774, I will make a separate comment in that thread.
Creating an error record and then catching it is bad for performance, and well-written code should always handle the error and convert it into a readable representation (instead of an exit status). That need should be prioritized.
In addition, it is always the case that people want to know why a command fails. If the command has written to stderr, one can already read it; otherwise, you want to check `$LASTEXITCODE`.
I made the comment on #7774. On second thought, it makes no sense to throw a terminating error here because that would complicate the commands. However, I still consider writing an error for every native command returning non-0 too presumptuous, and it complicates life for actual exit-status inspectors/handlers.
It turns out that this request is a duplicate of #3415, which led to the lively debated RFC draft https://github.com/PowerShell/PowerShell-RFC/pull/88.
@GeeLaw
> It is a very bad idea to invoke native commands directly, given PowerShell's way of handling native pipes. Using `Start-Process` avoids object-oriented pipe pollution while allowing the native command to print output to the screen.
This is a tangent, for sure, but I find your advice highly problematic, so it is worth addressing:
A core function of any shell - including PowerShell - is the synchronous execution of console programs (command-line utilities), with the invoked program's standard streams connected to the calling shell's.
With PowerShell now being multi-platform, this is ever more important, because on Unix platforms there is a wealth of capable - and fast - command-line utilities available.
While PowerShell cannot "speak objects" with native programs, it does have a shared language, and that is _text_.
The vast majority of command-line utilities "speak text", and in the vast majority of cases you want synchronous execution and either want to _capture_ or _suppress_ text output by such utilities, which calls for _direct invocation_, not use of `Start-Process`.
E.g., get the current branch of the Git repository in the current location:
```powershell
PS> $currentBranch = git symbolic-ref --short HEAD
```
Even in the rare cases where a utility outputs _raw byte streams_ - which PowerShell's streams cannot handle - you wouldn't use `Start-Process`; instead, you would synchronously invoke a shell that _can_ handle those byte streams; e.g.:
```powershell
# Generate a file with 100 random bytes.
# The redirection must be handled by `sh`, because PowerShell would interpret the raw
# bytes as text.
PS> sh -c 'dd if=/dev/urandom bs=1 count=100 > t.txt'
```
I've summarized the considerations when capturing output from external programs in this Stack Overflow answer. I encourage you to provide feedback there, if you disagree with the above (the answer was written a while ago, pre .NET Core, so raw byte output isn't covered yet).