I have a simple example: begin does some set-up, end does some tear-down, and in between, process is called more than once to work on pipeline input. All good, all well understood.
function test {
    param (
        [parameter(ValueFromPipeline=$true)]
        [int]$p
    )
    begin   { Write-Host -ForegroundColor Red "Open connection" }
    process {
        Write-Host -ForegroundColor Red "work for $p"
        1..$p
    }
    end     { Write-Host -ForegroundColor Red "Close Connection" }
}
1,2 | test
Gives:
Open connection
work for 1
1
work for 2
1
2
Close Connection
But if Select-Object stops the upstream pipeline, this happens:
1,2 | test | select -first 2
Open connection
work for 1
1
work for 2
1
The end block doesn't get run, because (AIUI) Select-Object throws a StopUpstreamCommandsException and everything stops dead. A try / catch block doesn't _catch_ it; a finally block _does_ run, but there is no way I can see to detect that an error has been thrown ($error doesn't contain anything), and an unconditional tear-down in the process block would mean only one item gets processed, so that's no good. (Yes, I can have process collect all the incoming items and run everything in the end block wrapped in a giant try-catch-finally, as sketched below, but that's super-ugly, IMHO.) There doesn't seem to be an event that can be hooked...
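For reference, here is the shape of that buffer-everything workaround (the function name is illustrative): process only collects input, and all the real work plus the tear-down happens in end inside a try / finally.
function Test-Workaround {
    param (
        [parameter(ValueFromPipeline=$true)]
        [int]$p
    )
    begin   { $items = [System.Collections.Generic.List[int]]::new() }
    process { $items.Add($p) }   # only buffer; defer the real work to end
    end {
        Write-Host -ForegroundColor Red "Open connection"
        try {
            foreach ($i in $items) {
                Write-Host -ForegroundColor Red "work for $i"
                1..$i
            }
        }
        finally {
            # Still runs when a downstream Select-Object stops the pipeline
            Write-Host -ForegroundColor Red "Close Connection"
        }
    }
}
1,2 | Test-Workaround | select -first 2
The obvious cost is that streaming is lost: nothing is emitted until all the input has been collected, which is a big part of why I call it ugly.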
Is there a solution out there which I just can't find, or is it just plain dangerous to assume that, when there are no errors, the end block will run?
Good point, but this is a duplicate of #7930 (I've just updated #3821 to link to it as well).
This issue is the precise reason I wrote all of this: #9900
Currently, there is no solution; the pipeline processor simply isn't equipped to handle the scenario, and by design there is no "guaranteed" execution of any pipeline block. process{} blocks can be skipped if a downstream command's begin{} block throws an exception, too (see the sketch below).
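To illustrate that last point, a rough sketch (function names are just for the example): if a downstream command's begin{} throws, the upstream command's process{} and end{} never run at all.
function Send-Values {
    param ([Parameter(ValueFromPipeline = $true)] $InputObject)
    begin   { Write-Host "Send-Values: begin" }
    process { Write-Host "Send-Values: process $InputObject"; $InputObject }
    end     { Write-Host "Send-Values: end" }
}
function Receive-Values {
    param ([Parameter(ValueFromPipeline = $true)] $InputObject)
    begin   { throw "Receive-Values: begin failed" }   # terminating error before any input arrives
    process { $InputObject }
}
1, 2 | Send-Values | Receive-Values
You only see "Send-Values: begin" followed by the terminating error; neither process{} nor end{} of Send-Values ever executes.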
The PS team made it fairly clear in multiple issues that end was never intended to be guaranteed to execute, and they weren't open to changing that.
The cleanup{} function block implemented in the above PR adds a block that is completely guaranteed to run regardless of errors or the pipeline otherwise being stopped, but I don't know where the PS team stands on it currently. They have expressed interest multiple times in the past, but the PR was ready to go well before the 7.1 release cycle really started and has received only a few reviews, so I don't know whether they're still looking to get it reviewed and accepted. The related RFC was approved, however, so... yeah, I don't know where that's at; it's in limbo as far as I know.
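For illustration only, a function using that proposed block would look something like this (the keyword and exact semantics are whatever the PR/RFC finally settle on, so treat this as a sketch):
function Test-Cleanup {
    param (
        [Parameter(ValueFromPipeline = $true)]
        [int]$p
    )
    begin   { Write-Host -ForegroundColor Red "Open connection" }
    process { Write-Host -ForegroundColor Red "work for $p"; 1..$p }
    end     { Write-Host -ForegroundColor Red "end (still not guaranteed)" }
    cleanup { Write-Host -ForegroundColor Red "Close Connection" }   # intended to run even when the pipeline is stopped
}
The idea is that the connection tear-down moves out of end and into cleanup, so 1,2 | Test-Cleanup | select -first 2 would still close the connection.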
The PR sounds great, @vexx32, and I encourage you to give it more exposure by also mentioning it in the original issue, #7930.
Oh, I thought I had. Apparently that link in the chain was overlooked, cheers!
@mklement0 @vexx32 Thanks both. I couldn't find search terms that would find it; what I've put in is so close to #7930 that it looks like a copy :-( It looks like most of the work has been done to put this right, so I'll close this one.