Currently, `Start-Job`:

* defaults to _different_, _fixed_ working directories on _different platforms_ (which is problematic in itself):
  * `$HOME` on Unix (macOS, Linux)
  * `$HOME\Documents` on Windows
* by contrast, using the newly introduced Unix-like `... &` syntax defaults to the _current_ location; this discrepancy is problematic too [_update: but as designed_] - see #4267
Either way, there is no simple way for the _caller_ to set the working directory _explicitly_, leading to such painful workarounds as the one in this SO answer.
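One such workaround (a hedged sketch, not the linked answer itself) is to pass the desired directory into the job explicitly and change location manually:

```powershell
# Workaround sketch: pass the caller's current directory into the job via
# -ArgumentList, then Set-Location inside the script block before real work.
$jb = Start-Job -ArgumentList $PWD.Path -ScriptBlock {
    param($dir)
    Set-Location -LiteralPath $dir
    "Hi from $PWD."
}
Receive-Job -Wait $jb
```

This works, but it forces every job script to carry location-plumbing boilerplate that a `-WorkingDirectory` parameter would make unnecessary.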
The proposed solution:

```powershell
# Wishful thinking
> $jb = Start-Job -WorkingDirectory $PSHOME { "Hi from $PWD." }; Receive-Job -AutoRemove -Wait $jb
Hi from C:\Program Files\PowerShell\6.0.0-beta.4
```
* PowerShell Core v6.0.0-beta.4 on macOS 10.12.5
* PowerShell Core v6.0.0-beta.4 on Ubuntu 16.04.2 LTS
* PowerShell Core v6.0.0-beta.4 on Microsoft Windows 10 Pro (64-bit; v10.0.15063)
* Windows PowerShell v5.1.15063.413 on Microsoft Windows 10 Pro (64-bit; v10.0.15063)
If this gets added, then the language feature `&` should use it instead of how it is currently implemented.
Would like to see Start-Job have -ThrottleLimit compatibility.
@Average-Bear: I suggest you create a new issue (and provide a rationale for your request there).
`WorkingDirectory` is a property specific to a process - and the fact that `Start-Job` spins up an out-of-process runspace seems to me an implementation detail, not something you would necessarily want to bring up to the level of abstraction that a `Job` provides.
Seems like the kind of thing you would want to include in the initialization script, so I'd recommend just fixing #4530 so you can do:

```powershell
$jb = Start-Job { "Hi from $PWD." } -InitializationScript {Set-Location $using:PWD}; Receive-Job -AutoRemove -Wait $jb
```

rather than the currently super awkward:

```powershell
$jb = Start-Job { "Hi from $PWD." } -InitializationScript ([scriptblock]::Create("Set-Location $PWD")); Receive-Job -AutoRemove -Wait $jb
```
I consider `-InitializationScript {Set-Location $using:PWD}` awkward too.

Having a `-WorkingDirectory` parameter is a matter of _convenience_ first and foremost, and it also provides symmetry with `Start-Process`.

You're running a script block / script _somewhere_, and it's helpful to have a simple way to control that somewhere. This is especially true with the current behavior, where you - invisibly - run in a location _other than the current one_ (unlike when you use the new `&` operator on Unix - a regrettable discrepancy - see #4267).
As an aside, re implementation detail: Understanding the underpinnings of jobs is important, because users need to be aware that a separate process and remoting are involved to understand that _deserialized_ objects are returned.
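To make the deserialization point concrete (an illustrative sketch; the exact type name reported depends on the object returned):

```powershell
# Objects received from a job cross a remoting boundary between processes,
# so they come back as deserialized "property bags" rather than live objects.
$jb = Start-Job { Get-Item $PSHOME }
Receive-Job -Wait $jb | ForEach-Object { $_.GetType().FullName }
# typically reports a 'Deserialized.*' type name,
# e.g. Deserialized.System.IO.DirectoryInfo, with methods stripped away
```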
@mklement0 @SteveL-MSFT

I want to take a stab at this one, and before deep-diving into the implementation I wanted to see what you think about the following solutions:

1. Get the working directory from the user.
2. Verify that the directory exists.
3. Either:
   * inject a `Set-Location $UserSpecifiedWorkingDirectory` at the beginning of the script block that will be executed, or
   * pass the `$WorkingDirectory` variable all the way up to the Job level and specify the working directory there when the process starts.

What do you think? Do any of these approaches seem reasonable?
Disclaimer: this is my first issue here, so I might be missing something entirely.
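For illustration, the script-block injection idea could look roughly like this (a hypothetical sketch; `$UserSpecifiedWorkingDirectory` and `$userScript` are stand-ins, not names from the codebase):

```powershell
# Hypothetical sketch of the injection approach: build a new script block that
# first changes location, then runs the user's original script block.
$UserSpecifiedWorkingDirectory = $PSHOME        # value supplied by the caller
$userScript = { "Hi from $PWD." }               # the caller's original script block
$wrapped = [scriptblock]::Create(
    "Set-Location -LiteralPath '$UserSpecifiedWorkingDirectory'`n" +
    $userScript.ToString()
)
$jb = Start-Job -ScriptBlock $wrapped
Receive-Job -Wait $jb
```

As noted in the reply below this comment, the downside of this style of approach is that the injected code becomes visible when debugging the job.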
@davinci26 I wouldn't inject anything into the script block. That could result in some surprises if/when you debug a job (i.e. you should only see your job script when you debug a job, not extra stuff that your job script didn't include). Instead I'd just set the location when the runspace associated with the job is created, before the script block is run inside of it.
I took a deep dive into the codebase and I observed the following:

1. Creating a separate pipeline with `Set-Location` and adding it to the operations list. Just add this:

```csharp
var command = new Command("Set-Location");
command.Parameters.Add("LiteralPath", this.WorkingDirectory);
Pipeline tempPipeline = remoteRunspace.CreatePipeline(command.ToString());
tempPipeline.Commands.Add(command);
IThrottleOperation locationOperation = new ExecutionCmdletHelperComputerName(remoteRunspace, tempPipeline);
Operations.Add(locationOperation);
```
If you follow this approach, then pwsh throws, because the second operation in `Operations` is trying to open a runspace that is already open (see here). Is this the intended behaviour? Can we modify the logic so that, when we try to open the remote runspace, the opening is skipped if the runspace is already open?
2. Adding the `Set-Location` command to the pipeline. This would require us to either:

* use the `CreatePipeline` function available in `PSRemotingCmdlets` and then insert the command at the beginning - personally I am not a huge fan of this approach, as it would be a bit slow; or
* modify the `CreatePipeline` function directly. This function is consumed by all `PSRemoteCmdlets`. Would the workingDirectory parameter make sense in all of them? Is this overkill?
Additional observations:

* `workingdirectory` does not work when the pwsh process runs in server mode (`-s` flag).
* The `InitializationScript` argument is passed to the process `startupInfo` as part of the arguments.

If we stick to the implementation of (1) & (2), I do not see how we could have a `workingDirectory` parameter for `Start-Job` that would also be able to set the working directory for the `InitializationScript`. Is this part of the requirement?

Enabling (1) would allow us to implement the working directory fairly easily, since we can add an additional command-line argument when the PowerShell server process instance is spawned.
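A sketch of that command-line idea, assuming the child process is started with pwsh's own `-WorkingDirectory` startup parameter (available in newer PowerShell versions; whether the job infrastructure actually uses it is an implementation question):

```powershell
# Sketch: start a child pwsh in an explicit working directory via a startup
# argument, instead of injecting a Set-Location into the script itself.
pwsh -NoProfile -WorkingDirectory $PSHOME -Command '$PWD.Path'
```

Passing the directory at process startup keeps the user's script block untouched, which also addresses the debugging concern raised earlier in the thread.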
:tada: This issue was addressed in #10324, which has now been successfully released as `v7.0.0-preview.4`. :tada: