Azure-pipelines-tasks: Variables set via logging commands are not persistent between agents

Created on 7 Jul 2017 · 108 comments · Source: microsoft/azure-pipelines-tasks

I am trying to create a release where some initial processing needs to occur on a specific agent to produce a certain value. Once this value is created, it needs to be passed to all members of the specified deployment group in order for the deployment to proceed.

I'm using VSTS logging commands to set this value in a variable. Unfortunately, it appears as if the value is only set on the initial agent. The members of the deployment group do not see the variable as an environment variable, nor as a parameter. Is this the expected behavior? It's not specified here.

Release enhancement

Most helpful comment

@vijayma will we be able to use those values in other stages conditions?

### From the future release notes:
Output variables may now be used across stages in a YAML-based pipeline. This helps you pass useful information, such as a go/no-go decision or the ID of a generated output, from one stage to the next. The result (status) of a previous stage and its jobs is also available.

Output variables are still produced by steps inside of jobs. Instead of referring to dependencies.jobName.outputs['stepName.variableName'], stages refer to stageDependencies.stageName.jobName.outputs['stepName.variableName']. Note: by default, each stage in a pipeline depends on the one just before it in the YAML file. Therefore, each stage can use output variables from the prior stage. You can alter the dependency graph, which will also alter which output variables are available. For instance, if stage 3 needs a variable from stage 1, it will need to declare an explicit dependency on stage 1.

All 108 comments

Correct. Output variables as a feature is about to go into preview

Hello,

Are there any updates regarding the availability of this feature?

Regards,
Jacob Henner

Discovered this similar limitation; I was going to post an issue under the title "Persisting Variables to different Phases", but I'm checking in here instead.

My scenario: I use the Azure CLI task to query some data, and this runs in a Hosted VS2017 agent phase. Then I need Ansible, which does not run on Windows, so a new phase is created using a Hosted Linux agent. The Azure CLI task is not available on the Hosted Linux agent, and Ansible cannot run on Windows agents.

@bryanmacfarlane Any update on release of this preview feature?

@JacobHenner Have you considered using add build tag, artifact upload file, or a similar hack to achieve variable storage between phases? Ref: https://github.com/Microsoft/vsts-tasks/blob/master/docs/authoring/commands.md. I am tempted to attempt it.
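A sketch of that hack using the logging commands from that doc — the value, tag, and artifact names here are illustrative placeholders, not anything prescribed by the docs:

```shell
# Computed in the first phase; placeholder value for illustration.
MY_VALUE="1.2.3"

# Attach the value to the build as a tag (retrievable later via the REST API):
echo "##vso[build.addbuildtag]shared-value-$MY_VALUE"

# Or write it to a file and publish it as a build artifact that
# downstream phases can download:
echo "$MY_VALUE" > shared-value.txt
echo "##vso[artifact.upload containerfolder=shared;artifactname=shared-vars]$PWD/shared-value.txt"
```

The agent interprets the `##vso[...]` lines when it sees them on stdout; outside a pipeline they are just echoed text, so this is easy to dry-run locally.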

In my scenario, I have only Windows-based agents. Would really love to see this! Today, to mitigate this, we write this info to a shared queue which is read by the next phase.

Possible Workaround: Use Azure Key Vault

Shove values into Azure Key Vault using the AZ CLI, then download the values from Azure Key Vault in the phases that need the data.

Note: when pulling the values down, the Azure Key Vault-specific task will redact the "secret" values across the phase. The one irksome issue I had with this: you cannot explicitly assign the secret value to an environment variable. Since I'm not dealing with actual secrets, and if you want to circumvent that, you can instead use an AZ CLI task to pull down the value and store it in an environment variable.

Hmm, yeah any kind of storage - KV if the environment variables are secrets, or you can just use Storage/ServiceBus or even CosmosDb with minimal schema to get this to work! You can also use fileshare and save this data in files if you can provide restricted access.
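The Key Vault round trip described above could look roughly like the following pipeline steps. This is a sketch: the service connection name, vault name, and secret name are all placeholders, and the second step assumes you are fine with the value not being masked.

```yaml
steps:
# Phase 1: store the computed value in Key Vault
- task: AzureCLI@1
  inputs:
    azureSubscription: my-service-connection   # placeholder
    scriptLocation: inlineScript
    inlineScript: >
      az keyvault secret set
      --vault-name my-shared-vault
      --name storageAccountName
      --value "$COMPUTED_VALUE"

# Phase 2 (possibly a different agent): read it back and re-set it locally
- task: AzureCLI@1
  inputs:
    azureSubscription: my-service-connection
    scriptLocation: inlineScript
    inlineScript: >
      value=$(az keyvault secret show
      --vault-name my-shared-vault
      --name storageAccountName
      --query value -o tsv) &&
      echo "##vso[task.setvariable variable=storageAccountName]$value"
```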

Environment

  • Server - VSTS

    • This occurs in any account, team project, or release definition
  • Agent - Hosted

    • Currently using on Hosted agent but I suspect it will happen on the private agent as well

Issue Description

I have an infrastructure-as-code scenario where an ARM template deploys a resource group in which one of the objects is a storage account. The storage account name is an output variable on the ARM template, and I have a follow-up PowerShell task that inspects the deployment and promotes certain ARM output variables to VSTS release variables to be available for downstream processing. The source contents of the storage account container are housed in another source code (git) repo. I have another phase configured to copy those contents from the alternate artifact source. My $(storageAccountName) variable isn't making it across to the second phase.

I don't yet have a need to add Azure Key Vault to my architecture, and adding it just for an automated deployment doesn't seem appropriate at this time.

Error logs

No error to report, other than that the variable recall fails.

We have a multiple deployment group scenario where each deployment group needs some core variables set up in the first stage. The work around is to re-initialize these variables for each deployment group.

What is the status of output variables feature?
https://github.com/Microsoft/vsts-agent/pull/905
This pull request has gone into the 2.120.2 agent version, which is what we use, yet it's not documented and doesn't seem to work.

Does this work with agentless tasks?

@rmarinho, We don't support output variables in agentless tasks today, but we do have plans to add support for it there. It would be great if you could share your scenario for output variables in agentless tasks.

@bansalaseem the scenario is simple:

We have a build phase that pushes an app to run UI tests that take, let's say, 3 hours. When I push, I get an ID for that test run, and I set it as an output variable.

Then in my agentless phase I can pass that test run ID to my Azure Function or to another REST endpoint to check whether my UI tests finished.

This is a must-have feature.

My headers could be something like this; notice $(uitest.OutputAppCenterTestRunId):

{
"Content-Type":"application/json", 
"PlanUrl": "$(system.CollectionUri)", 
"ProjectId": "$(system.TeamProjectId)", 
"HubName": "$(system.HostType)", 
"PlanId": "$(system.PlanId)", 
"JobId": "$(system.JobId)", 
"TimelineId": "$(system.TimelineId)", 
"TaskInstanceId": "$(system.TaskInstanceId)", 
"AuthToken": "$(system.AccessToken)",
"X-API-Token" : "$(AppCenterApiToken)",
"AppCenterTestRunId"  : "$(uitest.OutputAppCenterTestRunId)"
}

@bansalaseem any feedback? I'm kind of blocked here. If agentless tasks can't use a modified variable, how can they be used for anything other than messing with the VSTS build API?

@rmarinho you can use output variables across phases in a yaml definition today. I believe this is coming to designer definitions soon (today variables at the phase level not available in the designer).

@ericsciple it doesn't work for agentless phases the variable seem to not be available there

that surprises me. do you have a small repro I can follow?

@rmarinho this works for me:

phases:
- phase: myAgentPhase
  queue: Hosted Linux Preview
  steps:
  - script: echo "##vso[task.setvariable variable=urlPath;isoutput=true]post"
    name: myScript

- phase: myServerPhase
  dependsOn: myAgentPhase
  server: true
  variables:
    urlPath: $[ dependencies.myAgentPhase.outputs['myScript.urlPath'] ]
  steps:
  - task: invokeRestApi@1
    inputs:
      serviceConnection: httpbin # This is the name of the service endpoint to use
      headers: |
        {
          "Content-Type": "application/json"
        }
      urlSuffix: $(urlPath)

@ericsciple is it possible to make it work in release definitions? What syntax shall I use in Phase 2 to get a variable from Phase 1?

I too am blocked by this. Above I saw someone ask for scenarios so I am going to give mine.

I am using RM to release my software and also produce the release notes. These release notes get emailed to hundreds of clients.

I want to have a phase (or environment) which creates the release notes email in our email software (Campaign Monitor). It only creates a draft campaign. When it is created, it returns a campaign id.

I then need a human to go and check that the release notes are all ok. Once it is ok'd, I would expect to come back to RM and Approve the next phase or environment. This would then send the campaign, however it needs the campaign id from the previous phase or environment in order to know which campaign to send.

This is known feature work (an enhancement) that's coming online incrementally. The engine pieces are in place. It works in YAML. Next, the designer will allow you to "map" outputs from one phase to inputs of another. YAML is also in the process of coming to release management.

good to hear. Also blocked by this. Have a workaround but not very elegant.

@timdeboer what's your workaround ?

@timdeboer , I would like to know the workaround too please.

@ericsciple sorry, I was talking about release definitions, and that's where it seems it doesn't work.

Sorry to give everyone false hope, but our inelegant workaround is to re-run the task that creates the variables in every phase :/

We use Storage account, tables to be exact for key/value storage of variables. We also used KeyVault, works in similar fashion. Essentially ship a storage account with the variables as part of deployment.

Achieving this took 8 or so steps in the pipeline and roughly 220 lines of bash helper code (variables into arrays, custom IFS, spaghetti code garbage), and it's Linux-only until I write a PowerShell version to pull variables from storage. It works 100%, but I wouldn't particularly recommend it; I'd instead focus efforts on the YAML version of pipelines, see comments below.

If you aren't aware, variable shipping between phases is enabled via YAML today, but the WebUI/designer hasn't shipped the enabling knobs completely yet. Would be neat to hear a ballpark ETA for it being enabled in the designer UI.

In the meantime I am getting recommendations to try YAML (in build pipelines there is a "View YAML" button under the overall process step; I don't see it in release pipelines yet).

We are working on the designs to map the output variables in the designer, and there is some work needed in the backend to support this in designer-based workflows. Based on current visibility, it would be another 3-4 months away. Please stay tuned.

What about YAML definitions in Release Management? Can you please add the
microsoft alias josleep with any updates to either of these fronts? I'd be
happy to be an early tester.


Yes we are actively working on YAML for CD, you can follow the pull requests on GitHub

We are working on the designs to map the output variables in the designer, and there is some work needed in the backend to support this in designer-based workflows. Based on current visibility, it would be another 3-4 months away. Please stay tuned.

@RoopeshNair any update on WebUI/designer gaining the output variables ability that YAML already has?

@nictrix - sorry, the work has been delayed due to other priority items. @shashban - for the new timelines.

@shashban per @RoopeshNair do you have a new timeline in regards to the WebUI/designer gaining the output variables ability that YAML currently has as a feature.

As YAML is evolving, we are actively working on converging the capabilities in the designer and in YAML. The ability to flow variables across phases is one such capability being developed. Please stay tuned. Based on current estimates, 10-12 weeks is what we are talking about.

@shashban has this been implemented? I'm using the UI and variables still don't flow in-between phases for me

Also looking for an update on this issue. Not being able to pass anything between stages is a major pain point.

I am afraid we had to deprioritize this in favor of development for CD in YAML pipelines. Our current recommendation is to use special tasks that update variable values in Key Vault / release variables in order to pass them across phases and stages.

Why is this item closed? As far as I can tell, you still cannot do this in the Designer (non-YAML) without the work-arounds listed throughout this thread.

@shashban @roopeshnair why close such an important topic?? This can be marked for next versions!

Adding my voice to this. We have release (CD) pipeline with 3 job phases. So not a Build pipeline.
1 - Agent phase: deploy infrastructure and app to Azure (ARM template), then parse and publish output variables from the deployment to a set of custom pipeline variables - things like the URL of the app, which in our use case is non-deterministic.
2 - Agentless phase that waits for manual intervention and sends an email containing those custom variables
3 - Agent phase that tears down the deployment

The updates to the variables in Phase 1 are lost when moving to Phase 2 and Phase 3. The workaround of publishing the variables elsewhere is not possible, given that I want to access those updated custom variables in the agentless phase so I can include them in the notification email.

Is there any update?
I also need to pass the value of a variable between two agent jobs in a release pipeline, but the variable values are not persistent between agents.

It is not currently supported in releases (designer-based). We have been working on adding CD features to multi-stage YAML pipelines. YAML pipelines support variables across jobs.

Two years and not a single solution for this?
Has anyone found a solid workaround other than using an external store (text file / DB)?

We have added this support in multi-stage YAML pipelines: Set a multi-job output variable.
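For reference, the multi-job output variable pattern referred to above looks roughly like this (job, step, and variable names here are illustrative):

```yaml
jobs:
- job: A
  steps:
  # isoutput=true makes the variable visible to dependent jobs
  - bash: echo "##vso[task.setvariable variable=myVar;isoutput=true]someValue"
    name: setVarStep

- job: B
  dependsOn: A
  variables:
    # map the output of job A's named step into this job
    varFromA: $[ dependencies.A.outputs['setVarStep.myVar'] ]
  steps:
  - bash: echo "$(varFromA)"
```

Note the step must have a `name` (not just a `displayName`) for the `dependencies.*.outputs` lookup to resolve.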

We don't have any timelines for supporting this in designer based releases at this point in time.

My usecase:

I have Pipeline Variables that need to be set in Stage1. These will be references in a pre-deployment Gate for Stage2. The variables set Start and End times for Service Now Integration in our release pipeline pre-deployment gate.

We support output variables across jobs in YAML pipelines. If you need to pass them across stages, you will need to write them to a persistent store and retrieve it across the stages.

We have added this support in multi-stage YAML pipelines: Set a multi-job output variable.

We don't have any timelines for supporting this in designer based releases at this point in time.

This works between Jobs but not between Stages.

I too am waiting for UI editor support for cross-stage / cross-job vars.

In the meantime I've used this alternative workaround that modifies the Release definition via the Azure DevOps API, hope this is of some use for anyone.

Read the article and maybe use the following powershell function that should handle updating a cross-stage var (defined at the release pipeline level):

function Invoke-SetAzureCDVariable {
  [CmdLetBinding()]
  param (
    [Parameter(Mandatory = $true)]
    [Alias("name")]
    [string]  $variableName,
    [Parameter(Mandatory = $true)]
    [Alias("value")]
    [string]  $variableValue,
    [Parameter(Mandatory = $false)]
    [bool]  $crossStage = $false
  )
  if (-not $crossStage) {
    Write-Host ("setting Azure CD variable '{0}'='{1}'" -f $variableName, $variableValue) -Verbose:$VerbosePreference
    Write-Warning "the variable and its value are visible only to downstream tasks!"
    Write-Host ("##vso[task.setvariable variable={0};]{1}" -f $variableName, $variableValue)
    return
  }

  $releaseUrl = ('{0}{1}/_apis/release/releases/{2}?api-version=5.0' -f `
    $($env:SYSTEM_TEAMFOUNDATIONSERVERURI), `
    $($env:SYSTEM_TEAMPROJECTID), `
    $($env:RELEASE_RELEASEID) `
  )

  $authHeader = "Bearer $env:SYSTEM_ACCESSTOKEN";
  # get Release Definition
  Write-Host "getting release definition from URL: $releaseUrl" -Verbose:$VerbosePreference
  $releaseDef = Invoke-RestMethod -Uri $releaseUrl -Headers @{
    Authorization = $authHeader
  }

  #output current Release Pipeline vars
  # Write-Output ('Release Pipeline variables output: {0}' -f $($releaseDef.variables | ConvertTo-Json -Depth 10))

  # update var
  Write-Host ("setting Azure CD cross-stage variable '{0}'='{1}'" -f $variableName, $variableValue) -Verbose:$VerbosePreference
  $releaseDef.variables.($variableName).value = $variableValue

  # update release definition
  Write-Host "updating release definition..." -Verbose:$VerbosePreference
  $json = @($releaseDef) | ConvertTo-Json -Depth 99
  Invoke-RestMethod -Uri $releaseUrl -Method Put -ContentType "application/json" `
    -Body $json `
    -Headers @{Authorization = $authHeader }

  Write-Host "variable set in release definition" -Verbose:$VerbosePreference
}

It seems plausible that we could use JSON files and pipeline artifacts to pass variables to other stages as a workaround. Our own scenario is slightly different: the build creates a JSON file that passes some info to an old-school release.

  - task: file-creator@5
    inputs:
      fileoverwrite: true
      skipempty: true
      filepath: '$(System.DefaultWorkingDirectory)/manifest.json'
      filecontent: > # Unclear if this ">" is being respected. Newlines are being retained.
        { 
          "var1": "${{ parameters.param1}}", 
          "var2": "${{ parameters.param2}}", 
          "var3": "$(localVar1)" 
        }
      endWithNewLine: true

  - task: PublishPipelineArtifact@1
    displayName: 'Publish Pipeline Artifact: manifest.json'
    inputs:
      path: '$(System.DefaultWorkingDirectory)/manifest.json'
      artifact: meta-pipeline-artifact

Then in the next stage do something like this

- task: DownloadPipelineArtifact@2
  displayName: 'Download Pipeline Artifact'
  inputs:
    artifactName: 'meta-pipeline-artifact'

# Insert powershell/script to read the json file and set local vars

This could all be templated to keep things relatively clean
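The "read the JSON file and set local vars" step mentioned in the comment above could be sketched like this. It assumes the default DownloadPipelineArtifact@2 destination of `$(Pipeline.Workspace)/<artifactName>` and the `manifest.json` name used in the example; adjust paths to your setup.

```yaml
- task: PowerShell@2
  displayName: 'Load variables from manifest.json'
  inputs:
    targetType: inline
    script: |
      # Read the manifest published by the earlier stage and re-emit
      # each top-level property as a pipeline variable for this job.
      $manifest = Get-Content "$(Pipeline.Workspace)/meta-pipeline-artifact/manifest.json" -Raw |
        ConvertFrom-Json
      foreach ($prop in $manifest.PSObject.Properties) {
        Write-Host "##vso[task.setvariable variable=$($prop.Name)]$($prop.Value)"
      }
```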

We are using this task for creating files: https://marketplace.visualstudio.com/items?itemName=eliostruyf.build-task

Disclaimer: we are not using multistage pipelines, just multiple build jobs and old school release jobs. But I don't see why this wouldn't work

I opened https://github.com/MicrosoftDocs/vsts-docs/issues/6100 about this as well, but got a confusing response about using multi-job output variables... do these work across stages or not? @Exodus indicated they still do not work across stages: https://github.com/microsoft/azure-pipelines-tasks/issues/4743#issuecomment-539585951

@ArchSerpo nice workaround, but unfortunately it won't work when agentless jobs are involved (they can't publish or download artifacts).

This is a real nightmare when it comes to working with YAML pipelines and deployments.

Let's say you have a repo where the content changes infrastructure (e.g., Terraform modules). Some changes to the repo require deployment (and approvals, like manual intervention to click a button); some changes (like an update to the README) do not require deployment and can bypass those steps.

Based on some criteria, you can figure out "this is a deployable change" vs. "this is not a deployable change." Maybe it's looking at the changes in the repo via git logs.

  • Approvals and checks on an environment work at the stage level. You can't even start the stage if you don't get the approval, which means even if there's a job inside the stage that runs _before_ the deployment step... you still have to get approval. Thus, you can't do your check in the same stage as the deployment.
  • The Manual Intervention task isn't available in YAML pipelines, so you can't use that instead of approvals.
  • As noted all over in this issue, you can't share variables across stages, so you can't create a condition on the stage with the deployment.

I'm now trying to figure out weird workarounds where I split the pipeline into two YAML files and somehow dynamically kick off the deployment with, say, the Azure DevOps REST API or something. It's pretty painful.

@tillig if you're resorting to the Azure DevOps API, might as well just use it to set a release-level variable, see @namtab00 's comment: https://github.com/microsoft/azure-pipelines-tasks/issues/4743#issuecomment-545103841 (this is what I'm doing)

might as well just use it to set a release-level variable

I'm using YAML pipelines, not releases, so there's no release-level to be had. However, I will look to see if there's a similar build-level variable that I could set. I'm not really sure as yet how build variables apply to deployment jobs in a YAML pipeline; you definitely don't get the source repo info the way you do if it's a build related job even though it's the same YAML file and the same pipeline.

On YAML I was able to pass variables between jobs, but not between stages :(

Here's an example of getting a variable from another job that runs on another machine.

https://github.com/xamarin/Xamarin.Forms/blob/master/azure-pipelines.yml#L136

But why can't I do this for a different stage? I wanted to move my YAML to use stages, but I can't because of this right now.

It appears you can't update parameters on a build definition though there is a REST API for it. At least, I can't get it to work.

$headers = @{ Authorization = ("Bearer {0}" -f $env:SYSTEM_ACCESSTOKEN) }
$uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)?api-version=5.1"

# This does work - I can get the data on the current build.
$build = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

# Parameters are a JSON structure inside a string. Setting a string property
# automatically handles \" to " unescaping.
# Also, possible issue where PATCH requires ID in the body.
# https://developercommunity.visualstudio.com/content/problem/568004/azure-devops-agents-update-api-patch-requires-the.html
$parameterStructure = @{ id = $(Build.BuildId); parameters = $build.parameters }
$parameters = $parameterStructure.parameters | ConvertFrom-Json
$parameters | Add-Member -Name "testname" -Value "testvalue" -MemberType NoteProperty

# Update the serialized string version of the parameters.
$parameterStructure.parameters = $parameters | ConvertTo-Json -Compress -Depth 99
$updateBody = $parameterStructure | ConvertTo-Json

# This _looks_ right...
# {
#  "id": 1234,
#  "parameters": "{\"system.pullRequest.pullRequestId\":\"10\",\"system.pullRequest.sourceBranch\":\"refs/heads/feature/build\",\"system.pullRequest.targetBranch\":\"refs/heads/master\",...\"testname\":\"testvalue\"}"
# }
Write-Host $updateBody

# When invoking the PATCH on the build the response does NOT have the updated
# set of parameters. It's still just the original set.
# https://docs.microsoft.com/en-us/rest/api/azure/devops/build/builds/update%20build?view=azure-devops-rest-5.1
Invoke-RestMethod -Uri $uri -Method Patch -Headers $headers -Body $updateBody -ContentType "application/json"

It's a PATCH not a PUT, so... maybe I'm not formatting the body right?

You can get around this by writing the variables to an artifact. It's a bit hacky, but way less so than any HTTP API, I'd say. You then add a Prepare job to the remaining stages that just downloads the artifact and sets the variables as output variables that the rest of the jobs in that stage can use.

Artifacts won't work in a YAML deployment with approvals where you don't want to deploy or start the approval process based on the value of the variable. Approval happens at a stage level, so the approval would kick off before your prepare job can run and get the build artifact that would otherwise have stopped the approval to begin with.

@tillig I don't think it's a body formatting issue, because when I tried to change something else, specifically keepForever (build retention) it did work (didn't need the Id property BTW).

Strange thing is, the returned Build response always said keepForever was True, regardless of the actual state. So the fact that the parameter change wasn't reflected in the returned JSON would not have necessarily meant it wasn't changed (looks like it can't be trusted). However, printing the actual parameters, I can confirm that indeed parameter changes don't effectively take place.

The good news is, I was able to use Variable Groups to achieve Build-level dynamic job execution conditions:

  1. Create a variable group containing a Skip variable and link it to your build's variables
  2. Add a custom condition on your job such as eq(variables['Skip'], 'False')
  3. Update the Variable Group's Skip variable using the REST API according to whether you want to skip or not, before the job's run condition is evaluated: https://stackoverflow.com/a/56558502/67824

Of course this might be problematic for concurrent builds, creating a possible race condition where one build's modification of the variable group is observed by another. In that case, you can make the variable group variable's name itself dynamic and specifically unique to the build. For example, set your condition to something like eq(variables[variables['Build.BuildId']], 'False') and use the REST API to create a variable with that name in your variable group (in the example above, $(Build.BuildId)). Note that if you go that route and have many builds, you might want to clean up the variable group from time to time (probably programmatically), as I reckon you'd run into a limit at some point (not sure how many variables a single group can hold).
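The variable-group update in step 3 can also be done with the Azure CLI's `azure-devops` extension rather than raw REST calls. A sketch as a pipeline step — the service connection name and group ID (42) are placeholders, and it assumes the `Skip` variable already exists in the group:

```yaml
- task: AzureCLI@2
  displayName: 'Flip the Skip variable before the guarded job runs'
  inputs:
    azureSubscription: my-service-connection   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: >
      az pipelines variable-group variable update
      --org "$(System.CollectionUri)"
      --project "$(System.TeamProject)"
      --group-id 42
      --name Skip
      --value True
  env:
    # az devops commands authenticate via a PAT in this env var
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
```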

Hello,

I have dynamic variables in a variable group - for example, a variable named "MywindowsIPAddess" with the value "198.0.0.0". I would like to use this dynamic variable's value in a custom condition.

I have one release definition with one agent phase with the multi-configuration option and a single task. This single task should be executed on multiple servers based on these dynamic variables provided in the custom condition.

Is there any way to access variables (without hardcoding) from custom conditions?

I am looking something like this...
and(succeeded(), eq(variables[$(machineName)IPAddress], '190.2.3.4'))

I am looking something like this...
and(succeeded(), eq(variables[$(machineName)IPAddress], '190.2.3.4'))

Shouldn't be a problem, check out the Azure Pipelines Expressions docs: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops

In your case it can be something like:
eq(variables[format('{0}IPAddress', variables['Agent.MachineName'])], '190.2.3.4')

It appears you can't update parameters on a build definition though there is a REST API for it. At least, I can't get it to work.

$headers = @{ Authorization = ("Bearer {0}" -f $env:SYSTEM_ACCESSTOKEN) }
$uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)?api-version=5.1"

# When invoking the PATCH on the build the response does NOT have the updated
# set of parameters. It's still just the original set.
# https://docs.microsoft.com/en-us/rest/api/azure/devops/build/builds/update%20build?view=azure-devops-rest-5.1
Invoke-RestMethod -Uri $uri -Method Patch -Headers $headers -Body $updateBody -ContentType "application/json"

It's a PATCH not a PUT, so... maybe I'm not formatting the body right?

I just tried the same approach. With a POST (by mistake), it looks like it is updated (per the payload returned by the "update" call) - but in the next stage, only the initial parameters are available.

With a PATCH, same result as @tillig.

It would be great to be able to pass variables between stages to condition the stages themselves.

Any update on this? I also need a way to pass variables across stages.

It has been three years since this was opened. Is anybody working on this?

Passing variables between agents is supported in YAML pipelines. I would recommend using YAML pipelines for this.

@RoopeshNair i believe between stages is still not supported?

@vijayma for stages support.

@RoopeshNair the documentation you linked demonstrates passing variables between jobs, not stages. I and many others in this thread have tried between stages and it is not supported.

From the Documentation you linked it states:
“Some tasks define output variables, which you can consume in downstream steps and jobs within the same stage.“
“Multi-job output variables only work for jobs in the same stage.”

We need output variable support between stages.

This work has just been completed, and it will roll out within the next few weeks. Thanks. Please look out for it in the release notes.

@vijayma will we be able to use those values in other stages conditions?

@vijayma will we be able to use those values in other stages conditions?

### From the future release notes:
Output variables may now be used across stages in a YAML-based pipeline. This helps you pass useful information, such as a go/no-go decision or the ID of a generated output, from one stage to the next. The result (status) of a previous stage and its jobs is also available.

Output variables are still produced by steps inside of jobs. Instead of referring to dependencies.jobName.outputs['stepName.variableName'], stages refer to stageDependencies.stageName.jobName.outputs['stepName.variableName']. Note: by default, each stage in a pipeline depends on the one just before it in the YAML file. Therefore, each stage can use output variables from the prior stage. You can alter the dependency graph, which will also alter which output variables are available. For instance, if stage 3 needs a variable from stage 1, it will need to declare an explicit dependency on stage 1.

The above release notes will go out with our next Sprint deployment. It is not yet there. You can use the output variables in the stage conditions.
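Based on the syntax described in the release notes above, the cross-stage flow should look roughly like this once it rolls out (stage, job, and variable names are illustrative):

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - bash: echo "##vso[task.setvariable variable=shouldDeploy;isoutput=true]true"
      name: gate

- stage: Deploy
  dependsOn: Build
  # stage conditions use dependencies.<stage>.outputs['<job>.<step>.<var>']
  condition: and(succeeded(), eq(dependencies.Build.outputs['BuildJob.gate.shouldDeploy'], 'true'))
  jobs:
  - job: DeployJob
    variables:
      # jobs in a later stage use the stageDependencies form instead
      shouldDeploy: $[ stageDependencies.Build.BuildJob.outputs['gate.shouldDeploy'] ]
    steps:
    - bash: echo "$(shouldDeploy)"
```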

@vijayma How will this work when rerunning specific stages that rely on a previous stage’s variable(s)?

@zhilbug, that is no different from how any other dependency works (e.g., artifacts). We have a snapshot of all the artifacts and variables from the previous execution and will be using those.

@vijayma oh man, this is a game changer! I think this makes pipelines Turing complete

> This work has just been completed, and it will roll out within the next few weeks. Thanks. Please look out for it in the release notes.

@vijayma Great news, thanks! This will make things a lot easier. Could you be more specific about which work item this improvement belongs to on the release notes page?

> This work has just been completed, and it will roll out within the next few weeks. Thanks. Please look out for it in the release notes.

@corradin will there also be web UI support or YAML pipeline only?

> This work has just been completed, and it will roll out within the next few weeks. Thanks. Please look out for it in the release notes.

> @corradin will there also be web UI support or YAML pipeline only?

@namtab00 I believe you want to redirect your question to @vijayma

@corradin The concept of multiple stages and passing variables across is relevant only for YAML pipelines and for classic release pipelines. It is coming soon to YAML pipelines. It is not relevant for classic build pipelines defined in the UI since you cannot define multiple stages in those pipelines.

> @corradin The concept of multiple stages and passing variables across is relevant only for YAML pipelines and for classic release pipelines. It is coming soon to YAML pipelines. It is not relevant for classic build pipelines defined in the UI since you cannot define multiple stages in those pipelines.

@vijayma thank you for a comprehensive response.

Really curious to see the UI for cross-stage vars in release pipelines.

The workaround I've described above, using Release variables written to via API, was admittedly really clunky and error prone.

@vijayma Can you point to the documentation for this area? And how do we refer to it in classic release pipelines?


> The workaround I've described above, using Release variables written to via API, was admittedly really clunky and error prone.

You don't need to use the REST work around. You could also write your vars to a json file as a build artifact and consume that same artifact in a subsequent stage.
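A rough sketch of that artifact-based workaround (file, stage, and variable names are illustrative, and it assumes jq is available on the agent):

```yaml
stages:
- stage: Build
  jobs:
  - job: Produce
    steps:
    - bash: echo '{"myVar": "someValue"}' > "$(Build.ArtifactStagingDirectory)/vars.json"
    - publish: $(Build.ArtifactStagingDirectory)/vars.json
      artifact: vars

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: Consume
    steps:
    - download: current
      artifact: vars
    - bash: |
        # re-hydrate the value as a job-scoped variable
        myVar=$(jq -r '.myVar' "$(Pipeline.Workspace)/vars/vars.json")
        echo "##vso[task.setvariable variable=myVar]$myVar"
    - bash: echo "$(myVar)"
```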

@RoopeshNair or @shashban can you comment on stage outputs in classic release pipelines? I am not too sure about what is possible there.

Hi everyone! Is it possible to use stageDependencies in stage conditions?

I'm trying the following:

```yaml
- stage: deploy_to_staging
  displayName: Deploy to staging
  dependsOn: testing
  condition: eq(stageDependencies.testing.resolve_variables.outputs['variables.NEED_DEPLOY_TO_STAGING'], 'True')
  jobs:
    - job: build_for_staging
```

and it doesn't work, but at the job level it does:

```yaml
- stage: deploy_to_staging
  displayName: Deploy to staging
  dependsOn: testing
  jobs:
    - job: build_for_staging
      condition: eq(stageDependencies.testing.resolve_variables.outputs['variables.NEED_DEPLOY_TO_STAGING'], 'True')
```

and that's no good for me, because it means I can't skip a whole stage based on a previous stage's job output...

@dhammma, I think the latest update is going to help us:

https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-168-update#azure-pipelines-1

@stealthcold I hoped the last update would help me

(screenshot of the YAML validation error omitted)

but it seems I can use stageDependencies.stageName.jobName.outputs['stepName.variableName'] only for jobs, not stages

What I'm trying to do:

stageA -> jobA defines some variables
stageB and stageC should depend on those variables

All I've managed so far is to use stageDependencies.stageA.jobA in the job conditions of stageB and stageC. But I can't use stageDependencies.stageA.jobA.outputs['stepName.variableName'] in the stage conditions of stageB and stageC themselves:

An error occurred while loading the YAML build pipeline. Unrecognized value: 'stageDependencies'. Located at position 4 within expression: eq(stageDependencies.testing.resolve_variables.outputs['variables.NEED_DEPLOY_TO_TESTING'], 'True'). For more help, refer to https://go.microsoft.com/fwlink/?linkid=842996

And that's very weird to me. If I can use stageDependencies in job-level conditions, why can't I use them in stage-level conditions...

@sit-md I haven't tried it yet. I'll try it later, but it doesn't seem to be what I'm after

@dhammma As far as I know this is currently not possible. I had the same issue with environment-specific variables. I resolved it by putting the needed values into variable groups that are linked into the pipeline depending on a condition. Of course if you need to compute something during the run this doesn't help you.
What could work is defining a variable on the pipeline level and then setting the value in your script. Never tried it though.

@sit-md I thought @Falven confirmed we could use them for stage conditions in this response https://github.com/microsoft/azure-pipelines-tasks/issues/4743#issuecomment-614721900 If not, oh boy, what a huge miss! This is the most important part

@drdamour That is true and on the job level in another stage that worked for me. It didn't work for me on a stage level itself though. Of course there's a good chance that I missed something but those were my findings when trying out the new feature.

Out of curiosity, should output variables be usable in a deployment environment name?

I've got a multi-stage build with a build, test and deploy_dev stage... I can use variables like Build.SourceBranchName in the environment name but not output variables.

I'm running agent version 2.168.2.

None of the commented out environment fields works. Is it something simple I'm missing?

Here is the truncated/abbreviated snippet (full repro yaml here):

```yaml
stages:
- stage: build
  jobs:
  - job: build
    steps:
    - pwsh: Write-Host "##vso[task.setvariable variable=InstanceName;isOutput=true]Example"
      name: init
      displayName: Initialize

- stage: deploy
  dependsOn: [build, test]
  displayName: Deploy to Development
  jobs:
  - deployment: deploy_dev
    pool:
      name: development
    variables:
      InstanceName: $[stageDependencies.build.build.outputs['init.InstanceName']]
    #environment: "development-$[stageDependencies.build.build.outputs['init.InstanceName']]" - NOPE
    #environment: "development-$(stageDependencies.build.build.outputs['init.InstanceName'])" - NOPE
    #environment: "development-$(InstanceName)" - NOPE
    environment: "development-$(Build.SourceBranchName)" # THIS WORKS BUT ISN'T THE VALUE I WANT
```

I am currently running a hosted agent 2.169.0 and I was able to access the output variables from inside a "deployment" job as you have indicated above.

However, what led me here is that I am not able to access the output of a deployment job from any other job, only from steps inside the deployment job itself. It doesn't seem to matter whether I use PowerShell to write an output value or the deploymentOutputs argument of an AzureResourceManagerTemplateDeployment task.

Has anyone had any luck accessing the output from a deployment job from another stage or job?

> @vijayma will we be able to use those values in other stages conditions?

> ### From the future release notes:
> Output variables may now be used across stages in a YAML-based pipeline. This helps you pass useful information, such as a go/no-go decision or the ID of a generated output, from one stage to the next. The result (status) of a previous stage and its jobs is also available.
>
> Output variables are still produced by steps inside of jobs. Instead of referring to dependencies.jobName.outputs['stepName.variableName'], stages refer to stageDependencies.stageName.jobName.outputs['stepName.variableName']. Note: by default, each stage in a pipeline depends on the one just before it in the YAML file. Therefore, each stage can use output variables from the prior stage. You can alter the dependency graph, which will also alter which output variables are available. For instance, if stage 3 needs a variable from stage 1, it will need to declare an explicit dependency on stage 1.

Will this be available in classic release pipelines?

I'm looking to feed the outputs of an agent phase/job into another agent phase/job within the same environment/stage so that I can have a dynamically set multiplier variable. Right now there is no way to programmatically set a multiplier for multi-configuration deployments.

We maintain several "evergreen" configurations of our software for testing purposes, each deployed as a multiplied job. This means that adding or retiring an example config requires changing a release variable.

The latest Azure documentation says that 'stageDependencies' variables are allowed inside stage conditions. But in reality, they are not.

See links below and example:

Extract from the official MS documentation (29/May/2020):

```yaml
stages:
- stage: A
  jobs:
  - job: A1
    steps:
     - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
       name: printvar
- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
```

What I'd really like to know is whether this is an error in the documentation, or whether the documentation was updated too early and the feature is still coming.

I really need this feature.


Update: (09/06/2020)

I got a VSTS issue tracker reply and their engineers stated that this feature is already rolled out for some customers and will be fully rolled out in the next few days. I'll provide a further update soon when the feature is rolled out.

https://developercommunity.visualstudio.com/content/problem/1056647/documentation-on-stageddependencies-is-not-working.html?childToView=1070931#comment-1070931


@jonstelly I asked the same question on Reddit, and it looks like using runtime variables in an environment name is not supported. The environment and other resources are authorized at the beginning of a pipeline, before jobs run, so they only have access to variables set before runtime:

https://old.reddit.com/r/azuredevops/comments/gvo3ml/variables_across_stages/fsrbh84/

Environment names are also mentioned in the last paragraph before "Request an agent" here: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops#process-the-pipeline

thanks to @stephenmoloney for opening another avenue of communication since we weren't getting answers here... so it turns out you have to use DIFFERENT SYNTAX for stage conditions vs job conditions... there's no syntax that works for both at the time of writing:

stage condition use: `[stageDependencies|dependencies].<stage>.outputs['<job>.<step>.<var>']`
job condition use: `stageDependencies.<stage>.<job>.outputs['<step>.<var>']`

yup, you can interchange dependencies and stageDependencies in stage conditions, but not in job conditions..

the support people's QA for this only tested one of the conditions (ne), and since an unresolved syntax defaults to null, NE always came back true. You have to test all four cases, eq->true, eq->false, ne->true, ne->false, to be sure you're covering your bases. what a pain!

use this to test job conditions; only jobs with names ending in True should run... but only one syntax passes

```yaml
pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - job: A1
    steps:
     - script: echo "##vso[task.setvariable variable=valtrue;isOutput=true]true"
       name: stagevar

- stage: stageDependenciesWithDocSyntaxTest
  dependsOn: A
  jobs:
  - job: NoCondition
    steps:
    - script: echo should run

  - job: neFalse
    condition: ne(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
     - script: echo should skip

  - job: neTrue
    condition: ne(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
     - script: echo should run

  - job: eqTrue
    condition: eq(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
     - script: echo should run

  - job: eqFalse
    condition: eq(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
     - script: echo should skip

- stage: dependenciesWithDocSyntaxTest
  dependsOn: A
  jobs:
  - job: NoCondition
    steps:
    - script: echo should run

  - job: neFalse
    condition: ne(dependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
     - script: echo should skip

  - job: neTrue
    condition: ne(dependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
     - script: echo should run

  - job: eqTrue
    condition: eq(dependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
     - script: echo should run

  - job: eqFalse
    condition: eq(dependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
     - script: echo should skip

- stage: stageDependenciesWithTicketSyntaxTest
  dependsOn: A
  jobs:
  - job: NoCondition
    steps:
    - script: echo should run

  - job: neFalse
    condition: ne(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'true')
    steps:
     - script: echo should skip

  - job: neTrue
    condition: ne(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'false')
    steps:
     - script: echo should run

  - job: eqTrue
    condition: eq(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'true')
    steps:
     - script: echo should run

  - job: eqFalse
    condition: eq(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'false')
    steps:
     - script: echo should skip
```

and here's the stage-conditions proof; again, only stages whose names say "run" should run, but two of the syntaxes work:

```yaml
pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - job: A1
    steps:
     - script: echo "##vso[task.setvariable variable=valno;isOutput=true]no"
       name: printvar


- stage: depNETrue
  condition: and(succeeded(), ne(dependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run


- stage: depEQFalse
  condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip


- stage: depNEFalse
  condition: and(succeeded(), ne(dependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip


- stage: depEQTrue
  condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run


## Below here try to use stage stageDependencies

- stage: stgdepNETrue
  condition: and(succeeded(), ne(stageDependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run


- stage: stgdepEQFalse
  condition: and(succeeded(), eq(stageDependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip


- stage: stgdepNEFalse
  condition: and(succeeded(), ne(stageDependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip


- stage: stgdepEQTrue
  condition: and(succeeded(), eq(stageDependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

#below here they use the documented syntax from https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#dependencies
#but it does not work at all...it's always skipped

- stage: ShouldRunDependenciesEQDocumentedSyntax
  condition: and(succeeded(), eq(dependencies.A.A1.outputs['printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

- stage: ShouldSkipDependenciesEQDocumentedSyntax
  condition: and(succeeded(), eq(dependencies.A.A1.outputs['printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip

- stage: ShouldRunStgdepEQDocumentedSyntax
  condition: and(succeeded(), eq(stageDependencies.A.A1.outputs['printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

- stage: ShouldSkipStgdepEQDocumentedSyntax
  condition: and(succeeded(), eq(stageDependencies.A.A1.outputs['printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip
```

so yay there's a way, but booooo why does it have to be so hard to use?

> I'm looking to feed the outputs of an agent phase/job into another agent phase/job within the same environment/stage so that I can have a dynamically set multiplier variable. Right now there is _no_ way to programmatically set a multiplier for multi-configuration deployments.
>
> We maintain several "evergreen" configurations of our software for testing purposes, each deployed as a multiplied job. This means that adding or retiring an example config requires changing a release variable.

I have the exact same problem. I cannot access variables from a deployment job. Not only across stages; accessing such a variable in a dependent job also seems not to work.

> (quoting @drdamour's syntax write-up above)

You saved me so much time with the above post. Thank you so much!

Update 24/06/2020

It's now possible to use stage outputs in conditions, but quite a bit of exactness is required, and in some cases it's not 100% intuitive.

@drdamour addressed this really well in this post and I'd highly recommend reading it. The post painstakingly goes through what works and what doesn't.

The use of conditions could do with being a bit more consistent and a bit more user friendly. The overall approach is more convoluted than it needs to be. Probably makes the documentation hard to write too!

thanks very much @drdamour. This was driving me nuts; I was about to give up and then found your post! Cheers

> I'm looking to feed the outputs of an agent phase/job into another agent phase/job within the same environment/stage so that I can have a dynamically set multiplier variable. Right now there is _no_ way to programmatically set a multiplier for multi-configuration deployments.
> We maintain several "evergreen" configurations of our software for testing purposes, each deployed as a multiplied job. This means that adding or retiring an example config requires changing a release variable.

> I have the exact same problem. I cannot access variables from a deployment job. Not only across stages but also accessing a variable in a dependent job seems not to work.

I agree, this does not seem to work with Deployment Jobs. Does anyone know a fix for this?

For deployments there is a different syntax, see https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#support-for-output-variables.

I was, however, not able to get this running in a stage condition. It does work when using stageDependencies in a job in the next stage. As a workaround I used a bash job: run the job based on the inverse of the condition you want, and then do an "exit 1". Not the best solution, and probably not applicable for everybody.
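A rough sketch of that guard-job workaround (all names here are illustrative): the guard job runs only when we should NOT deploy, and its deliberate failure fails the gate stage, which in turn blocks the downstream stage.

```yaml
stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=shouldDeploy;isOutput=true]false"
      name: setVar

- stage: Gate
  dependsOn: A
  jobs:
  - job: Guard
    # inverse of the condition we actually want: run (and fail) when NOT deploying
    condition: ne(stageDependencies.A.A1.outputs['setVar.shouldDeploy'], 'true')
    steps:
    - bash: exit 1   # fail the stage on purpose

- stage: Deploy
  dependsOn: Gate
  jobs:
  - job: DeployJob
    steps:
    - script: echo deploying
```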

@sit-md, I tried adding conditionals around one of my stages, like this:

```yaml
${{ if or(eq(parameters.SignTypeSelection, 'Real'), eq(variables['Build.Reason'], 'Schedule')) }}:
  - stage: symbol_archive
```

But I got this error:

A template expression is not allowed in this context.

Do you know of the right syntax to use to turn some stages entirely off based on a condition like what I have above?

thanks to @stephenmoloney for opening another avenue of communication since we werent' getting answers here....so it turns out you have to use DIFFERENT SYNTAX for stage conditions vs job conditions..there's no syntax that works for both at time of writing:

stage condition use: [stageDependencies|dependencies].<stage>.outputs['<job>.<step>.<var>']
job condition use: stageDependencies.<stage>.<job>.outputs['<step>.<var>']

yup you can interchange dependencies and stageDependencies in stage condition, but not in job conditions..

the support people QA for this only tested 1 of the conditions, ne and since the syntax defaults to null NE always came back true. You have to test the 4 conditions eq->true eq->false ne->true ne->false to be sure you're covering your bases. what a pain!

use this to test job conditions, only job with name ending in true should run...but only one syntax passes

pool:
  vmImage: 'ubuntu-latest'

stages:
```yaml
- stage: A
  jobs:
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=valtrue;isOutput=true]true"
      name: stagevar

- stage: stageDependenciesWithDocSyntaxTest
  dependsOn: A
  jobs:
  - job: NoCondition
    steps:
    - script: echo should run

  - job: neFalse
    condition: ne(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
    - script: echo should skip

  - job: neTrue
    condition: ne(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
    - script: echo should run

  - job: eqTrue
    condition: eq(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
    - script: echo should run

  - job: eqFalse
    condition: eq(stageDependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
    - script: echo should skip

- stage: dependenciesWithDocSyntaxTest
  dependsOn: A
  jobs:
  - job: NoCondition
    steps:
    - script: echo should run

  - job: neFalse
    condition: ne(dependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
    - script: echo should skip

  - job: neTrue
    condition: ne(dependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
    - script: echo should run

  - job: eqTrue
    condition: eq(dependencies.A.A1.outputs['stagevar.valtrue'], 'true')
    steps:
    - script: echo should run

  - job: eqFalse
    condition: eq(dependencies.A.A1.outputs['stagevar.valtrue'], 'false')
    steps:
    - script: echo should skip

- stage: stageDependenciesWithTicketSyntaxTest
  dependsOn: A
  jobs:
  - job: NoCondition
    steps:
    - script: echo should run

  - job: neFalse
    condition: ne(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'true')
    steps:
    - script: echo should skip

  - job: neTrue
    condition: ne(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'false')
    steps:
    - script: echo should run

  - job: eqTrue
    condition: eq(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'true')
    steps:
    - script: echo should run

  - job: eqFalse
    condition: eq(stageDependencies.A.outputs['A1.stagevar.valtrue'], 'false')
    steps:
    - script: echo should skip
```

And here's the proof for stage-level conditions. Again, only the stages with "run" in the name should run, but only 2 of the 3 syntaxes work:

```yaml
pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=valno;isOutput=true]no"
      name: printvar

- stage: depNETrue
  condition: and(succeeded(), ne(dependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

- stage: depEQFalse
  condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip

- stage: depNEFalse
  condition: and(succeeded(), ne(dependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip

- stage: depEQTrue
  condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

# Below here, try to use stageDependencies at stage level

- stage: stgdepNETrue
  condition: and(succeeded(), ne(stageDependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

- stage: stgdepEQFalse
  condition: and(succeeded(), eq(stageDependencies.A.outputs['A1.printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip

- stage: stgdepNEFalse
  condition: and(succeeded(), ne(stageDependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip

- stage: stgdepEQTrue
  condition: and(succeeded(), eq(stageDependencies.A.outputs['A1.printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

# Below here, the stages use the documented syntax from
# https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#dependencies
# but it does not work at all... the stages are always skipped

- stage: ShouldRunDependenciesEQDocumentedSyntax
  condition: and(succeeded(), eq(dependencies.A.A1.outputs['printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

- stage: ShouldSkipDependenciesEQDocumentedSyntax
  condition: and(succeeded(), eq(dependencies.A.A1.outputs['printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip

- stage: ShouldRunStgdepEQDocumentedSyntax
  condition: and(succeeded(), eq(stageDependencies.A.A1.outputs['printvar.valno'], 'no'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should run

- stage: ShouldSkipStgdepEQDocumentedSyntax
  condition: and(succeeded(), eq(stageDependencies.A.A1.outputs['printvar.valno'], 'yes'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo should skip
```

So yay, there's a way, but booooo, why does it have to be so hard to use?

After testing the whole thing, I have found an error in the initial script which makes us think we are validating the right value. The line `- script: echo "##vso[task.setvariable variable=valno;isOutput=true]no"` does NOT save the value `no`, but instead `no"` (the double quote really matters here). So when the stage checks whether it should run, comparing against `no` won't work.

Replacing the line with the following will make the check work:

```yaml
- powershell: Write-Host "##vso[task.setvariable variable=valno;isOutput=true]no"
```
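Whether the trailing quote ends up in the value depends on which shell runs the step: a POSIX shell consumes the surrounding double quotes before `echo` sees them, while cmd.exe on a Windows agent echoes them literally (which would explain the `no"` value described above — this is an assumption about the failing environment, not something confirmed in this thread). A minimal check of the POSIX-shell side:

```shell
# In bash/sh, the shell strips the surrounding double quotes, so the
# agent sees a clean logging command and the parsed value is: no
echo "##vso[task.setvariable variable=valno;isOutput=true]no"
# prints: ##vso[task.setvariable variable=valno;isOutput=true]no

# On cmd.exe, by contrast, the quotes are printed literally, so the
# agent would parse the value as: no"
```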

Hope it helps.

I would like to mention that after 2 days of digging through documentation and threads like this, I finally got this to work.
It is still very convoluted, and badly documented.
One of the fun parts was finding out that, at stage level, you can't use a variable mapped from a previous stage's output variable inside a condition, and that the syntax for consuming an output variable from a previous stage is _different_ for stage-level variables and stage-level conditions, which really just boggles the mind.
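To make the difference concrete, here is a sketch of the two syntaxes side by side, reusing the stage/job/step names from the examples above (the variable name `myVal` is made up for illustration, and the paths are based on what worked in the runs described in this thread):

```yaml
- stage: B
  dependsOn: A
  # Stage-level condition: the job name goes inside the brackets.
  condition: eq(dependencies.A.outputs['A1.stagevar.valtrue'], 'true')
  jobs:
  - job: B1
    variables:
      # Job-level variable mapping: dotted path with the job as its own
      # segment, inside a runtime expression.
      myVal: $[ stageDependencies.A.A1.outputs['stagevar.valtrue'] ]
    steps:
    - script: echo $(myVal)
```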

@cranphin i was able to make a stage conditional based on output of another stage's job

It's not that it's not possible, it's that it's badly implemented (unclear) and documented :)

I don't want to have to structure the pipeline so that jobs which depend on the same variable wait on each other. I don't want "job B" to depend on (and thus wait for) "job A" in the same pipeline to complete.

I want to be able to have 2 different jobs in the same pipeline (and I understand these would be running concurrently on 2 different agents) where both can read or set the value of a shared variable to make a decision in either job.

Hi everyone, this seems to be related to Azure DevOps itself, but this repo is for pipeline tasks - could you please create a ticket on https://developercommunity.visualstudio.com/spaces/21/index.html for documentation updates/any further improvements, to get the right eyes on it? Let me close this one since it's not related to pipeline tasks.
