Users can decide to either run or skip tasks that depend on a task that was skipped due to a condition check failure. Currently, tasks that depend on a skipped task are always skipped.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: two-tasks-one-condition
spec:
  tasks:
    - name: conditional-task
      taskRef:
        name: task1
      conditions:
        - conditionRef: always-fail
    - name: run-if-1-or-1-skipped
      taskRef:
        name: task2
      runAfter:
        - conditional-task
```
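For reference, the `always-fail` condition referenced above could be defined with the v1alpha1 `Condition` CRD along these lines (the image and command here are illustrative sketches, not from the original issue):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: always-fail
spec:
  check:
    # A condition check is an ordinary container step; a non-zero
    # exit code means the condition fails and the guarded task is skipped.
    image: alpine
    command: ["sh", "-c", "exit 1"]
```

With this condition attached, `conditional-task` is always skipped, which is what makes `run-if-1-or-1-skipped` interesting: today it is skipped too, purely because of the `runAfter` edge.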
One thought after looking at #1137: if a Task is getting a resource from a Task that has a failing condition, I'm not sure it makes any sense to run that dependent Task: e.g. if a Task is building an image as an output, and another task wants to get the built image (digest) from that Task, it wouldn't make sense to run it anyway. runAfter is fuzzier - if it's just a matter of ordering, it feels like the user could potentially choose whether the tasks "after" should be skipped or not.

From @afrittoli in #1264:
In the example pipeline I used runAfter but I believe the behaviour would be the same if the dependency was introduced using from on a resource.
If a Task is getting a resource from a Task that has a failing condition, I'm not sure it makes any sense to run that dependent Task: e.g. if a Task is building an image as an output, and another task wants to get the built image (digest) from that Task, it wouldn't make sense to run it anyway
I'm not sure we have a good solution for this case specifically. I've been trying to build a pipeline that builds a docker image only if needed - which I encoded in a condition. The build task has the condition attached and the next Task deploys a cloud function that uses that image plus some code on top.
As long as I don't rely on the image digest, and I use a tag e.g. "latest", the second task can run even if the first one did not.
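That use case could be sketched roughly like this (task and condition names here are hypothetical, not from the issue). With today's behaviour, `deploy-function` is skipped whenever the condition on `build-image` fails, even though it only references the image by a fixed tag and does not consume the digest:

```yaml
tasks:
  - name: build-image
    conditions:
      # Hypothetical condition: only rebuild when the image is out of date.
      - conditionRef: image-needs-rebuild
    taskRef:
      name: build-task
  - name: deploy-function
    # No 'from' on an image resource: the deploy task pulls the image
    # by the "latest" tag, so it does not actually need build-image to
    # have run. Today the runAfter edge still causes it to be skipped.
    runAfter:
      - build-image
    taskRef:
      name: deploy-task
```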
/kind feature
The proposal in #1684 has a solution to this using the runOn syntax. So, if a task's condition fails, its state is "skip":
```yaml
# 1. change execution graph, i.e. onSuccess do something, onFailure do something else:
- name: task1
  conditions:
    - conditionRef: "condition-that-sometime-fails"
  taskRef: { name: "my-task" }
- name: runIfTask1Fails
  runAfter: [task1]
  runOn: ["failure"]
- name: runIfTask1Succeeds
  runAfter: [task1]
  runOn: ["success"]
- name: runIfTask1IsSkipped
  runAfter: [task1]
  runOn: ["skip"]
```
What happens, though, in this use case? Is runIfTask2IsSkipped executed when task1 is skipped because its condition fails?
```yaml
- name: task1
  conditions:
    - conditionRef: "condition-that-sometime-fails"
  taskRef: { name: "my-task" }
- name: runIfTask1IsSkipped
  runAfter: [task1]
  runOn: ["skip"]
- name: task2
  runAfter: [task1]
  conditions:
    - conditionRef: "condition-that-sometime-fails"
  taskRef: { name: "my-task2" }
- name: runIfTask2IsSkipped
  runAfter: [task2]
  runOn: ["skip"]
```
What I want to say is that runOn: ["skip"] would cover more cases than an else logic, so we would still need a way to specify that we only want the else behaviour in some cases.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/remove-lifecycle rotten
/assign jerop
@jerop is actually taking care of this via https://github.com/tektoncd/community/blob/master/teps/0007-conditions-beta.md#skipping-1
@bobcatfish: Reopened this issue.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Open PR: https://github.com/tektoncd/pipeline/pull/3176
/remove-lifecycle rotten
@jerop I really could use the feature you implemented in the PR. Is there any ETA for when this will be merged?
If I can help with anything to speed things up, I can always take a look.
My use case:
@Bl4d3s we paused that work for a bit as we discussed some design choices and alternatives in https://github.com/tektoncd/pipeline/issues/3345 -- rebasing the pr now ~and planning to get it in before the v0.18 release~ (revisited syntax in https://github.com/tektoncd/community/pull/258)
I see the PR is already approved, but needs a rebase. Any hope to get it in with the next release?
@Bl4d3s we revisited the skipping syntax in this TEP to clarify that the skipping option is related to WhenExpressions -- thank you for your patience as we work to get it right and get it in soon
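As a rough sketch of the WhenExpressions syntax that TEP-0007 moved skipping towards (the parameter name here is illustrative), a guarded task looks like:

```yaml
tasks:
  - name: build-image
    when:
      # The task runs only when the when expression evaluates to true;
      # otherwise the task is skipped without failing the pipeline.
      - input: "$(params.rebuild)"
        operator: in
        values: ["true"]
    taskRef:
      name: build-task
```

The skipping option discussed in this issue then becomes a question of whether tasks downstream of a task skipped by its when expressions should themselves be skipped.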