"yaml tests" should only fail if something is actually wrong
All of the runs for #2531 have failed:

And looking at recent runs across PRs, it seems most are failing too:
https://tekton-releases.appspot.com/builds/tekton-prow/pr-logs/directory/pull-tekton-pipeline-integration-tests
Not sure what's going on yet
I can't decipher `1..60 sleep 10` for the life of me:
I added the example pipelinerun-with-parallel-tasks-using-pvc.yaml in #2521, a few days ago.
Things look worse after that, but I don't really understand what is causing it. Maybe the volumes take time and there are some timeouts? I find it hard to see which test is causing trouble.
> I added the example pipelinerun-with-parallel-tasks-using-pvc.yaml in #2521, a few days ago.
> Things look worse after that, but I don't really understand what is causing it. Maybe the volumes take time and there are some timeouts?
Yeah, that's my guess :sweat:
> I can't decipher `1..60 sleep 10` for the life of me:
It's gonna do 60 loops of 10s to check the status of the PipelineRun (or TaskRun), meaning it times out after 10 min.
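For context, here is a minimal sketch of that polling pattern (not the actual CI script; the function names and the `check_status` placeholder are made up for illustration):

```shell
#!/usr/bin/env bash
# Placeholder for the real check, e.g. inspecting a PipelineRun's
# succeeded condition with kubectl.
check_status() { return 0; }

# Poll up to $1 times, sleeping $2 seconds between attempts.
# Defaults mirror the CI loop: 60 attempts * 10s = 10 min timeout.
wait_until_done() {
  local attempts=${1:-60} interval=${2:-10}
  for i in $(seq 1 "$attempts"); do
    check_status && return 0
    sleep "$interval"
  done
  echo "timed out after $((attempts * interval))s" >&2
  return 1
}
```

So a run that never reaches the expected status simply burns through all 60 iterations before the test is marked as failed.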
/kind bug
/area testing
There are a few ways to fix this:
I've bumped the timeout in #2534 (90 loops instead of 60). It should fix the CI while #2541 gets worked on.
@vdemeester did it work better?
If it is a _regional cluster_ and the PVCs are _zonal_, the two parallel tasks may be executing in different zones, and the third task that mounts both PVCs is deadlocked since it can't mount two _zonal_ PVCs in a pod. I propose that I remove the example, since it depends so much on what kind of storage and cluster is used. The intention was to document PVC _access modes_, but it is not strictly necessary to have an example.
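For reference, one way around that zonal deadlock on GKE would be a _regional_ persistent disk, which is replicated across two zones so the fan-in pod can mount it from either. A hypothetical sketch, assuming the GKE PD CSI driver (the class name and parameters here are illustrative, not from this thread):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-pd            # assumed name for illustration
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd   # replicate the disk across zones
# Delay binding until a pod is scheduled, so the volume lands in a
# zone the pod can actually reach.
volumeBindingMode: WaitForFirstConsumer
```

This is exactly the kind of platform-specific detail that makes such an example hard to ship in a generic `examples/` folder, which supports the argument for removing it.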
> @vdemeester did it work better?
Not entirely sure. There are fewer failures, but I still see some.
> If it is a _regional cluster_ and the PVCs are _zonal_, the two parallel tasks may be executing in different zones, and the third task that mounts both PVCs is deadlocked since it can't mount two _zonal_ PVCs in a pod. I propose that I remove the example, since it depends so much on what kind of storage and cluster is used.
Yeah, having it in a no-ci folder would work
It does appear this might have been related. Just spotted this in one of our release clusters:

And drilling down it does appear to be related to volume / node affinity.
@sbwsg thanks. It was exactly that task I was worried about. But that example does not provide much value, and it needs to be adapted to any environment. So I think it is best to remove it.
But a similar problem may occur for other pipelines that use the same PVC in more than one task. We could move those to the no-ci folder as @vdemeester suggested.
I apologize for the flaky tests the last few days.
> But a similar problem may occur for other pipelines that use the same PVC in more than one task.
Yeah this might be a good area we can add docs around at some point. I wonder how much of it is platform specific and how much Tekton can describe in a cross-platform way.
> I apologize for the flaky tests the last few days.
No worries, thanks for making the PR to resolve, and all the contributions around Workspaces! We were bound to hit this issue eventually.
I am curious if we can use some kind of pod affinity to get tasks co-located on the same node.
Possibly co-locate all pods belonging to a single PipelineRun, so they can safely use the same PVC as a workspace and still execute in parallel (this is essentially what any single-node CI/CD system does).
We would still be a distributed system, where different PipelineRuns are possibly scheduled to different nodes. Using different PVCs is "easier" for fan-out, but not for fan-in (e.g. git-clone and then parallel tasks using the same files).
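A hypothetical sketch of how plain Kubernetes inter-pod affinity could express that co-location (this is not a Tekton feature being proposed here; the label value is made up, though Tekton does label its pods with `tekton.dev/pipelineRun`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    tekton.dev/pipelineRun: my-run   # one value per PipelineRun
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              tekton.dev/pipelineRun: my-run
          # hostname topology = "schedule onto the same node"
          topologyKey: kubernetes.io/hostname
  containers:
    - name: step
      image: alpine
```

With every pod of a run pinned to one node, a single ReadWriteOnce PVC can back the shared workspace even for parallel tasks, at the cost of losing cross-node scheduling within that run.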
I don't think we've seen any evidence of this since @jlpettersson 's fixes, closing!