TestTaskRunRetry is flaky.
I've run into this several times: for a PR today and for another PR yesterday.
examples:
/kind flake
/assign @thomaschandler
@jlpettersson I wonder if this might be related to the garbage collection of failed pods. I'm unsure whether GKE has a different garbage collection process than a self-hosted k8s setup.
I'm thinking of updating TestTaskRunRetry to wait for pod state changes, at which point it will record the pod information for the failed pod, which can later be asserted once the PipelineRun is complete. Think this is worth a shot?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Send feedback to tektoncd/plumbing.
This is happening again/still: https://tekton-releases.appspot.com/build/tekton-prow/pr-logs/pull/tektoncd_pipeline/3494/pull-tekton-pipeline-integration-tests/1324621677065146370/
TestTaskRunRetry: retry_test.go:119: BUG: Found 5 Pods, want 6
/remove-lifecycle stale