Test-infra: jobs: remove jobs that have been continuously failing for over N days

Created on 27 Jul 2018 · 25 comments · Source: kubernetes/test-infra

I'm going to suggest N=120 for now. I'd like to do this one more time as a human to learn what the right detection methods are for straight-up-failing vs. seriously flaky, how/when to reach people in case there are some brave souls out there who want to save these jobs, etc.

Next step would be to automate at least some portion of this.

ref: https://github.com/kubernetes/test-infra/issues/2528#issuecomment-392936145
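
For illustration only, the straight-up-failing vs. seriously-flaky distinction the issue is after could be sketched in a few lines of shell. This is a toy sketch, not anything from the issue or from test-infra: it assumes each job's recent runs have already been collapsed into a string of P/F results covering the window, and the `classify` helper and that encoding are invented here.

```bash
#!/usr/bin/env bash
# Toy classifier for straight-up-failing vs. seriously-flaky jobs.
# Assumes a per-job string of recent results (P = passed run, F = failed run)
# has already been gathered from somewhere like testgrid history; the string
# encoding and classify() are purely illustrative.
classify() {
  local results="$1"
  if [[ "${results}" != *P* ]]; then
    echo "continuously failing"   # not a single pass in the window -> removal candidate
  elif [[ "${results}" == *F* ]]; then
    echo "seriously flaky"        # mix of passes and failures -> worth saving/fixing
  else
    echo "healthy"
  fi
}

classify "FFFFFFFFFF"   # continuously failing
classify "FFPFFFPFPF"   # seriously flaky
classify "PPPPPPPPPP"   # healthy
```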

area/jobs kind/cleanup lifecycle/frozen priority/important-soon sig/testing

Most helpful comment

@BenTheElder: dog image

In response to this:

This is an excellent issue. 👌
/woof

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

All 25 comments

/milestone 1.12
/assign
/area jobs

This is an excellent issue. 👌
/woof

@BenTheElder: dog image

In response to this:

This is an excellent issue. 👌
/woof

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Does "remove" mean delete the job, or just disable it (i.e. leave it in the prow config, but disabled)?

I'd say remove it from everything and rely on git history, or create a separate dir for dead jobs. Trimming the config is also desirable: besides saving on runtime resources, on testgrid at least it doesn't make sense to keep showing jobs we aren't running.

Horrible bash + jq to get me the list and links to testgrid
https://gist.github.com/spiffxp/1e3ff608a92e8bfc0091a0b2918a11c6
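
The gist is the source of truth; purely as a sketch of the same idea, something like the snippet below lists the currently-FAILING tabs on a dashboard with testgrid links. The `https://testgrid.k8s.io/<dashboard>/summary` URL and the `overall_status` field name are assumptions about testgrid's JSON and may not match what the gist actually queries.

```bash
#!/usr/bin/env bash
# Minimal sketch: list testgrid tabs on a dashboard whose summary reports
# FAILING, with links. The summary endpoint and field names are assumptions;
# deciding whether a tab has been failing for the full N days still needs a
# look at its history (or a human, as the issue suggests).
set -o errexit -o nounset -o pipefail

DASHBOARD="${1:-sig-release-master-blocking}"

curl -sSL "https://testgrid.k8s.io/${DASHBOARD}/summary" \
  | jq -r --arg dash "${DASHBOARD}" '
      to_entries[]
      | select(.value.overall_status == "FAILING")
      | "\(.key)\thttps://testgrid.k8s.io/\($dash)#\(.key)"'
```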

/milestone v1.13
/priority important-soon
/sig testing
~/kind purge~
/kind cleanup

/assign
Deleting things is fun, right? Let's get this in 1.13 :^) 🔥

/milestone v1.14

For reals this time. Document and then enforce rules.

The grinch is here!

/milestone v1.15

/milestone v1.16
Many of the cos jobs that come from generate_tests may be removable, although there is some question from @tpepper over whether those are legitimate failures.

https://github.com/kubernetes/test-infra/issues/13353#issuecomment-509379536

Following up: if these jobs can be green, they provide useful signal. Looking in particular at the testgrid cos set of k8sbeta (aka 1.15) dashboards, in the hope that we might make things greener ahead of merging https://github.com/kubernetes/kubernetes/pull/80134, I see:

  • cos-cos1-k8sbeta (this is 1.15)
    • gce-cos1-k8sbeta-default
      • storage issue likely fixed by cherry pick of https://github.com/kubernetes/kubernetes/pull/78459
      • may still have an additional storage issue
      • (same as gce-cos2-k8sbeta-default below)
    • cos1-k8sbeta-serial
      • misc. timeouts/flakes
      • (maybe same as cos2-k8sbeta-serial below?)
    • gke-cos1-k8sbeta-alphafeatures
    • gke-cos1-k8sbeta-gpu
    • gke-cos1-k8sbeta-default
    • gke-cos1-k8sbeta-reboot
    • gke-cos1-k8sbeta-autoscaling
    • gke-cos1-k8sbeta-updown
    • gke-cos1-k8sbeta-serial
    • gke-cos1-k8sbeta-slow
      • ^^^ all showing GKE kube-apiserver endpoint unhealthy
    • gce-cos1-k8sbeta-slow
      • flex volume timeouts
      • (same as gce-cos2-k8sbeta-slow below)
    • cos1-k8sbeta-gkespec
      • before-suite system validation fails misc. kernel CONFIG_ checks
      • (same as cos2-k8sbeta-gkespec below)
    • gce-cos1-k8sbeta-serial
      • probably also https://github.com/kubernetes/kubernetes/pull/78459
    • gke-cos1-k8sbeta-flaky
      • named "flaky" but 100% fails to bring up a cluster, with less clear logs than the other cases
  • cos-cos2-k8sbeta (this is 1.15)
    • gke-cos2-k8sbeta-slow
    • gke-cos2-k8sbeta-autoscaling
    • gke-cos2-k8sbeta-serial
    • gke-cos2-k8sbeta-reboot
    • gke-cos2-k8sbeta-updown
    • gke-cos2-k8sbeta-default
    • gke-cos2-k8sbeta-gpu
    • gke-cos2-k8sbeta-flaky
    • gke-cos2-k8sbeta-alphafeatures
      • ^^^ all showing GCE error: "A Shielded VM Config cannot be set when using a source image that is not UEFI-compatible."
    • gce-cos2-k8sbeta-serial
      • "FailedMount: MountVolume.SetUp" timeout: multiple cloud-provider-specific issues in this area of code opened this spring
    • gce-cos2-k8sbeta-default
      • storage issue likely fixed by cherry pick of https://github.com/kubernetes/kubernetes/pull/78459
      • may still have an additional storage issue
      • (same as gce-cos1-k8sbeta-default above)
    • cos2-k8sbeta-gkespec
      • before-suite system validation fails misc. kernel CONFIG_ checks
      • (same as cos1-k8sbeta-gkespec above)
    • cos2-k8sbeta-serial
      • misc. storage timeouts/flakes
      • (maybe same as cos1-k8sbeta-serial above?)
    • gce-cos2-k8sbeta-slow
      • flex volume timeouts
      • (same as gce-cos1-k8sbeta-slow above)

There's a relatively limited set of symptoms there...

I confirmed the cosN jobs aren't really relevant to the community (they are more about testing different versions of an underlying image that is GCP-specific, rather than k8s itself), and there are equivalents running inside of Google, so I think we should remove them: https://github.com/kubernetes/test-infra/pull/13913

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

this probably should still be open?

/reopen
/remove-lifecycle rotten
/lifecycle frozen

@spiffxp: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten
/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Well, I managed to remove an N=923 job https://github.com/kubernetes/test-infra/pull/17156

๐Ÿ‘๐Ÿ‘


@spiffxp: Closing this issue.

In response to this:

/close
closing this in favor of https://github.com/kubernetes/test-infra/issues/18600

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
