Manually deleted ~100; that should be good for a few days, but we need to fix the leak.
assigning a few AWS experts
/assign @zmerlynn @justinsb @chrislovecnm
So I think this happens when a PR is revised quickly:
https://k8s-gubernator.appspot.com/pr/62308
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/62308/pull-kubernetes-e2e-kops-aws/83531/
https://k8s-gubernator.appspot.com/pr/62496
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/62496/pull-kubernetes-e2e-kops-aws/83735/
Not sure exactly what is happening to the job when a revised PR is pushed quickly.
In any case, I'll look into adding it to the AWS janitor to catch these leaks.
This is probably also part of https://github.com/kubernetes/test-infra/issues/7673
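For illustration, here is a minimal sketch of the kind of TTL-based sweep the aws-janitor does: list resources, skip anything younger than a cutoff, and delete the rest. This is not the actual aws-janitor code; EBS volumes, the region, and the 3-hour TTL are placeholder assumptions, and the real janitor would need to target whatever resource type is actually leaking here.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteExpiredVolumes lists unattached EBS volumes (placeholder resource type)
// and deletes any that are older than maxAge.
func deleteExpiredVolumes(svc *ec2.EC2, maxAge time.Duration) error {
	out, err := svc.DescribeVolumes(&ec2.DescribeVolumesInput{
		Filters: []*ec2.Filter{{
			Name:   aws.String("status"),
			Values: []*string{aws.String("available")}, // only unattached volumes
		}},
	})
	if err != nil {
		return err
	}
	cutoff := time.Now().Add(-maxAge)
	for _, v := range out.Volumes {
		if v.CreateTime != nil && v.CreateTime.Before(cutoff) {
			fmt.Printf("deleting leaked volume %s (created %s)\n",
				aws.StringValue(v.VolumeId), v.CreateTime)
			if _, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{
				VolumeId: v.VolumeId,
			}); err != nil {
				log.Printf("failed to delete %s: %v", aws.StringValue(v.VolumeId), err)
			}
		}
	}
	return nil
}

func main() {
	// Region and TTL are assumptions for the sketch, not values from this issue.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	if err := deleteExpiredVolumes(ec2.New(sess), 3*time.Hour); err != nil {
		log.Fatal(err)
	}
}
```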
This is one of our top flakes for the last week: http://velodrome.k8s.io/dashboard/db/bigquery-metrics?orgId=1
/priority failing-test
/priority important-soon
/kind flake
/sig cli
/sig testing
/sig aws
Curious why sig-cli got tagged for this
This doesn't seem to be a sig-cli issue.
/remove sig cli
/remove-sig cli
should be fixed with https://github.com/kubernetes/test-infra/pull/7740