The GCS paths linked to by Prow are not consistent with the actual upload paths used by bootstrap for postsubmit jobs in repos other than k/k. Specifically, it seems that Prow includes the repo name in the path while bootstrap does not. For example:
actual upload path:
https://storage.googleapis.com/kubernetes-jenkins/logs/ci-test-infra-bazel/64888/build-log.txt
expected gubernator link:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-test-infra-bazel/64888
prow provided gubernator link:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/test-infra/ci-test-infra-bazel/64888/
This appears to affect all postsubmit jobs except for those in k/k.
https://prow.k8s.io/?type=postsubmit
cc @BenTheElder @rmmh @krzyzacy
/assign
/area prow
/area bootstrap
/kind bug
IIRC the rules are set up this way for Kubernetes repositories:
- kubernetes/kubernetes: use build/<job_name>/<job_number>
- other repos in the kubernetes org: use build/<repo_name>/<job_name>/<job_number>
- repos outside the kubernetes org: use build/<org_name>_<repo_name>/<job_name>/<job_number>

So the link Prow provides in your example is correct and the "expected" link is not.
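The three layout rules above can be sketched as a small helper. This is illustrative only, not the actual bootstrap implementation; the function and parameter names are assumptions:

```python
def gcs_build_path(org, repo, job_name, build_number):
    """Sketch of the documented GCS layout rules (hypothetical helper):
      kubernetes/kubernetes        -> build/<job_name>/<job_number>
      other repos in kubernetes org -> build/<repo_name>/<job_name>/<job_number>
      repos outside kubernetes org  -> build/<org_name>_<repo_name>/<job_name>/<job_number>
    """
    if org == "kubernetes" and repo == "kubernetes":
        prefix = ""
    elif org == "kubernetes":
        prefix = repo + "/"
    else:
        prefix = "%s_%s/" % (org, repo)
    return "build/%s%s/%s" % (prefix, job_name, build_number)
```

Under these rules, ci-test-infra-bazel in kubernetes/test-infra lands under build/test-infra/ci-test-infra-bazel/<number>, which matches the link Prow generates.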
Also, if I may ... there are a large number of other places where it is assumed that job names are globally unique, so it's totally unclear to me why we even have this structure; it adds basically useless namespacing.
We have this structure because we have this structure :grimacing: /s
Postsubmits should upload to a path with the repo name in it for now, to be consistent; I'm not sure why they aren't right now.
We have this structure because (at the time, and still?) jobs didn't embed their repo names in started or finished.json, so we couldn't link to PRs or commits properly.
Yeah at this point we should move to publishing $JOB_SPEC JSON to the GCS bucket so we have the information we need.
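As a rough sketch of what publishing the job spec to GCS could look like: serialize the spec next to started.json/finished.json so consumers like Gubernator can recover the org/repo. The field names and the job-spec.json object name here are assumptions, not the actual $JOB_SPEC schema:

```python
import json

def job_spec_object(spec, base_path):
    """Return the (object_name, body) pair for uploading a job-spec dict
    (hypothetical shape) alongside started.json/finished.json in GCS."""
    # sort_keys makes the serialized blob deterministic across runs.
    body = json.dumps(spec, sort_keys=True)
    return base_path.rstrip("/") + "/job-spec.json", body
```

With the repo recorded in the spec, tooling would no longer need to infer it from the GCS path layout.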
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
bootstrap.py dumps the podspec now, which has occasionally been very useful:
https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/64560/pull-kubernetes-bazel-build/45263/artifacts/prow_podspec.yaml
^ That would be even more useful for the pod utilities to do.
/lifecycle stale
/remove-lifecycle stale
This may be addressed by #9211 but only for decorated jobs :|
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.