When using the Prow badge (e.g. https://prow.knative.dev/badge.svg?jobs=pull-knative-build-pipeline-unit-tests) to display the status of a PR job, it should be green when the latest run is successful and red when it is not.
In https://github.com/knative/build-pipeline we are using badges to show the Prow CI status. Unfortunately these are PR (not continuous-integration) workflows, so I could understand a failing PR turning the badge red, but the badge is currently red and that doesn't seem to be the case:

When I look at the Prow status for that job (https://prow.knative.dev/?job=pull-knative-build-pipeline-build-tests), the latest run is green (though there have been some reds):

/area prow/deck
/kind bug
/help
the code is here: https://github.com/kubernetes/test-infra/blob/master/prow/cmd/deck/badge.go
something needs fixing with that computation :^)
There doesn't seem to be any range limit on "Recent" here when deciding what counts as failing; in the meantime a testgrid badge should work (see the testgrid docs).
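To make the suspected behavior concrete: if "recent" is an unbounded window and any failure inside it turns the badge red, you get exactly this symptom. A minimal illustrative reconstruction in Go (the types and logic here are guesses, not the actual badge.go code):

```go
package main

import "fmt"

// jobResult is a hypothetical stand-in for one finished run as Deck sees it.
type jobResult struct {
	Job    string
	Passed bool
}

// badgeColor mimics the suspected buggy aggregation: it scans every
// "recent" result and turns the badge red if any of them failed, with
// no window limit and no preference for the latest run.
func badgeColor(results []jobResult) string {
	for _, r := range results {
		if !r.Passed {
			return "red" // one old failure keeps the badge red
		}
	}
	return "green"
}

func main() {
	// The latest run passed, but an older failure is still in the window.
	history := []jobResult{
		{Job: "pull-knative-build-pipeline-build-tests", Passed: true},  // newest
		{Job: "pull-knative-build-pipeline-build-tests", Passed: false}, // older
	}
	fmt.Println(badgeColor(history)) // "red", even though the latest run is green
}
```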
I can work on this. I have been staring at this code for a while, and I think the issue is that we are not checking whether the most recent run was successful at line 124 of badge.go.
The question is: should we even check the failure count if the most recent run is successful? That is, we would turn the badge red or green based only on the latest run, not on the last 8 runs.
So the badge is supposed to support multiple jobs, which is why some sort of failure-count check makes sense, but we should probably just use the most recent run of each job.
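A rough sketch of that direction, keeping only the most recent run of each requested job (the types, and the assumption that results arrive newest-first, are mine):

```go
package main

import "fmt"

// jobResult is hypothetical; results are assumed to be ordered newest-first,
// which itself may need verifying.
type jobResult struct {
	Job    string
	Passed bool
}

// badgeColor returns "green" only if the most recent run of every
// requested job passed; older failures no longer count.
func badgeColor(results []jobResult) string {
	latest := map[string]bool{}
	for _, r := range results {
		if _, seen := latest[r.Job]; !seen {
			latest[r.Job] = r.Passed // first hit per job is the newest run
		}
	}
	for _, passed := range latest {
		if !passed {
			return "red"
		}
	}
	return "green"
}

func main() {
	history := []jobResult{
		{Job: "unit-tests", Passed: true},  // newest unit-tests run
		{Job: "build-tests", Passed: true}, // newest build-tests run
		{Job: "unit-tests", Passed: false}, // older failure, ignored
	}
	fmt.Println(badgeColor(history)) // "green"
}
```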
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Is this being looked into? Otherwise I may want to take it :)
Go ahead and take it!
Yes take it :)
/lifecycle active
Sorry for the delay, I have been out on vacation. 🎉
I would like to propose an approach here before doing all of the coding.
I am thinking of adding another function to badge.go that builds a small struct holding the latest 8 jobs and their statuses. Based on those statuses the function would produce a boolean output, which I can then plug into the if statement (L124) in the renderBadge func. WDYT?
I would be very happy to have your input, and any suggestions for a better way to do this are welcome.
How do you propose to take the last eight jobs and determine the boolean output? The approach roughly sounds good to me!
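For concreteness, one shape such a helper could take. Everything here (the names, the newest-first ordering, the rule that every job's latest run must pass) is a sketch of the proposal, not a finished design:

```go
package main

import "fmt"

// runStatus is a hypothetical record of one finished run.
type runStatus struct {
	Job    string
	Passed bool
}

// recentRuns would hold up to the last eight runs of the queried jobs,
// newest first, along the lines of the struct proposed above.
type recentRuns struct {
	runs []runStatus
}

// healthy is the boolean that would feed the if statement in renderBadge:
// it looks only at the single newest run per job, so the remaining entries
// are kept for display rather than for the color decision.
func (r recentRuns) healthy() bool {
	seen := map[string]bool{}
	for _, run := range r.runs {
		if seen[run.Job] {
			continue // not the newest run of this job
		}
		seen[run.Job] = true
		if !run.Passed {
			return false
		}
	}
	return true
}

func main() {
	r := recentRuns{runs: []runStatus{
		{Job: "build-tests", Passed: true},  // newest
		{Job: "build-tests", Passed: false}, // older
	}}
	fmt.Println(r.healthy()) // true: only the newest run counts
}
```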
/remove-help
I hit this recently; it doesn't look like the results are ordered at all...?
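If the results really are unordered, any "latest run" check needs an explicit sort first. A sketch with hypothetical fields (the real Deck result type would have to expose some timestamp):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// jobResult is hypothetical; Finished stands in for whatever completion
// or start timestamp the real result type carries.
type jobResult struct {
	Job      string
	Passed   bool
	Finished time.Time
}

// newestFirst sorts results so that "take the first entry per job"
// really does yield the most recent run, regardless of input order.
func newestFirst(results []jobResult) {
	sort.Slice(results, func(i, j int) bool {
		return results[i].Finished.After(results[j].Finished)
	})
}

func main() {
	results := []jobResult{
		{Job: "build-tests", Passed: false, Finished: time.Date(2019, 10, 1, 0, 0, 0, 0, time.UTC)},
		{Job: "build-tests", Passed: true, Finished: time.Date(2019, 10, 2, 0, 0, 0, 0, time.UTC)},
	}
	newestFirst(results)
	fmt.Println(results[0].Passed) // true: the genuinely latest run is first
}
```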
@BenTheElder Is this issue fixed? If yes, I should upgrade to the latest build.
I'm not sure, currently I'm not using badges.
I'm using build version v20190918-7672de02b; it looks like the issue still exists.
/assign
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@nzoueidi ping?
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.