/help
/good-first-issue
/area prow/hook
/area prow
We sometimes see errors like this:
{
  insertId: "1qzmc7eg10yhd2q"
  jsonPayload: {
    author: "ddebroy"
    component: "hook"
    event-GUID: "2d131c80-eba1-11e9-84a3-2526b95c6daa"
    event-type: "pull_request"
    file: "prow/plugins/owners-label/owners-label.go:119"
    func: "k8s.io/test-infra/prow/plugins/owners-label.handle"
    level: "warning"
    msg: "Unable to add nonexistent labels: ["sig/gcp"]"
    org: "kubernetes"
    plugin: "owners-label"
    pr: 83098
    repo: "kubernetes"
    url: "https://github.com/kubernetes/kubernetes/pull/83098"
  }
  labels: {…}
  logName: "projects/k8s-prow/logs/hook"
  receiveTimestamp: "2019-10-10T21:03:19.691649425Z"
  resource: {…}
  severity: "ERROR"
  timestamp: "2019-10-10T21:03:18Z"
}
Given the size of https://github.com/kubernetes/kubernetes/pull/83098, it is hard to figure out why this PR is trying to add a sig/gcp label. This is almost certainly because one or more OWNERS files in the kubernetes/kubernetes repo have a line like this (the log does not identify the file; the snippet below is the standard OWNERS labels stanza that would add it):
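    # illustrative OWNERS snippet; the actual file(s) are not named in the log
    labels:
    - sig/gcp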
It would be very helpful if we updated the error message to include the files that added the label.
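As a rough sketch of the idea (not the actual owners-label implementation; collectLabels, labelsForFile, and formatNonexistent are hypothetical stand-ins), the plugin could record which OWNERS file contributed each label while collecting them, then name those files in the warning:

    package main

    import (
        "fmt"
        "strings"
    )

    // labelSources maps a label to the OWNERS files that requested it.
    type labelSources map[string][]string

    // collectLabels gathers labels for the changed files, remembering which
    // file contributed each label. labelsForFile is a hypothetical lookup.
    func collectLabels(changedFiles []string, labelsForFile func(string) []string) labelSources {
        sources := labelSources{}
        for _, f := range changedFiles {
            for _, l := range labelsForFile(f) {
                sources[l] = append(sources[l], f)
            }
        }
        return sources
    }

    // formatNonexistent builds the improved warning text, naming the files
    // that introduced each label that does not exist in the repo.
    func formatNonexistent(missing []string, sources labelSources) string {
        parts := make([]string, 0, len(missing))
        for _, l := range missing {
            parts = append(parts, fmt.Sprintf("%q (from %s)", l, strings.Join(sources[l], ", ")))
        }
        return "Unable to add nonexistent labels: " + strings.Join(parts, "; ")
    }

    func main() {
        // cluster/gce/OWNERS is a made-up example path for illustration.
        sources := collectLabels(
            []string{"cluster/gce/OWNERS"},
            func(f string) []string { return []string{"sig/gcp"} },
        )
        fmt.Println(formatNonexistent([]string{"sig/gcp"}, sources))
        // Output: Unable to add nonexistent labels: "sig/gcp" (from cluster/gce/OWNERS)
    }

With that provenance, the log line above would point directly at the file(s) to fix instead of leaving the reader to search the whole PR.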
@fejta:
This request has been marked as suitable for new contributors.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind oncall-hotlist
/assign
How are things going? Reach out to #prow on Slack if you want some help!
I will send a PR by tomorrow; I'm busy with university exams.
Good luck :pray: :prayer_beads:
Any progress here?
Suspect this could still use some help!
/remove-kind oncall-hotlist
Hi @fejta, I'll take a look at this today and reach out on Slack if I need help.
/assign
I'm working on writing a failing test for this scenario.
I have a first-pass implementation done on this, but I have yet to stub out the error for test coverage.
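For reference, a failing test for this scenario could look roughly like the sketch below, written against the hypothetical helpers from the sketch above rather than the plugin's real test suite:

    package main

    import (
        "strings"
        "testing"
    )

    // TestWarningNamesSourceFiles fails until the warning message names
    // the OWNERS file(s) that introduced the nonexistent label.
    func TestWarningNamesSourceFiles(t *testing.T) {
        sources := labelSources{"sig/gcp": {"cluster/gce/OWNERS"}}
        got := formatNonexistent([]string{"sig/gcp"}, sources)
        if !strings.Contains(got, "cluster/gce/OWNERS") {
            t.Errorf("warning %q does not name the OWNERS file that added the label", got)
        }
    }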
@RobertKielty are you still working on this issue? I'm interested in helping out as well.
I have some work done on this that I could share as a work in progress.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.