What happened:
We attempted to /assign someone to a PR that already had 10 assignees. GitHub has a limit of 10 assignees per issue/PR.
Prow's message suggested the target assignee wasn't a member or collaborator.
What you expected to happen:
Prow should indicate that it can't assign any more users to an issue that already has 10 assignees. It could either check this up front, or catch the resulting error and propagate it back. This would let the human decide whether someone else should first be /unassign'ed.
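A minimal sketch of the up-front check suggested here, assuming (as this report does) a hard cap of 10 assignees per issue. The `checkAssignable` helper and the constant are illustrative assumptions, not the actual Prow plugin code:

```go
package main

import "fmt"

// maxAssignees is the per-issue cap this report assumes GitHub enforces.
const maxAssignees = 10

// checkAssignable reports whether adding `requested` new assignees would
// exceed the cap, given the issue's current assignee count. A plugin
// could run this before calling the GitHub API and surface the error
// to the commenter instead of a misleading membership message.
func checkAssignable(current, requested int) error {
	if current+requested > maxAssignees {
		return fmt.Errorf("cannot assign %d more user(s): issue already has %d of %d allowed assignees",
			requested, current, maxAssignees)
	}
	return nil
}

func main() {
	fmt.Println(checkAssignable(10, 1)) // already full: returns an error
	fmt.Println(checkAssignable(3, 2))  // fine: prints <nil>
}
```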
How to reproduce it (as minimally and precisely as possible):
Assign 10 people to an issue, then try to assign one more person (who would normally be assignable).
Please provide links to example occurrences, if any:
https://github.com/kubernetes/website/pull/11060#issuecomment-445386231
Anything else we need to know?:
This is hilarious
/kind bug
/area prow
/assign @spxtr @spiffxp @cjwagner @fejta @krzyzacy @BenTheElder @ixdy @k8s-ci-robot @stevekuznetsov @mithrav @michelle192837
@ixdy: GitHub didn't allow me to assign the following users: stevekuznetsov.
Note that only kubernetes members and repo collaborators can be assigned.
For more information please see the contributor guide
In response to this:
/assign @spxtr @spiffxp @cjwagner @fejta @krzyzacy @BenTheElder @ixdy @k8s-ci-robot @stevekuznetsov @mithrav @michelle192837
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/unassign @spxtr @spiffxp @cjwagner @fejta @krzyzacy @BenTheElder @ixdy @k8s-ci-robot @stevekuznetsov @mithrav @michelle192837
Can issues only have 10 assignees? Or can we only add 10 more at a time?
Looks like I lucked out...
That error message is bad in general: it doesn't accurately reflect all of the different checks we perform before assignment, or the requirements GitHub imposes for allowing someone to be assigned.
Can issues only have 10 assignees? Or can we only add 10 more at a time?
The GitHub docs say:
Adds up to 10 assignees to an issue. Users already assigned to an issue are not replaced.
So, I think we can _add_ only 10 at a time, but an issue can have more than 10 assignees.
/assign
I'm leading a pair programming table at the diversity lunch today at KubeCon and I'll help someone work on this (and some other tasks) + give them a small codebase tour of other prow plugins. Maybe this could even be a first contribution for someone :smile:
If a PR does not end up stemming from the pair programming session, I'll follow up myself.
Hello all, I made a commit to fix that issue. But weirdly, the following test passes:
{
	name:      "assign >10 users",
	body:      "/assign @user1 @user2 @user3 @user4 @user5 @user6 @user7 @user8 @user9 @user10 @user11 @user12 @user13 @user14 @user15 @user16 @user17 @user18 @user19",
	commenter: "rando",
	commented: false,
	assigned:  []string{"user1", "user2", "user3", "user4", "user5", "user6", "user7", "user8", "user9", "user10", "user11", "user12", "user13", "user14", "user15", "user16", "user17", "user18", "user19"},
},
@kanishkarj Thanks for following up on our conversation about contributing! :)
I think you'll also need to update the AssignIssue method for the fake client to make sure that it returns an error with >10 assignees, just like GitHub does.
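The fake-client change suggested here might look like the following. This is a sketch only: the real `FakeClient` in kubernetes/test-infra has different fields and a different `AssignIssue` signature, so the struct and method below are illustrative assumptions:

```go
package main

import "fmt"

// FakeClient is a stand-in for the test fake; it records assignees added.
type FakeClient struct {
	AssigneesAdded []string
}

// AssignIssue mimics the real GitHub behavior described in this thread:
// requests adding more than 10 assignees at once are rejected with an error.
func (f *FakeClient) AssignIssue(org, repo string, number int, logins []string) error {
	if len(logins) > 10 {
		return fmt.Errorf("cannot assign more than 10 users at once (got %d)", len(logins))
	}
	f.AssigneesAdded = append(f.AssigneesAdded, logins...)
	return nil
}

func main() {
	f := &FakeClient{}
	err := f.AssignIssue("org", "repo", 1, make([]string, 19))
	fmt.Println(err != nil) // true: the fake now errors like GitHub
}
```

With a fake that enforces the limit, the "assign >10 users" test above would fail until the plugin batches or rejects oversized requests.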
awesome, shall look into it.
I have updated the tests; please go through them.
Hello?
@kanishkarj your PR merged about a month ago, what were you looking for?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Fixed in https://github.com/kubernetes/test-infra/pull/10554
/close
@nikhita: Closing this issue.
In response to this:
Fixed in https://github.com/kubernetes/test-infra/pull/10554
/close