As a Features shadow for 1.11 and Features Lead for 1.12, I'm noticing that there's a fair amount of day-to-day nagging / cat-herding being done that could probably be delegated to automation.
Similar to the milestone functionality, I'd like a means of automatically reminding Feature Owners about outstanding requirements on their issues.
Some of this seems to overlap with the functions of other plugins, so I'd like to get a convo going about what prior art we could reuse to help implement this.
@kubernetes/sig-testing-feature-requests
/help
We have been conservative in automating nagging and cat-herding as historically the cost of the noise it generates has outweighed the (usually small) benefits.
In particular we find people tend to pay substantially less attention to nagging / cat-herding notices when they appear to come from a bot rather than a human being.
On the other hand, the automatic `/retest` behavior seems to be well-received.
If you want to experiment with this sort of thing I would suggest picking a small subset of this task -- such as issues that have no sig -- and following the format of the `periodic-test-infra-retester` mentioned above or `periodic-test-infra-stale`.
Both of these use the commenter to `--query` for issues that match some condition and then leave a `--comment`.
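For the "issues that have no sig" experiment, a periodic job wrapping the commenter might look roughly like this. This is a hedged sketch, not a working config: the job name, image path, binary path, label query, and secret mount are all assumptions modeled loosely on `periodic-test-infra-stale`, and the flag names are from memory.

```yaml
# Hypothetical sketch only -- names, paths, and query are assumptions.
periodics:
- name: periodic-features-nag        # hypothetical job name
  interval: 24h
  spec:
    containers:
    - image: gcr.io/k8s-testimages/commenter:latest   # image path assumed
      command:
      - /commenter                                    # binary path assumed
      args:
      # GitHub search can't wildcard "-label:sig/*", so a placeholder
      # tracking label is assumed here.
      - --query=repo:kubernetes/features is:issue is:open -label:tracked
      - --comment=This feature issue appears to be missing labels; please update it.
      - --token=/etc/token/bot-github-token           # secret mount assumed
      - --ceiling=5
      - --confirm
```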
xref: https://github.com/kubernetes/test-infra/issues/8205 for a discussion I started around bot nagging.
Perhaps instead of an automated nag, we could create a dashboard that gives release team members a curated list of PRs to follow up on. A person would still do the work of contacting developers, but the tooling would help determine where they spend their effort. Are existing search functions lacking? Would you see that as a useful middle ground?
@fejta -- thanks for the pointers. I agree that the bot nagging can be a bit much, but some of it is necessary to make sure Features are properly categorized before being forklifted into the Features Tracking sheet. That is currently a manual process. :'(
To @stevekuznetsov's suggestion (and my personal machinations), I would like to see a dashboard that doesn't require manipulation by human(s) and is instead generated from data scraped from GitHub labels.
Do we have a good example of a dashboard similar to this idea, serving some other function already?
It would be great to have something that acted as a central ingress point for anyone on the Release Team to get a quick overview of Feature / Docs status + SIG ownership + maybe failing tests.
(^^ I was planning on opening an issue on that once I gathered my thoughts on design.)
There exist dashboards that are driven by GitHub data, but I'm not sure they are close enough prior art to be of huge use in building a new one.
Gotcha, @stevekuznetsov.
Thanks everyone for the super quick feedback! :)
/assign
/remove-help
Features automation was also mentioned here by @liggitt:
Can sig-release/sig-pm look into pulling snapshot reports of milestone items from the features repo automatically? A bot that scrapes the milestone items daily/weekly/whenever and snapshots that as a committed md file would stay up to date and show change over time.
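The snapshot idea above could be sketched roughly as follows. This is a hedged sketch under assumptions, not real tooling: the repo path, milestone name, and markdown layout are made up, and a real job would also need authentication and pagination.

```python
# Hypothetical sketch of a milestone-snapshot job; the repo, milestone,
# and markdown layout are assumptions, not the real tooling.
import datetime
import json
import urllib.request


def issues_to_md(search_result: dict) -> str:
    """Format GitHub search-API JSON ({"items": [...]}) as markdown bullets."""
    return "\n".join(
        f"- [#{i['number']}]({i['html_url']}) {i['title']} ({i['state']})"
        for i in search_result["items"]
    )


def snapshot(repo: str, milestone: str) -> str:
    """Fetch milestone issues and render a dated markdown snapshot."""
    # Unauthenticated and unpaginated for brevity; a real job needs both.
    url = (
        "https://api.github.com/search/issues"
        f"?q=repo:{repo}+milestone:{milestone}&per_page=100"
    )
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    today = datetime.date.today().isoformat()
    return f"# {milestone} feature snapshot ({today})\n\n" + issues_to_md(data)


# Example (writes a dated file suitable for committing to the repo):
# with open(f"snapshot-{datetime.date.today()}.md", "w") as f:
#     f.write(snapshot("kubernetes/features", "v1.12"))
```

Committing one such file per run would give the "show change over time" property for free via git history.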
FWIW the PR Dashboard allows any GitHub query so for instance you could search for v1.12 milestone PRs and see the diff between their current state and what they need to merge. It's not super useful today as the PR Dashboard works with the tide merge automation, which has not yet been rolled out to kubernetes/kubernetes. Once that happens, that page can be part of the solution here, perhaps.
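For example, a query along these lines (illustrative only; adjust the repo and milestone as needed) could drive that dashboard view:

```
is:pr is:open repo:kubernetes/kubernetes milestone:v1.12
```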
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.