The following OWNERS files referenced in sigs.yaml are returning 404 errors:
sig-apps
https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/core/v1/OWNERS
sig-auth
https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/certificates/approver/OWNERS
https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/imagepolicy/OWNERS
sig-autoscaling
https://raw.githubusercontent.com/kubernetes/client-go/master/scale/OWNERS
sig-storage
https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/csi-api/OWNERS
sig-api-machinery (fix pending in pull request #4124)
https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/quota/OWNERS
I wasn't sure which OWNERS files should replace most of these, so if someone could track them down and update sigs.yaml, that would be great. For the sig-api-machinery quota file, I think I found the correct OWNERS file and opened pull request #4124 to update just that one entry in sigs.yaml.
/sig apps
/sig storage
/sig autoscaling
/sig auth
@geekygirldawn: The label(s) sig/, sig/, sig/ cannot be applied. These labels are supported: api-review, community/discussion, community/maintenance, community/question, cuj/build-train-deploy, cuj/multi-user, platform/aws, platform/azure, platform/gcp, platform/minikube, platform/other
In response to this:
/sig apps
/sig storage
/sig autoscaling
/sig auth
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/area community-management
Thanks for auditing these, @geekygirldawn :)
if someone could track them down and update sigs.yaml, that would be great.
/help
For anyone who takes it up:
Most of the work involves tracking the OWNERS files and following up with SIGs. One suggestion for tracking these OWNERS files is to do a git log -- <file-name> to get the list of commits that touched the file. The latest commit would have removed the file and would contain more context about the change.
If you have any questions, please feel free to ask in #sig-contribex on the k8s slack. :rainbow:
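A minimal sketch of that approach, assuming a local kubernetes/kubernetes checkout and using one of the deleted paths listed above as an example:
# List the commits that touched the (now deleted) OWNERS file; the most
# recent one is the commit that removed it and explains why.
git log --oneline -- pkg/quota/OWNERS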
@nikhita:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
Thanks for auditing these, @geekygirldawn :)
if someone could track them down and update sigs.yaml, that would be great.
/help
For anyone who takes it up:
Most of the work involves tracking the OWNERS files and following up with SIGs. One suggestion for tracking these OWNERS files is to do a git log -- <file-name> to get the list of commits that touched the file. The latest commit would have removed the file and would contain more context about the change.
If you have any questions, please feel free to ask in #sig-contribex on the k8s slack. :rainbow:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks @geekygirldawn -- longer term I think this is something we should look into automating a bit. Walking sigs.yaml and checking for valid OWNERS files shouldn't be too bad. Maybe schedule a report to run once every release? If any are out of date, create a follow-up issue after the release.
Agreed! I was doing a little bit of analysis of OWNERS files, and one of my scripts returned errors on these files, so it should be easy enough to automate :)
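A rough sketch of what such a check could look like, assuming the OWNERS URLs appear verbatim in sigs.yaml (the real tooling and file layout may differ):
#!/usr/bin/env bash
# Hypothetical check: pull every raw.githubusercontent.com OWNERS URL out of
# sigs.yaml and report any that return HTTP 404.
grep -oE 'https://raw\.githubusercontent\.com/[^ "]+OWNERS' sigs.yaml | sort -u |
while read -r url; do
  status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if [ "$status" = "404" ]; then
    echo "404: $url"
  fi
done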
After a bit more digging, I think https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/csi-api/OWNERS was deleted as part of commit d2aa8178f2450b75f75acc2ed8f0a09119a6d9d3 (Thu Mar 21 13:19:14 2019 -0700, "Remove alpha CRD install").
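For reference, the removal can be double-checked from a kubernetes/kubernetes checkout with something like:
# Show the commit message and the list of files the commit touched; the
# csi-api OWNERS file should appear among the deletions.
git show --stat d2aa8178f2450b75f75acc2ed8f0a09119a6d9d3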
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
It doesn't solve the problem, but I do plan to at least audit the ones in place this cycle.
Looping in steering for evaluation with the sig/wg checkins.
/committee steering
/assign mrbobbytables
/milestone v1.20
https://github.com/kubernetes/community/issues/5425 would be a good opportunity to catch these.