ref: https://github.com/cncf/devstats/issues/158
I ran a script to generate the SIG-based grouping last time, but of course like a genius I don't think I actually checked it in. Creating the issue here so it can be /assign'ed and maybe handled by one of the other devstats subproject owners. I'm not sure when I'll be able to get to this.
/area devstats
/sig contributor-experience
/priority important-longterm
/assign @jberkus @Phillels @spiffxp
subproject owners I can assign to
Wrote a very-quick-and-hacky script to generate the list from sigs.yaml and created a PR against the devstats repo: https://github.com/cncf/devstats/pull/167.
If the folks assigned here could review the PR, that'd be great. :)
What's your script? Mine was also hacky and I was embarrassed to share, and that's what led me to lose it to the sands of time (I think; I'm gonna go double-check now)
We should at least put it someplace to have some prior art to source from if we are ever so inspired as to run it regularly
> What's your script? Mine was also hacky and I was embarrassed to share
Lol, mine was too so I didn't share it right away. I'll clean it up a bit and post it here for posterity though.
It mainly uses a yaml.Unmarshal on sigs.yaml and then trims the GitHub links for OWNERS files to get the list of repos.
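Roughly, that approach looks like the sketch below. This is a minimal illustration only, not the actual script: the struct layout assumes sigs.yaml lists per-subproject OWNERS file URLs under each SIG, and the type and field names here are made up for the example.

// Illustrative sketch: derive a repo -> SIG mapping from the OWNERS links in sigs.yaml.
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"strings"

	"gopkg.in/yaml.v2"
)

// Only the fields we need; the real sigs.yaml has many more.
type sigsFile struct {
	Sigs []struct {
		Name        string `yaml:"name"`
		Subprojects []struct {
			Owners []string `yaml:"owners"`
		} `yaml:"subprojects"`
	} `yaml:"sigs"`
}

func main() {
	data, err := ioutil.ReadFile("sigs.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var sf sigsFile
	if err := yaml.Unmarshal(data, &sf); err != nil {
		log.Fatal(err)
	}
	for _, sig := range sf.Sigs {
		seen := map[string]bool{}
		for _, sp := range sig.Subprojects {
			for _, ownersURL := range sp.Owners {
				// e.g. https://raw.githubusercontent.com/kubernetes/kubectl/master/OWNERS
				trimmed := strings.TrimPrefix(ownersURL, "https://raw.githubusercontent.com/")
				parts := strings.SplitN(trimmed, "/", 3)
				if len(parts) < 2 {
					continue
				}
				repo := parts[0] + "/" + parts[1]
				if !seen[repo] {
					seen[repo] = true
					fmt.Printf("%s,%s\n", repo, sig.Name)
				}
			}
		}
	}
}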
> We should at least put it someplace to have some prior art to source from if we are ever so inspired as to run it regularly
I'll follow up on this. :+1:
/kind bug
/assign
/lifecycle active
/milestone May
I've updated https://github.com/cncf/devstats/pull/167. But I've used the same hacky script (just updated sorting + removed staging repos), so I haven't added the script yet :see_no_evil:
To close this out, I would recommend:
/milestone Next
I'll work to update the CNCF PR (or open a new PR) with the script that landed in #3778
add support for generating this file to the existing generator code
This part hasn't been done yet. I am personally fine with the existing python script unless there is a strong need to move away from it.
What I feel is necessary to close this out: how to handle this periodically.
We discussed in https://github.com/cncf/devstats/pull/192 how often we want to refresh these repos, and settled on quarterly. That's infrequent enough that a human is likely to forget without being reminded, but also infrequent enough that dumping time into automating it probably isn't worthwhile. Suggestions on how to handle this? A periodic entry on a shared contribex calendar or something?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I had started working on this long ago, but don't have bandwidth to clean up the script and make it almost automate-able. However, I'm happy to work with someone to drive this to completion.
/help
This isn't really a bug anymore since the repos were updated in devstats.
/kind feature
/remove-kind bug
@nikhita:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
In response to this:
> I had started working on this long ago, but don't have bandwidth to clean up the script and make it almost automate-able. However, I'm happy to work with someone to drive this to completion.
> /help
> This isn't really a bug anymore since the repos were updated in devstats.
> /kind feature
> /remove-kind bug
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@nikhita, one question: does it make sense to remove the current list of assignees?
Also, I would be willing to take this up if you and I could pair, with me doing the actual work.
Yay! :)
And, yes I think it makes sense to update the assignees. I'll keep myself assigned so I can keep track of this and help out.
/unassign @spiffxp @jberkus @Phillels
/assign @markyjackson-taulia
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I know it's been a bit, but just wanted to check back in. Do you think there's enough bandwidth to tackle this in the 1.18 cycle?
@mrbobbytables I think this should be targeted for v1.19. Thoughts @nikhita?
I still plan to work with @nikhita on this
I would really like to help with this, but I just don't know where those new repos should be assigned.
@lukaszgryglicki we can pair if you would like
All I need is a list of all new repos and what repo group they should be assigned to. I'll do the rest. So for example:
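For illustration only, with hypothetical repo names (the group names are real repo groups from this project, but these particular assignments are placeholders), a mapping in a simple repo,repo_group CSV form could look like:

kubernetes-sigs/some-new-repo,SIG Apps
kubernetes-sigs/another-new-repo,SIG Storage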
Once I have that list, I'll update config for DevStats and regenerate data.
Here is the list of repositories that are falling back to the "Other" repository group.
other.txt should be saved as other.csv
/unassign @markyjackson-taulia
/assign @lukaszgryglicki
Great, but I cannot proceed on this until the community actually defines repository groups for those repos. Once I have that, I'll update devstats.
/assign
I can take care of creating those mappings, we should add a task somewhere to update them quarterly...I think they just keep falling off the radar.
We used to have devstats meetings and working meetings. The cadence was too much at the time, but it might be good to kick that up again on a quarterly basis; even with no agenda, we could do this as an audit working together.
Right now I'm thinking of just making it a task as part of quarterly planning. @lukaszgryglicki is going to add a dashboard of repo -> repo group mappings. If any appear in there under the 'other' category that we aren't already familiar with, create an issue to update the repo mappings.
New dashboard showing repository group to repository assignments has been implemented: https://github.com/cncf/devstats/issues/235#issuecomment-585110324
@lukaszgryglicki I've created a sheet with the mappings here: https://docs.google.com/spreadsheets/d/1vpM7CkLH0PZsXm_lkFA5iUOCecGkX1HxxK8Zz5Hkgt4/edit#gid=0
Some don't really have a direct owner...they pre-date sigs and I couldn't track down a group who "should" be the owner now.
FYI - a few other little things on that sheet: sig-usability should be added as a repo group. They own 1 repo (kubernetes-sigs/SIG-usability).
Will update DevStats tomorrow.
I'm working on this. I'm __adding__ repo - repo group assignments from this document; we already have a lot of assignments defined, so I'm only checking that there are no collisions with the document, and if there are any I'm using the document as the higher-priority source. Current definitions are here.
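A minimal sketch of that merge rule, assuming both sources boil down to repo -> repo group maps (function names and the sample repo below are illustrative, not the actual devstats code):

// Illustrative sketch: merge existing assignments with the sheet,
// treating the sheet as the higher-priority source and reporting collisions.
package main

import "fmt"

func mergeAssignments(existing, sheet map[string]string) map[string]string {
	merged := make(map[string]string, len(existing)+len(sheet))
	for repo, group := range existing {
		merged[repo] = group
	}
	for repo, group := range sheet {
		if old, ok := merged[repo]; ok && old != group {
			fmt.Printf("collision for %s: %q (existing) vs %q (sheet); keeping sheet value\n", repo, old, group)
		}
		merged[repo] = group
	}
	return merged
}

func main() {
	// Hypothetical values, for illustration only.
	existing := map[string]string{"example-org/example-repo": "Other"}
	sheet := map[string]string{"example-org/example-repo": "SIG Apps"}
	fmt.Println(mergeAssignments(existing, sheet))
}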
After updating the repo group definitions, we now have 23 repos falling back to "Other":
repo_group | number_of_repos
----------------------------+-----------------
SIG Storage | 41
SIG API Machinery | 39
SIG Cloud Provider | 28
SIG Cluster Lifecycle | 27
Other | 23
SIG Node | 18
SIG Docs | 14
SIG Contributor Experience | 13
SIG Network | 11
SIG Scheduling | 11
SIG Instrumentation | 10
SIG Apps | 8
SIG UI | 7
SIG Autoscaling | 6
SIG CLI | 6
SIG Release | 6
Steering Committee | 6
SIG Multicluster | 5
SIG Testing | 5
Kubernetes | 3
SIG Service Catalog | 3
SIG Windows | 3
SIG Architecture | 2
SIG PM | 2
Product Security Committee | 1
SIG Auth | 1
SIG Scalability | 1
SIG Usability | 1
(28 rows)
Here are those 23 "Other" repos:
kubernetes/application-images
kubernetes/common
kubernetes/demos-and-tutorials
kubernetes-graveyard/kube-mesos-framework
kubernetes-incubator/application-images
kubernetes-incubator/auger
kubernetes-incubator/kube2consul
kubernetes-incubator/kube-mesos-framework
kubernetes-incubator-retired/kubedash
kubernetes/kube2consul
kubernetes/kubedash
kubernetes-retired/application-images
kubernetes-retired/community
kubernetes-retired/contrib
kubernetes-retired/kubedash
kubernetes-retired/kube-mesos-framework
kubernetes-sigs/auger
kubernetes-sigs/compute-persistent-disk-csi-driver-
kubernetes-sigs/foo
kubernetes-sigs/gcp-filestore-csi-driver
kubernetes-sigs/go-open-service-broker-client
kubernetes-sigs/vsphere-csi-driver
kubernetes-sig-testing/frameworks
Populating on the test server. If there are no problems, I'll populate this on prod tomorrow (this takes a few hours to complete, but the test server is available during the process; only some dashboards might be unavailable while a given dashboard is regenerating).
A few small additional repo mappings from looking over the "Other" list:
EDIT: I updated the sheet and highlighted the different ones in green.
Finished, see https://k8s.devstats.cncf.io.
Awesome, thanks!
The next step for us on the contribex side is documenting somewhere the responsibility for updating the list.
I've updated our community maintenance doc in #4544 with a note about reviewing the dashboard and making sure the repos are part of a repo group. :+1:
You can now see the repository / repository group configuration using the new DevStats REST API (described here).
Example call to get that configuration for kubernetes, filtered to show only repositories falling back to the Other repository group:
curl -H "Content-Type: application/json" https://devstats.cncf.io/api/v1 -d'{"api":"Repos","payload":{"project":"kubernetes", "repository_group":["Other"]}}' 2>/dev/null | jq
{
"project": "kubernetes",
"db_name": "gha",
"repo_groups": [
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other",
"Other"
],
"repos": [
"kubernetes/application-images",
"kubernetes/common",
"kubernetes-csi/external-health-monitor",
"kubernetes/demos-and-tutorials",
"kubernetes-graveyard/kube-mesos-framework",
"kubernetes-incubator/application-images",
"kubernetes-incubator/auger",
"kubernetes-incubator/kube2consul",
"kubernetes-incubator/kube-mesos-framework",
"kubernetes-incubator-retired/kubedash",
"kubernetes/kube2consul",
"kubernetes/kubedash",
"kubernetes-retired/application-images",
"kubernetes-retired/community",
"kubernetes-retired/contrib",
"kubernetes-retired/kubedash",
"kubernetes-retired/kube-mesos-framework",
"kubernetes-sigs/auger",
"kubernetes-sigs/foo",
"kubernetes-sigs/go-open-service-broker-client",
"kubernetes-sigs/secrets-store-csi-driver",
"kubernetes-sig-testing/frameworks"
]
}
To get the config for all repositories, provide the single value ["All"]:
curl -H "Content-Type: application/json" https://devstats.cncf.io/api/v1 -d'{"api":"Repos","payload":{"project":"kubernetes", "repository_group":["All"]}}' 2>/dev/null | jq
Other options:
curl -H "Content-Type: application/json" https://devstats.cncf.io/api/v1 -d'{"api":"Repos","payload":{"project":"kubernetes", "repository_group":["SIG Apps", "SIG Storage", "Kubernetes"]}}' 2>/dev/null | jq
{
"project": "kubernetes",
"db_name": "gha",
"repo_groups": [
"Kubernetes",
"Kubernetes",
"SIG Apps",
"SIG Apps",
"SIG Apps",
"SIG Apps",
"SIG Apps",
"SIG Apps",
"SIG Apps",
"SIG Apps",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage",
"SIG Storage"
],
"repos": [
"GoogleCloudPlatform/kubernetes",
"kubernetes/kubernetes",
"kubernetes/application-dm-templates",
"kubernetes/examples",
"kubernetes-incubator/kompose",
"kubernetes/kompose",
"kubernetes-sigs/app",
"kubernetes-sigs/application",
"kubernetes-sigs/apps_application",
"kubernetes-sigs/execution-hook",
"kubernetes-csi/cluster-driver-registrar",
"kubernetes-csi/csi-driver-cinder",
"kubernetes-csi/csi-driver-fibre-channel",
"kubernetes-csi/csi-driver-flex",
"kubernetes-csi/csi-driver-host-path",
"kubernetes-csi/csi-driver-image-populator",
"kubernetes-csi/csi-driver-iscsi",
"kubernetes-csi/csi-driver-nfs",
"kubernetes-csi/csi-flex-provisioner",
"kubernetes-csi/csi-lib-common",
"kubernetes-csi/csi-lib-fc",
"kubernetes-csi/csi-lib-iscsi",
"kubernetes-csi/csi-lib-utils",
"kubernetes-csi/csi-proxy",
"kubernetes-csi/csi-release-tools",
"kubernetes-csi/csi-test",
"kubernetes-csi/docs",
"kubernetes-csi/driver-registrar",
"kubernetes-csi/drivers",
"kubernetes-csi/external-attacher",
"kubernetes-csi/external-attacher-csi",
"kubernetes-csi/external-provisioner",
"kubernetes-csi/external-provisioner-csi",
"kubernetes-csi/external-resizer",
"kubernetes-csi/external-snapshotter",
"kubernetes-csi/flex-provisioner",
"kubernetes-csi/kubernetes-csi.github.io",
"kubernetes-csi/kubernetes-csi-migration-library",
"kubernetes-csi/livenessprobe",
"kubernetes-csi/node-driver-registrar",
"kubernetes-csi/resources",
"kubernetes/csi-translation-lib",
"kubernetes/git-sync",
"kubernetes-incubator/external-storage",
"kubernetes-incubator/nfs-provisioner",
"kubernetes-retired/nfs-provisioner",
"kubernetes-sigs/sig-storage-lib-external-provisioner",
"kubernetes-sigs/sig-storage-local-static-provisioner"
]
}
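For anyone who would rather call this from code than curl, here is a minimal Go sketch that assumes only the request and response shapes shown in the examples above (no official client library is implied):

// Illustrative sketch: query the DevStats Repos API for repositories in the "Other" group.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Request and response shapes as shown in the curl examples above.
type reposRequest struct {
	API     string       `json:"api"`
	Payload reposPayload `json:"payload"`
}

type reposPayload struct {
	Project         string   `json:"project"`
	RepositoryGroup []string `json:"repository_group"`
}

type reposResponse struct {
	Project    string   `json:"project"`
	DBName     string   `json:"db_name"`
	RepoGroups []string `json:"repo_groups"`
	Repos      []string `json:"repos"`
}

func main() {
	req := reposRequest{
		API:     "Repos",
		Payload: reposPayload{Project: "kubernetes", RepositoryGroup: []string{"Other"}},
	}
	body, err := json.Marshal(req)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Post("https://devstats.cncf.io/api/v1", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out reposResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	// repos and repo_groups are parallel arrays in the response.
	for i, repo := range out.Repos {
		group := ""
		if i < len(out.RepoGroups) {
			group = out.RepoGroups[i]
		}
		fmt.Printf("%s -> %s\n", repo, group)
	}
}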