Community: Clean up github service accounts

Created on 27 Jun 2018  ·  51 Comments  ·  Source: kubernetes/community

I'd like to clean up the following GitHub service accounts:

  • cadvisorJenkinsBot
  • k8s-bot
  • k8s-cherrypick-bot
  • k8s-external-contributor
  • k8s-mirror-api-machinery-api-reviews
  • k8s-mirror-api-machinery-bugs
  • k8s-mirror-api-machinery-feature-rqusts
  • k8s-mirror-api-machinery-misc
  • k8s-mirror-api-machinery-pr-reviews
  • k8s-mirror-api-machinery-proposals
  • k8s-mirror-api-machinery-test-failures
  • k8s-mirror-api-reviews
  • k8s-mirror-architecture-api-reviews
  • k8s-mirror-architecture-bugs
  • k8s-mirror-architecture-feature-request
  • k8s-mirror-architecture-misc
  • k8s-mirror-architecture-pr-reviews
  • k8s-mirror-architecture-proposals
  • k8s-mirror-architecture-test-failures
  • k8s-mirror-auth-api-reviews
  • k8s-mirror-auth-bugs
  • k8s-mirror-auth-feature-requests
  • k8s-mirror-auth-misc
  • k8s-mirror-auth-pr-reviews
  • k8s-mirror-auth-proposals
  • k8s-mirror-auth-test-failures
  • k8s-mirror-azure-api-reviews
  • k8s-mirror-azure-bugs
  • k8s-mirror-azure-feature-requests
  • k8s-mirror-azure-misc
  • k8s-mirror-azure-pr-reviews
  • k8s-mirror-azure-proposals
  • k8s-mirror-azure-test-failures
  • k8s-mirror-cli-api-reviews
  • k8s-mirror-cli-bugs
  • k8s-mirror-cli-feature-requests
  • k8s-mirror-cli-misc
  • k8s-mirror-cli-pr-reviews
  • k8s-mirror-cli-proposals
  • k8s-mirror-cli-test-failures
  • k8s-mirror-cluster-lifecycle-api-review
  • k8s-mirror-cluster-lifecycle-bugs
  • k8s-mirror-cluster-lifecycle-feature-re
  • k8s-mirror-cluster-lifecycle-misc
  • k8s-mirror-cluster-lifecycle-pr-reviews
  • k8s-mirror-cluster-lifecycle-proposals
  • k8s-mirror-cluster-lifecycle-test-failu
  • k8s-mirror-gcp-api-reviews
  • k8s-mirror-gcp-bugs
  • k8s-mirror-gcp-feature-requests
  • k8s-mirror-gcp-misc
  • k8s-mirror-gcp-pr-reviews
  • k8s-mirror-gcp-proposals
  • k8s-mirror-gcp-test-failures
  • k8s-mirror-ibmcloud-misc
  • k8s-mirror-release-api-reviews
  • k8s-mirror-release-bugs
  • k8s-mirror-release-feature-requests
  • k8s-mirror-release-misc
  • k8s-mirror-release-pr-reviews
  • k8s-mirror-release-proposals
  • k8s-mirror-release-test-failures
  • k8s-mirror-scalability-api-reviews
  • k8s-mirror-scalability-bugs
  • k8s-mirror-scalability-feature-requests
  • k8s-mirror-scalability-misc
  • k8s-mirror-scalability-pr-reviews
  • k8s-mirror-scalability-proposals
  • k8s-mirror-scalability-test-failures
  • k8s-mirror-storage-bugs
  • k8s-mirror-storage-feature-requests
  • k8s-mirror-storage-misc
  • k8s-mirror-storage-pr-reviews
  • k8s-mirror-storage-proposals
  • k8s-mirror-storage-test-failures
  • k8s-mirror-testing-api-reviews
  • k8s-mirror-testing-bugs
  • k8s-mirror-testing-feature-requests
  • k8s-mirror-testing-misc
  • k8s-mirror-testing-pr-reviews
  • k8s-mirror-testing-proposals
  • k8s-mirror-testing-test-failures
  • k8s-mirror-vmware-bugs
  • k8s-mirror-vmware-misc
  • k8s-mirror-vmware-proposals
  • k8s-mirror-vmware-test-failures
  • k8s-mirror-wg-iot-edge
  • k8s-oncall
  • k8s-publish-robot
  • k8s-reviewable
  • k8s-sig-onprem-api-reviews
  • k8s-sig-onprem-bugs
  • k8s-sig-onprem-feature-requests
  • k8s-sig-onprem-misc
  • k8s-sig-onprem-pr-reviews
  • k8s-sig-onprem-proposals
  • k8s-sig-onprem-test-failures
  • k8s-sig-openstack-api-reviews
  • k8s-sig-openstack-bugs
  • k8s-sig-openstack-feature-requests
  • k8s-sig-openstack-pr-reviews
  • k8s-sig-openstack-proposals
  • k8s-sig-openstack-test-failures
  • k8s-slack

Does anyone have any objections to this?

cc: @kubernetes/sig-testing-misc @kubernetes/sig-contributor-experience-misc-use-only-as-a-last-resort

area/github-management area/provider/aws kind/cleanup lifecycle/rotten priority/important-longterm sig/api-machinery sig/architecture sig/autoscaling sig/cli sig/contributor-experience sig/multicluster sig/testing

All 51 comments

also cc: @calebamiles @idvoretskyi @grodrigues3

@bgrant0607

@cblecker no objections from me.

I'd suggest maybe holding onto the *bot accounts; we can only pack so much
automation onto each one, and I know @fejta at least is building new tools to
automate GitHub toil.

Azure mirrors (excluding k8s-mirror-azure-misc) have been destroyed as part of #2313: https://github.com/kubernetes/community/pull/2313#issuecomment-400189640

On the bots topic, I know we need to keep:

  • @k8s-ci-robot
  • @k8s-merge-robot (Kubernetes Submit Queue)

I'm proposing that we kick the following bots out of the org:

  • @cadvisorJenkinsBot
  • @k8s-bot (Kubernetes Bot)
  • @k8s-cherrypick-bot
  • @k8s-publish-robot
  • @k8s-slack

If we need more tokens for future automation, we can either re-add these accounts, or make new ones. I don't think we should have privileged tokens just hanging around if we aren't actively using them.

I'm also going to put a time box on this: leaving it open for comment until July 11 at 5pm PDT, as I know a lot of people are OoO for the July 4th holiday.
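For readers wondering what the removal itself involves: kicking an account out of the org maps onto GitHub's documented "remove an organization member" REST endpoint. A minimal sketch of that kind of call, assuming an org-admin token in a GITHUB_TOKEN environment variable (this is not the script that was actually run, and the account list is illustrative):

```python
# Hedged sketch: remove service accounts from the kubernetes org via the
# GitHub REST API. Requires an org-admin token; the account list is illustrative.
import os

import requests

ORG = "kubernetes"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",  # assumed env var
    "Accept": "application/vnd.github+json",
}

bots_to_remove = [
    "cadvisorJenkinsBot", "k8s-bot", "k8s-cherrypick-bot",
    "k8s-publish-robot", "k8s-slack",
]

for user in bots_to_remove:
    # DELETE /orgs/{org}/members/{username} removes the user from the org;
    # it does not delete the account itself or free the username.
    resp = requests.delete(
        f"https://api.github.com/orgs/{ORG}/members/{user}", headers=HEADERS
    )
    print(user, resp.status_code)  # 204 on success, 404 if not a member
```

Note that this call only drops org membership and team access; it doesn't delete the account or release the name, which is why the usernames stay reserved.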

Can't we delete the tokens [1] / de-privilege the accounts while still
reserving the usernames / accounts (though perhaps not cadvisorJenkinsBot)?

I'm also not sure what account the publisher robot uses; k8s-publish-robot sounds like it.

[1] https://blog.github.com/2013-05-16-personal-api-tokens/

Kicking the bots out of the org doesn't delete the accounts or release the names. We could re-add the accounts to the org at any time. It just means they don't have any privileges.

The staging repo publishing bot is @k8s-publishing-bot (don't want to kick it out)

Ah, thanks. I misunderstood "clean up the following GitHub service
accounts" as 'delete the following accounts'. 👍

SGTM

@k8s-publishing-bot is the publishing bot. @k8s-publish-robot, though, can be removed; it's old.

This is a follow-up to https://github.com/kubernetes/community/pull/2302. Is there an umbrella issue to catch cleaning up detritus related to that? For example: I think devstats needs a touchup, I personally would like to nuke the k8s-mirror accounts I have credentials for, and we're not sure what to do about the teams, the renaming, and the bots that apply labels based on their usage, etc.

All the accounts listed above have been removed from the org. We should decide which ones are worth going back and deleting, but as of now, they don't have access to anything in kubernetes/
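As a quick way to double-check a cleanup like this, the membership endpoint returns 404 once an account is no longer in the org. A hedged sketch (account names illustrative, not an official verification script):

```python
# Hedged sketch: confirm that accounts are no longer members of the org.
import os

import requests

ORG = "kubernetes"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

for user in ["k8s-bot", "k8s-slack", "k8s-mirror-auth-misc"]:  # illustrative
    # GET /orgs/{org}/members/{username}: 204 = member, 404 = not a member
    # (302 if the requesting token isn't itself an org member).
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/members/{user}",
        headers=HEADERS,
        allow_redirects=False,
    )
    verdict = "still a member" if resp.status_code == 204 else "not a member"
    print(f"{user}: {verdict} (HTTP {resp.status_code})")
```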

I removed the sig-testing accounts https://groups.google.com/forum/#!topic/kubernetes-sig-testing/h9MnFx348Gs

/assign

@idvoretskyi ping

what's the plan to call this issue done?

  • do nothing
  • update sigs.yaml to remove references to non-existent google groups
  • for the k8s-mirror, k8s-sig users

    • delete all referenced github accounts

    • delete all referenced google groups

  • or some combination of the above?

@spiffxp

"update sigs.yaml to remove references to non-existent google groups"

"for the k8s-mirror, k8s-sig users: delete all referenced google groups"

This - immediately.

"delete all referenced github accounts"

This - after I collect the 2FA keys from the GitHub account owners.

/area github-management

I will be adjusting the doc generator / sigs.yaml to stop linking the mirror google groups in a PR later today

EDIT: https://github.com/kubernetes/community/pull/2500
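For context on what "stop linking the mirror google groups" amounts to: the generated SIG docs come from sigs.yaml, so the change is essentially filtering the defunct group references out before regeneration. A rough sketch of that kind of pass, with field names that are assumptions rather than the actual sigs.yaml schema or the actual PR:

```python
# Hedged sketch: drop defunct mirror google-group references from a parsed
# sigs.yaml before regenerating docs. Field names ("sigs", "contact",
# "mirror_groups") are assumptions, not the real generator schema.
import yaml  # PyYAML

MIRROR_SUFFIXES = (
    "-api-reviews", "-bugs", "-feature-requests", "-misc",
    "-pr-reviews", "-proposals", "-test-failures",
)

with open("sigs.yaml") as f:
    data = yaml.safe_load(f)

for sig in data.get("sigs", []):
    contact = sig.get("contact", {})
    groups = contact.get("mirror_groups", [])  # hypothetical field
    # Keep only groups that don't match the retired mirror-list naming pattern.
    contact["mirror_groups"] = [
        g for g in groups if not str(g).endswith(MIRROR_SUFFIXES)
    ]

with open("sigs.yaml", "w") as f:
    yaml.safe_dump(data, f, sort_keys=False)
```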

FYI, I have just deleted the k8s-mirror-ibmcloud-misc GitHub account.

sig-auth mirror accounts and secondary mailing lists have been removed

k8s-external-contributor has been deleted

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Sent out the notification to https://groups.google.com/d/msg/kubernetes-sig-contribex/rzwrLOLHgR8/P8A8nuqhFQAJ; the deadline for lazy consensus is Feb 1st.

  • k8s-sig-onprem-api-reviews
  • k8s-sig-onprem-bugs
  • k8s-sig-onprem-feature-requests
  • k8s-sig-onprem-misc
  • k8s-sig-onprem-pr-reviews
  • k8s-sig-onprem-proposals
  • k8s-sig-onprem-test-failures
  • k8s-sig-openstack-api-reviews
  • k8s-sig-openstack-bugs
  • k8s-sig-openstack-feature-requests
  • k8s-sig-openstack-pr-reviews
  • k8s-sig-openstack-proposals
  • k8s-sig-openstack-test-failures

I can confirm that I can remove the above ^

SIG-OnPrem GitHub accounts and mailing lists have been successfully deleted.

The following groups own the mailing lists that still exist:

@kubernetes/sig-api-machinery-misc
@kubernetes/sig-architecture-misc-use-only-as-a-last-resort
@kubernetes/sig-autoscaling-misc
@kubernetes/sig-aws-misc
@kubernetes/sig-cli-misc
@kubernetes/sig-cloud-provider
@kubernetes/sig-cluster-lifecycle
@kubernetes/sig-multicluster-misc
@kubernetes/sig-release
and SIG-VMware.

I can delete these mailing lists after at least a +1 from a SIG lead of the respective group.

I'll take care of removing the GitHub teams.
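Removing the prefixed teams themselves can go through the documented org-team endpoints. A hedged sketch, assuming an org-admin token; the prefixes are illustrative and this is not the procedure that was actually used:

```python
# Hedged sketch: delete prefixed GitHub teams from the kubernetes org.
# Requires an org-admin token; the prefixes are illustrative only.
import os

import requests

ORG = "kubernetes"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
PREFIXES = ("sig-onprem-", "sig-openstack-")  # assumed team-slug prefixes

# GET /orgs/{org}/teams is paginated; walk every page and collect matching slugs.
slugs, page = [], 1
while True:
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/teams",
        headers=HEADERS,
        params={"per_page": 100, "page": page},
    )
    batch = resp.json()
    if not batch:
        break
    slugs += [t["slug"] for t in batch if t["slug"].startswith(PREFIXES)]
    page += 1

for slug in slugs:
    # DELETE /orgs/{org}/teams/{team_slug} deletes the team outright.
    resp = requests.delete(
        f"https://api.github.com/orgs/{ORG}/teams/{slug}", headers=HEADERS
    )
    print(slug, resp.status_code)  # 204 on success
```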

/assign @nikhita

/priority important-soon

/kind cleanup

/lifecycle active

/milestone May

/milestone Next

This has been sitting open for a while. What is our definition of done here?

There are lots of incremental updates here, but I no longer have a sense of:

  • whether all the github accounts have been deleted
  • whether all the associated mailing lists have been deleted
  • whether there is a desire to delete all of the associated prefixed teams (it's not clear to me how to gather data on whether these teams are being used or not)

The most recent update talks about a few stragglers from a mailing list perspective
https://github.com/kubernetes/community/issues/2324#issuecomment-462736648

/remove-priority important-soon
/priority important-longterm
Given the speed with which we've been moving on this, I would ideally like to scope it down to something we can close out this release cycle. I am comfortable calling the remaining accounts and mailing lists abandoned detritus if necessary, but I would like to at least understand what remains.

I think I had agreed in a contribex meeting to work on removing the GitHub teams, but revisiting this now, I'm confused: did we intend to delete only the GitHub accounts listed in the main issue body, or also the prefixed GitHub teams for each SIG?

whether there is a desire to delete all of the associated prefixed teams (it's not clear to me how to gather data on whether these teams are being used or not)

Anecdotally, some SIGs use the prefixed teams heavily, and some don't use them much. I don't really have any data on this. :/
I feel like there is some value in having these teams, but they could use an audit -- though this feels like a separate issue to me.
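One rough way to get at least some usage data would be counting literal @-mentions of each team via the issue search API. It's only a proxy (teams are also used for review requests and notification routing), and the query qualifiers below are assumptions:

```python
# Hedged sketch: count literal @-mentions of prefixed teams as a rough usage
# signal. Team names are illustrative; mention counts are only a proxy, since
# teams are also used for review requests and notifications.
import os
import time

import requests

HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

teams = ["kubernetes/sig-cli-bugs", "kubernetes/sig-testing-proposals"]  # illustrative

for team in teams:
    # Search titles, bodies, and comments for the literal mention text
    # (the in: qualifier set here is an assumption about what counts as usage).
    query = f'"@{team}" org:kubernetes in:title,body,comments'
    resp = requests.get(
        "https://api.github.com/search/issues",
        headers=HEADERS,
        params={"q": query, "per_page": 1},
    )
    total = resp.json().get("total_count", "?")
    print(f"@{team}: ~{total} issues/PRs mention it")
    time.sleep(2)  # stay under the search API rate limit
```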

If we intended to delete the GitHub accounts: @idvoretskyi, do you have access to all google groups/emails behind these accounts? Or is that only SIG leads?

Related: issue for establishing team structure for SIGs (https://github.com/kubernetes/community/issues/2323)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

This is the most-commented topic, so I support it! Devs need to communicate.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Will follow up at the next github admin meeting.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
