Test-infra: Support/document how to do multitenancy in prow

Created on 20 Apr 2018 · 14 Comments · Source: kubernetes/test-infra

This has come up in two different discussions I have had this week. We need to be able to let users bring in their own bots/tokens and accommodate all of them under a single prow deployment.

Strawman implementation:

  • run hook as an external plugin with its own OAuth secret and config files (config, plugin); this is possible today but needs to be documented
  • split reporting into its own controller, reuse a single instance of plank (@krzyzacy's proposal)
  • run a different instance of tide for merging?

We may also want to do something about external plugin config (https://github.com/kubernetes/test-infra/issues/7262), but that is not a hard requirement.

/area prow
/kind enhancement
/kind docs

@stevekuznetsov @cjwagner @fejta @BenTheElder thoughts?

area/prow kind/documentation kind/feature lifecycle/rotten

All 14 comments

Where would the other deployments live? In a separate cluster? Then you're incurring maintenance cost anyway; why not just deploy a full prow?

Where would the other deployments live? In a separate cluster? Then you're incurring maintenance cost anyway; why not just deploy a full prow?

Same cluster, I would expect, but at the point where you need to redeploy half of the components, a separate deployment may make sense. There are cases, though, where a full prow deployment is not needed; for example, we need to reuse prow commands without running tests (the trigger plugin is off) in a repository we don't control.

There are cases, though, where a full prow deployment is not needed; for example, we need to reuse prow commands without running tests (the trigger plugin is off) in a repository we don't control.

I guess even in this case you can argue that you don't need to run hook as an external plugin...

Ah, I see: you are not asking for secrecy on the tokens, just for sharing them. We should just do a client pool and index into it by org/repo to allow multiple tokens for reporting.
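
A minimal sketch of that idea in Go, assuming clients are built from plain OAuth tokens; `ClientPool`, `NewClientPool`, and `Get` are hypothetical names for illustration, not existing prow code:

```go
package ghpool

import (
	"context"
	"fmt"
	"net/http"

	"golang.org/x/oauth2"
)

// ClientPool maps "org/repo" (or just "org") keys to authenticated HTTP
// clients built from per-tenant tokens, with a shared default as fallback.
type ClientPool struct {
	defaultClient *http.Client
	clients       map[string]*http.Client
}

// NewClientPool builds one authenticated client per token.
// tokens maps an "org" or "org/repo" key to its OAuth token.
func NewClientPool(defaultToken string, tokens map[string]string) *ClientPool {
	p := &ClientPool{
		defaultClient: oauthClient(defaultToken),
		clients:       map[string]*http.Client{},
	}
	for key, token := range tokens {
		p.clients[key] = oauthClient(token)
	}
	return p
}

func oauthClient(token string) *http.Client {
	src := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
	return oauth2.NewClient(context.Background(), src)
}

// Get returns the most specific client for org/repo: repo-level first,
// then org-level, then the shared default.
func (p *ClientPool) Get(org, repo string) *http.Client {
	if c, ok := p.clients[fmt.Sprintf("%s/%s", org, repo)]; ok {
		return c
	}
	if c, ok := p.clients[org]; ok {
		return c
	}
	return p.defaultClient
}
```

Reporting code would then call `Get(org, repo)` before talking to GitHub, so each tenant's comments and statuses show up under their own bot.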

Let me loop in the openshiftio folks who were asking about this

Why would a repo want to supply their own bot? Just to control the name and avatar?

Why would a repo want to supply their own bot? Just to control the name and avatar?

Yes, it may sound naive, but it's important when you have totally unrelated projects under the same deployment. You also control the lifetime of the token: assuming prow supports secret reload, and the platform you use lets you update the token, you can rotate your secrets without asking somebody else to do it for you.

Again, this on its own does not justify not running a separate deployment altogether; I want us to figure out the best path in this issue and document it.

@spxtr

Alternative:

Client pools, with secrets reloaded/reread from a directory. No need to restart services; just add a new key to the secret.
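
A sketch of the reload part, assuming the secret is mounted as one file per key (the default for Kubernetes secret volumes); the `Agent` type and function names here are made up for illustration:

```go
package secrets

import (
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// Agent keeps an up-to-date view of every token file in dir, so adding a
// new key to the secret is picked up without restarting the service.
type Agent struct {
	dir    string
	mu     sync.RWMutex
	tokens map[string]string // file name -> token value
}

// Start loads the directory once and then re-reads it every interval.
func Start(dir string, interval time.Duration) (*Agent, error) {
	a := &Agent{dir: dir, tokens: map[string]string{}}
	if err := a.reload(); err != nil {
		return nil, err
	}
	go func() {
		for range time.Tick(interval) {
			// On error, keep the last good view; a real implementation
			// would log the failure.
			_ = a.reload()
		}
	}()
	return a, nil
}

func (a *Agent) reload() error {
	entries, err := os.ReadDir(a.dir)
	if err != nil {
		return err
	}
	fresh := map[string]string{}
	for _, e := range entries {
		// Skip the ..data/..timestamp entries that secret volume mounts create.
		if e.IsDir() || strings.HasPrefix(e.Name(), "..") {
			continue
		}
		b, err := os.ReadFile(filepath.Join(a.dir, e.Name()))
		if err != nil {
			return err
		}
		fresh[e.Name()] = strings.TrimSpace(string(b))
	}
	a.mu.Lock()
	a.tokens = fresh
	a.mu.Unlock()
	return nil
}

// Token returns the current value for a key, e.g. "my-org-bot-token".
func (a *Agent) Token(name string) (string, bool) {
	a.mu.RLock()
	defer a.mu.RUnlock()
	t, ok := a.tokens[name]
	return t, ok
}
```

With something like this, rotating a token or onboarding a new tenant is just an update to the mounted secret; no component restart is required.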

Main question:

  • How can we keep a dynamic map of tokens -> repos?

How can we keep a dynamic map of tokens -> repos?

Assuming each repo has its own .prow.yml config, it should be easier to do this.
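
To make that concrete, here is one way the dynamic map could be derived, assuming a per-repo `.prow.yml` may name the secret key that holds the repo's bot token; the `tokenSecret` field is hypothetical, not an existing prow option:

```go
package tenancy

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// RepoConfig is the fragment of a hypothetical in-repo .prow.yml that names
// which key in the shared secret holds this repo's bot token.
type RepoConfig struct {
	TokenSecret string `yaml:"tokenSecret"`
}

// BuildTokenMap parses the raw .prow.yml contents keyed by "org/repo" and
// returns a map from "org/repo" to the secret key it wants to report with.
func BuildTokenMap(rawConfigs map[string][]byte) (map[string]string, error) {
	out := map[string]string{}
	for orgRepo, raw := range rawConfigs {
		var rc RepoConfig
		if err := yaml.Unmarshal(raw, &rc); err != nil {
			return nil, fmt.Errorf("parsing .prow.yml for %s: %v", orgRepo, err)
		}
		if rc.TokenSecret != "" {
			out[orgRepo] = rc.TokenSecret
		}
	}
	return out, nil
}
```

Combined with the client pool and the reloading secret agent sketched above, hook and the reporting controller could pick the right token per org/repo without a restart.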

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
