Our team (KUnit) is working on adding unit testing to the Linux kernel and, as such, is interested in a solution for running presubmits on patches sent to the mailing list and replying back with results. This would conceivably be easiest if we could run a mail server on Prow to hold received patches and apply them to the upstream kernel during the cloning phase. The mail server should also send emails back to the mailing list, functionality which has been requested here: #8668
/area prow
/kind feature
@krzyzacy
cc @BenTheElder @kargakis @stevekuznetsov @cjwagner @fejta @spiffxp in case there are concerns.
Discussed offline with @BenTheElder and @Katharine: storing patches in a CRD might not be a good idea. We might be able to bring up a git server, figure out how to map patches into git branches, and assign a subdomain so jobs can clone from the git server.
That part doesn't have to be part of Prow; it can just be a Kubernetes controller. We can still use the existing clonerefs logic to handle testing. You can even trigger a ProwJob via Pub/Sub (@sebastienvas is going to take a stab at that). Then the only thing you need will be a Prow reporter :-)
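A minimal sketch of what such a controller could do for each incoming patch, assuming a local working clone of upstream and an internal git remote that jobs can later clone from; the paths, remote name, and branch scheme here are made up purely for illustration:

```go
// patchbranch.go: hypothetical helper for the git-server idea above. Given a
// patch series in mbox format, apply it on top of upstream and publish it as
// a per-patch branch on an internal git server that jobs can clone from.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run executes a git command inside the local working clone and fails loudly.
func run(repoDir string, args ...string) {
	cmd := exec.Command("git", args...)
	cmd.Dir = repoDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("git %v: %v", args, err)
	}
}

func main() {
	// Illustrative values: a working clone of upstream, a patch pulled off the
	// mailing list, and an ID (e.g. derived from the Message-Id) naming the branch.
	repoDir := "/var/lib/patchserver/linux"
	mboxPath := "/var/spool/patches/kunit-example-1.mbox"
	branch := fmt.Sprintf("patches/%s", "kunit-example-1")

	run(repoDir, "fetch", "origin", "master")            // refresh the upstream base
	run(repoDir, "checkout", "-B", branch, "FETCH_HEAD") // branch off the fetched tip
	run(repoDir, "am", mboxPath)                         // apply the mailed series
	run(repoDir, "push", "internal", branch)             // publish to the internal git server
}
```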
Please make sure this has a reviewed design doc before any implementation begins
Yeah, just trying to throw out some initial ideas here; I'll punt to @AviKndr to write a design doc and potentially present the idea in sig-testing.
This functionality could be achieved by running your own git server with pullable refs for each submitted patch, then creating ProwJob CRDs directly and requesting a decorated job that pulls from your server. None of these components would necessarily need to live alongside the other Prow components, but the service that creates the jobs would need RBAC to create ProwJob CRDs on the Prow cluster.
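To make the ProwJob side concrete, here is a hedged sketch that builds such a ProwJob and marshals it to YAML for `kubectl apply` (or for creation through a ProwJob clientset). It assumes the types in `k8s.io/test-infra/prow/apis/prowjobs/v1`; the job name, image, org/repo, branch, and git server URL are placeholders, and the details may differ from whatever the design doc settles on.

```go
// makeprowjob.go: hedged sketch of a ProwJob CRD whose refs point at a
// self-hosted git server instead of GitHub. Placeholders throughout.
package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	prowapi "k8s.io/test-infra/prow/apis/prowjobs/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pj := prowapi.ProwJob{
		TypeMeta: metav1.TypeMeta{APIVersion: "prow.k8s.io/v1", Kind: "ProwJob"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "kunit-presubmit-kunit-example-1", // normally generated per submission
			Namespace: "default",                         // wherever Prow stores its ProwJobs
		},
		Spec: prowapi.ProwJobSpec{
			Job:    "kunit-presubmit", // hypothetical job name
			Type:   prowapi.PresubmitJob,
			Agent:  prowapi.KubernetesAgent,
			Report: false, // reporting back to the mailing list would need a custom reporter
			Refs: &prowapi.Refs{
				Org:      "kunit", // illustrative org/repo
				Repo:     "linux",
				BaseRef:  "patches/kunit-example-1", // the per-patch branch on the git server
				CloneURI: "https://git.example.internal/linux.git",
			},
			PodSpec: &corev1.PodSpec{
				Containers: []corev1.Container{{
					Image:   "gcr.io/example/kunit-runner:latest", // placeholder image
					Command: []string{"./tools/testing/kunit/kunit.py", "run"},
				}},
			},
			// Pod decoration (clonerefs, initupload, sidecar) comes from plank's
			// decoration defaults, or an explicit DecorationConfig could go here.
		},
		Status: prowapi.ProwJobStatus{
			// plank only acts on jobs that start in the triggered state.
			State:     prowapi.TriggeredState,
			StartTime: metav1.Now(),
		},
	}

	out, err := yaml.Marshal(pj)
	if err != nil {
		log.Fatalf("marshal ProwJob: %v", err)
	}
	// `kubectl apply -f` this output against the Prow cluster; the creating
	// service needs RBAC to create prowjobs.prow.k8s.io resources.
	fmt.Print(string(out))
}
```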
True, and as @krzyzacy mentioned, this almost requires that we split reporting out of plank.
Hello everyone, here is an updated design doc for this functionality: link
In summary:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.