Based on the discussion in https://github.com/kubernetes/test-infra/issues/7371:
Since Zuul supports GitHub webhooks, a Prow plugin that makes similar calls to the Zuul GitHub webhook endpoint would make the integration very simple. The current approaches, the GitHub App and the plain webhook, both change the user experience for those specific jobs. Specifically, we should be able to /test and /retest the jobs that run on Zuul just like any other jobs.
cc @fejta @calebamiles
/assign @dims
In order to support /test and /retest, we need:
- agent: openlab or agent: zuul (sketch below)
- POST or whatever command to zuul/openlab when new ProwJobs arrive with the agent: openlab/zuul; this is how the jenkins integration works. Something like what tristanC posted
- App installed for the cloud-provider-openstack repository
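For illustration, a hypothetical presubmit entry with such an agent might look like the sketch below; the zuul agent type, job name, and context are assumptions, not anything prow supports today.

```yaml
presubmits:
  kubernetes/cloud-provider-openstack:
  - name: pull-cloud-provider-openstack-e2e   # hypothetical job name
    agent: zuul               # hypothetical agent; would need a new controller
    context: openlab/e2e      # status context reported back to GitHub
    always_run: true
    rerun_command: /test pull-cloud-provider-openstack-e2e
    trigger: ((?m)^/test( all| pull-cloud-provider-openstack-e2e),?(\s+|$))
```

With this in place, /test and /retest would behave exactly as they do for kubernetes-agent jobs, since the trigger plugin only creates a ProwJob and leaves execution to whatever controller handles that agent.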
Thanks @calebamiles !
I don't think you need to involve github webhooks from the zuul side if you want prow integration.
Github webhooks ---> prow hook endpoint ---[CREATE prowjobs]---> prowjob controller ----> github status
You only need to write a prowjob controller that knows how to poll the zuul api. At least, that's what we do today with Jenkins.
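Concretely, hook turns the webhook into a ProwJob object, and spec.agent is what each controller keys off of; roughly (fields abbreviated, values illustrative):

```yaml
apiVersion: prow.k8s.io/v1
kind: ProwJob
metadata:
  name: 8f21ca12-35d9-11e8-b5a1-0a58ac101211   # generated by hook
spec:
  agent: jenkins          # a zuul controller would instead watch for agent: zuul
  type: presubmit
  job: pull-foo-e2e       # illustrative job name
  refs:
    org: kubernetes
    repo: cloud-provider-azure
    pulls:
    - number: 42
      sha: deadbeef
status:
  state: triggered        # the controller advances this and reports status to GitHub
```

The polling controller only has to list ProwJobs with its agent, map each to a build on the external system, and copy the build result back into status.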
We have a similar requirement for setting up e2e CI for 'kubernetes/cloud-provider-azure'.
Basically we could set up a standard Jenkins server for running e2e test jobs, and it works well now with GitHub hooks directly. Thus one way to integrate it is to have prow 'forward' the GitHub webhook call to a user-provided webhook endpoint.
@kargakis @fejta Could you please help with the following questions related to 'prow & jenkins'?
Following the flow: webhook -> prow -> prow jenkins-operator -> jenkins server.
If we create a job with agent: jenkins, a jenkins controller would pick it up and call the jenkins server.
However, it seems jenkins-operator deployment was removed here: https://github.com/kubernetes/test-infra/pull/6679
Does it mean we need to bring it back to get this flow to work?
The jenkins server URL needs to be configured at startup; does that mean prow can only support a single jenkins instance?
Suppose two vendors provide their own jenkins server endpoints; would they then need to create two separate jenkins deployments and add two agent types?
Where does the auth token go? It could be passed through secrets on the prow cluster, but how are the values passed from the (jenkins) vendors?
Jenkins supports public key authentication; maybe one approach is to have prow publish an SSH public key, and then jenkins vendors can set it up on their servers.
Please kindly correct me if there's any misunderstanding about prow and jenkins controller.
Since many CI services support standard GitHub webhooks, I'd suggest that this could be an option for a prow backend.
@karataliu
jenkins-operator supports selecting prowjobs via label selectors. As a matter of fact, we run two jenkins-operators in OpenShift CI as part of our prow deployment (see jenkins_operator.yaml). You will need to label your jobs accordingly (see example here).
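A sketch of that wiring, assuming the operator's label-selector flag; the label key and value here are illustrative:

```yaml
# Job config: label the jobs meant for a specific jenkins-operator.
presubmits:
  kubernetes/cloud-provider-azure:
  - name: pull-cloud-provider-azure-e2e   # illustrative job
    agent: jenkins
    labels:
      jenkins-master: azure-jenkins       # illustrative label
# jenkins-operator deployment: only pick up prowjobs carrying that label.
#   args:
#   - --label-selector=jenkins-master=azure-jenkins
```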
All Jenkins secrets need to be mounted as volumes in your jenkins-operator. There is support for basic auth, bearer-token auth (this is OpenShift-specific afaik), and cert-based auth is also supported (you can provide a cert, key, and an optional CA cert).
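A minimal sketch of the basic-auth variant; the flag names are from memory and the image tag, URL, and secret name are illustrative, so double-check against the operator's actual flags:

```yaml
spec:
  containers:
  - name: jenkins-operator
    image: gcr.io/k8s-prow/jenkins-operator:latest   # illustrative tag
    args:
    - --jenkins-url=https://jenkins.example.com      # the vendor's jenkins endpoint
    - --jenkins-user=prow-bot                        # basic-auth user
    - --jenkins-token-file=/etc/jenkins/token        # token read from the secret below
    volumeMounts:
    - name: jenkins-token
      mountPath: /etc/jenkins
      readOnly: true
  volumes:
  - name: jenkins-token
    secret:
      secretName: jenkins-token   # created out of band on the prow cluster
```

This also answers the earlier question about how vendor credentials get in: the vendor hands the token to the prow admins, who store it as a secret on the prow cluster.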
Btw, hook (the prow entrypoint service that listens for GitHub webhooks) already supports forwarding webhooks to so-called "external plugins". If you wish to go that route, you could potentially write a plugin that decrypts the content of the webhook (unclear whether you can send webhooks to hook without using the hmac secret) and forwards it to Jenkins, but in that scenario you are completely bypassing prow, which defeats the purpose of using prow in the first place.
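That forwarding is configured per repo in plugins.yaml; a sketch, with the plugin name and endpoint made up:

```yaml
external_plugins:
  kubernetes/cloud-provider-azure:
  - name: jenkins-forwarder              # hypothetical external plugin
    endpoint: http://jenkins-forwarder   # its in-cluster service address
    events:                              # event types hook re-posts to it
    - issue_comment
    - pull_request
```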
Does it mean we need to bring it back to get this flow to work?
Now I realized that you are asking this for a kubernetes project. Are you going to run your e2e job as part of the k8s prow deployment or are you thinking of setting up your own prow deployment that will back kubernetes/cloud-provider-azure? Why is Jenkins a requirement?
@kargakis Thanks for the detailed information.
Yes; in the previous thread I was not referring to running a new prow instance, but to the current prow instance backing the kubernetes/* projects.
Our goal is to add an E2E validation check job for kubernetes/cloud-provider-azure. The job involves calling some scripts to build images, setting up a cluster on Azure, and then running the E2E suite against it. We already have a job on a jenkins server which can handle this.
Prow jobs run inside containers, but our e2e job involves building container images, and I am not sure whether that's possible via prow jobs. Thus one solution is to have the prow instance call an external jenkins.
Do all kubernetes/* projects share a single prow instance? It looks like the kubernetes/cloud-provider-* projects already have prow integration and are able to run unit tests.
Any suggestions for getting wired up with the current prow instance? Should the cloud provider vendors start hosting their own prow instances?
@BenTheElder @cjwagner are admins of the k8s Prow installation and can answer your questions better.
Generally, @karataliu, you will be asked to run your job in a container. If you can build the images beforehand, that would be perfect. I do not think there is any chance that the k8s Jenkins deployments are going to come back; it was a very long process to remove them. I know other jobs build artifacts and stage them somewhere in the cloud for other jobs to use; perhaps Ben can talk about how that works in k8s testing. I would not expect cloud providers to need their own Prow instances, but if you have your own functional Jenkins server, triggering jobs on it while those jobs are migrated to containers may be appropriate.
Do all kubernetes/* projects share a single prow instance? It looks like the kubernetes/cloud-provider-* projects already have prow integration and are able to run unit tests.
Yes, there is one prow instance for kubernetes/* and then some; it also runs e.g. the kops-aws testing.
Prow jobs run inside containers, but our e2e job involves building container images, and I am not sure whether that's possible via prow jobs. Thus one solution is to have the prow instance call an external jenkins.
We support both docker-in-docker and daemonless builds (e.g. bazel with the image rules) for creating images on Prow; the former will let you docker build etc.
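A sketch of the docker-in-docker flavor; the job name, image, and build script are illustrative:

```yaml
presubmits:
  kubernetes/cloud-provider-azure:
  - name: pull-cloud-provider-azure-images   # illustrative job
    agent: kubernetes
    spec:
      containers:
      - image: gcr.io/example/dind-builder   # illustrative image that ships dockerd
        securityContext:
          privileged: true                   # required for docker-in-docker
        command:
        - ./hack/build-images.sh             # hypothetical script: start dockerd, docker build, docker push
```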
Any suggestions for getting wired up with the current prow instance? Should the cloud provider vendors start hosting their own prow instances?
So far the pattern for that is creating jobs in the shared prow instance to run the tests etc., and for creating external clusters we integrate the provider into kubetest and pass it credentials for some testing account on the provider (e.g. we manage accounts with GCP credit, and Amazon provides an account via the CNCF for testing).
@stevekuznetsov @BenTheElder Thanks both, that's very helpful.
So the recommended way is to add new jobs to the shared prow config pool, right?
I can see different ways for creating clusters based on deployment and provider type:
https://github.com/kubernetes/test-infra/blob/1668f43b2b94f88d7860060e3669b9386655eadb/scenarios/kubernetes_e2e.py#L543
Currently neither the 'bash' nor the 'kops' deployment supports Azure; we usually use acs-engine, which resembles 'kops' in how it sets up a cluster.
What's the recommended way to add a new provider? By adding a new deployment type or something similar?
Besides, for an external vendor to provide account information via the CNCF, is there a procedure document to follow?
So the recommended way is to add new jobs to the shared prow config pool, right?
I think so; generally the cost of running the job from the Prow end is pretty minimal, and the wiring is already there (kubetest version maintenance, etc.). Hopefully in the nearish future this Prow deployment will be explicitly under the CNCF as well.
That said you definitely could hook up some other service someone else runs (Jenkins?), and we can try to figure out how to hook everything together. GKE EngProd migrated all of the tests we run to Prow / Kubernetes / Pods instead of Jenkins Jobs because we can easily check in the config publicly and Kubernetes has been much more stable for us at scale. There's also something nice about running all Kubernetes tests on Kubernetes :-)
Currently neither the 'bash' nor the 'kops' deployment supports Azure; we usually use acs-engine, which resembles 'kops' in how it sets up a cluster.
kops, bash etc. should all be under kubetest, xref: https://github.com/kubernetes/test-infra/issues/7624
We should be able to add a new deployer to kubetest (there's already a PR out by @adelina-t I believe)
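Once a deployer like that merges, a job should just be able to select it via kubetest's flags; a sketch, where the acsengine deployer name and the image tag are assumptions until that PR lands:

```yaml
presubmits:
  kubernetes/cloud-provider-azure:
  - name: pull-cloud-provider-azure-e2e      # illustrative job
    agent: kubernetes
    spec:
      containers:
      - image: gcr.io/k8s-testimages/kubekins-e2e:latest   # illustrative tag
        args:                                # flags passed through to kubetest
        - --deployment=acsengine             # hypothetical new kubetest deployer
        - --provider=skeleton                # run e2e without cloud-specific hooks
        - --test_args=--ginkgo.focus=\[Conformance\]
```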
Besides, for an external vendor to provide account information via the CNCF, is there a procedure document to follow?
I don't have any actual insight into this and don't actually have (or need) access to the AWS account(s), I'd reach out to @dankohn @idvoretskyi on the CNCF side though if this route makes sense to everyone.
CNCF is basically looking for a request to come in from SIG-testing, ContribX, etc. for us to hold account credits or similar. With a reasonably well-formed request, CNCF is happy to cooperate (and @idvoretskyi will probably be the implementer).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Lots of conversation here about how best to integrate. Xref https://github.com/kubernetes/test-infra/issues/8652
/close