This is an extension of the work we did for charts (https://k8s-testgrid.appspot.com/sig-apps#charts-gce).
For new applications/frameworks like Spark and Airflow, which are supported by the Big Data SIG, it is becoming important to perform continuous testing against K8s clusters. This proposal captures the steps that would help set up new tests involving other applications.
cc @krzyzacy @dimberman @kimoonkim
FYI there is some very simple support for invoking arbitrary applications right now per a previous request https://github.com/kubernetes/test-infra/pull/5030, but it has not been used yet.
Nice! Thanks for the link @BenTheElder. Any ideas on how we need to set up logging if we're using that mode to run an arbitrary command?
@foxish it will record a generic success/failure XML entry for the command (named via the --test-cmd-name flag), but if you want more logging, the best option without any changes is to write to $WORKSPACE/_artifacts, which bootstrap uploads and in which Gubernator/testgrid looks for JUnit-style .xml files.
We probably need to flesh this out more: --test-cmd is just env-expanded and executed with --test-cmd-args, and recorded in the kubetest XML output under the name given by --test-cmd-name. Pretty much the only other contract besides that XML entry is that the cluster should be managed by kubetest if you use some of the other flags.
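To make that concrete, here is a minimal sketch of what a --test-cmd entry point could look like when you want more than the generic pass/fail entry: it writes its own JUnit-style .xml under $WORKSPACE/_artifacts so bootstrap uploads it and Gubernator/testgrid can find it. The file name `junit_spark-smoke.xml`, the case name, and the `runSmokeTest` stub are all made up for illustration; the only pieces taken from the thread are the _artifacts location and the exit-code behavior.

```go
// junitwriter.go: sketch of a --test-cmd style entry point that records
// its own result as JUnit XML under $WORKSPACE/_artifacts.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

type testCase struct {
	Name    string  `xml:"name,attr"`
	Time    float64 `xml:"time,attr"`
	Failure string  `xml:"failure,omitempty"`
}

type testSuite struct {
	XMLName  xml.Name   `xml:"testsuite"`
	Tests    int        `xml:"tests,attr"`
	Failures int        `xml:"failures,attr"`
	Cases    []testCase `xml:"testcase"`
}

// runSmokeTest stands in for the real application-level test
// (e.g. a Spark or Airflow smoke test). This stub always passes.
func runSmokeTest() error {
	return nil
}

func main() {
	start := time.Now()
	testErr := runSmokeTest()

	suite := testSuite{
		Tests: 1,
		Cases: []testCase{{Name: "spark-smoke-test", Time: time.Since(start).Seconds()}},
	}
	if testErr != nil {
		suite.Cases[0].Failure = testErr.Error()
		suite.Failures = 1
	}

	// bootstrap uploads everything under $WORKSPACE/_artifacts; JUnit-style
	// .xml files placed there show up in Gubernator/testgrid.
	artifacts := filepath.Join(os.Getenv("WORKSPACE"), "_artifacts")
	if err := os.MkdirAll(artifacts, 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	f, err := os.Create(filepath.Join(artifacts, "junit_spark-smoke.xml"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	f.WriteString(xml.Header)
	xml.NewEncoder(f).Encode(suite)
	f.Close()

	if testErr != nil {
		// A non-zero exit also flips kubetest's own --test-cmd-name entry to failure.
		os.Exit(1)
	}
}
```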
cc @ssuchter
FWIW, based on my experience using prow for tensorflow/k8s, here's how I'm approaching CI/CD with prow.
One pain point for me right now is properly converting the output of each step to a JUnit file and uploading it to GCS for Gubernator (see for example tensorflow/k8s#229). Libraries or other tooling that play well with Argo and save me from manually building and uploading JUnit files would be very helpful.
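Absent such tooling, one possible shape is a small per-step wrapper, sketched below under the assumption that each pipeline step (e.g. an Argo step container) can be wrapped by a binary that runs the real command, converts the result into a JUnit case, and copies the file next to the job's other artifacts with gsutil. The wrapper name, output file name, and GCS destination are all illustrative, not an established convention.

```go
// wrapstep.go: hypothetical per-step wrapper for pipelines that must produce
// and upload their own JUnit results (bootstrap is not doing it for them).
package main

import (
	"encoding/xml"
	"fmt"
	"os"
	"os/exec"
	"time"
)

type suite struct {
	XMLName  xml.Name `xml:"testsuite"`
	Tests    int      `xml:"tests,attr"`
	Failures int      `xml:"failures,attr"`
	Case     struct {
		Name    string  `xml:"name,attr"`
		Time    float64 `xml:"time,attr"`
		Failure string  `xml:"failure,omitempty"`
	} `xml:"testcase"`
}

func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: wrapstep <step-name> <cmd> [args...]")
		os.Exit(2)
	}
	stepName, cmd := os.Args[1], os.Args[2]

	start := time.Now()
	out, runErr := exec.Command(cmd, os.Args[3:]...).CombinedOutput()
	os.Stdout.Write(out) // keep the step's own logs visible

	var s suite
	s.Tests = 1
	s.Case.Name = stepName
	s.Case.Time = time.Since(start).Seconds()
	if runErr != nil {
		s.Failures = 1
		s.Case.Failure = fmt.Sprintf("%v\n%s", runErr, out)
	}

	junit := fmt.Sprintf("junit_%s.xml", stepName)
	f, err := os.Create(junit)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	f.WriteString(xml.Header)
	xml.NewEncoder(f).Encode(s)
	f.Close()

	// Copy next to the job's other artifacts so Gubernator can pick it up.
	// The destination is a made-up example; use whatever path your job owns.
	dest := "gs://example-prow-bucket/logs/my-job/latest/artifacts/"
	if err := exec.Command("gsutil", "cp", junit, dest).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "gsutil upload failed:", err)
	}

	if runErr != nil {
		os.Exit(1)
	}
}
```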
Documenting here as a TODO item for this issue: we should remove the working-directory check in https://github.com/kubernetes/test-infra/blob/40e8b22e3c37457c973c118b2aefa097b1d063ed/kubetest/main.go#L386-L388.
It assumes that we're always downloading and running from within the kubernetes directory, which may not be true for other application-level tests.
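For illustration only (this is not the actual kubetest code), the change could amount to gating the check behind whether the selected tests actually need the kubernetes source tree; `checkWorkingDir`, `requiresKubeSource`, and the error wording here are hypothetical names for this sketch.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkWorkingDir only enforces the "run from the kubernetes checkout" rule
// when the test being run actually needs the kubernetes source tree.
func checkWorkingDir(requiresKubeSource bool) error {
	if !requiresKubeSource {
		// Application-level tests (Spark, Airflow, ...) can run from anywhere.
		return nil
	}
	pwd, err := os.Getwd()
	if err != nil {
		return err
	}
	if filepath.Base(pwd) != "kubernetes" {
		return fmt.Errorf("must run from the kubernetes directory, not %q", pwd)
	}
	return nil
}

func main() {
	if err := checkWorkingDir(false); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("working-directory check skipped or passed")
}
```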
cc @krzyzacy
This seems like a candidate for work that would land in the shared test framework repo. Registering shell-script paths inside kubetest's code to run against the cluster it stands up, instead of cleanly passing an admin.kubeconfig (or similar) from kubetest to the test code, seems unmanageable at scale. Why not have one binary (kubetest) in charge of managing cluster lifecycle and communicate with another (test) binary via connection information?
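As a sketch of that contract, assuming the test binary receives nothing from kubetest except a kubeconfig path (via $KUBECONFIG or a flag), it could bootstrap its own client with client-go; the flag name and the version check below are illustrative.

```go
// testbinary.go: sketch of a stand-alone test binary that only needs
// connection information handed over by the cluster lifecycle tool.
package main

import (
	"flag"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", os.Getenv("KUBECONFIG"),
		"path to the admin kubeconfig handed over by the cluster lifecycle tool")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot build client:", err)
		os.Exit(1)
	}

	// Minimal proof that the handoff worked: ask the apiserver for its version.
	// Real application-level tests (Spark jobs, Airflow DAG runs, ...) would start here.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cluster not reachable:", err)
		os.Exit(1)
	}
	fmt.Println("connected to Kubernetes", v.GitVersion)
}
```

One nice property of this split is that the test binary stays deployer-agnostic: it doesn't care how the cluster was stood up, only where the kubeconfig is.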