@spxtr Since I'm currently not too familiar with bootstrap.py myself, I'll leave it to you to define which functionality that is
Edit from Steve:
Phase: introduce pod utilities into test-infra

- clonerefs for cloning: https://github.com/kubernetes/test-infra/pull/6228
- initupload for started.json: https://github.com/kubernetes/test-infra/pull/6258
- gcsupload for the GCS upload utility: https://github.com/kubernetes/test-infra/pull/6898
- entrypoint for the wrapper: https://github.com/kubernetes/test-infra/pull/7058
- sidecar for finished.json and uploads: https://github.com/kubernetes/test-infra/pull/7058

Phase: aggregate pod utilities in config

- test-infra using the utilities

Phase: integrate pod utilities into plank

- plank backing in an opt-in manner: https://github.com/kubernetes/test-infra/pull/7348, https://github.com/kubernetes/test-infra/pull/7357, https://github.com/kubernetes/test-infra/pull/7362, https://github.com/kubernetes/test-infra/pull/7422, https://github.com/kubernetes/test-infra/pull/7436, https://github.com/kubernetes/test-infra/pull/7470, https://github.com/kubernetes/test-infra/pull/7528, https://github.com/kubernetes/test-infra/pull/7594

cc @stevekuznetsov
/assign @fejta
The motivation here is that it's currently gross that all of our testing images that run on prow need to have bootstrap.py and its dependencies (python, git, and gcloud) installed. Ideally our testing images could have only what they need for testing. If we want to run go test ./... then we should be able to use a lightweight go image.
The difficulty is that we do not want to force prow users to have a dependency on google cloud, so we have to be careful about it. A good start would be for prow to figure out how to do the initial git clone and merge the PR in an init container or something, then let things proceed from there.
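As a rough sketch of that idea (the repo, PR number, and image names here are all made up for illustration, and in practice prow's clonerefs utility plays the role of the hand-rolled clone step), an init container could do the clone and merge so the test container only needs its real dependencies:

```yaml
# Hypothetical pod: the init container clones the repo and merges the PR;
# the test container is a plain Go image with no python/git/gcloud.
apiVersion: v1
kind: Pod
metadata:
  name: example-test-pod
spec:
  initContainers:
  - name: clone
    image: alpine/git  # any image with git; clonerefs would fill this role
    command:
    - sh
    - -c
    - |
      git clone https://github.com/example-org/example-repo /workspace/repo
      cd /workspace/repo
      git fetch origin pull/123/head   # 123 is a made-up PR number
      git merge FETCH_HEAD
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  containers:
  - name: test
    image: golang:1.10  # lightweight: just the test toolchain
    command: ["sh", "-c", "cd /workspace/repo && go test ./..."]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace
    emptyDir: {}       # shared scratch space between init and test containers
```

The shared emptyDir volume is what lets the init container hand the checked-out source to the test container without either image needing the other's tools.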
In Origin we are toying with jobs like this as well -- create an image with the source pre-pulled and merged, layer another image with artifacts built, etc, and have our jobs run in pods that resolve those images so one build step can feed a lot of downstream jobs without having to push&pull lots of artifacts.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/remove-lifecycle stale
/assign
/assign @cjwagner
/assign @stevekuznetsov
This is happening 🎉 https://github.com/kubernetes/test-infra/pull/6228
This is still ongoing. 1.10 is looking unlikely though.
Added a checklist to this one to track progress.
Updated the checklist, highly encourage all to aid in the bike-shedding.
Some quality-of-life fixes are still necessary but the main impl has landed and we can start moving over jobs.
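For reference, opting a job into the pod utilities ends up looking roughly like this (a minimal sketch: the job name and image are hypothetical, while `decorate: true` is the opt-in switch the plank integration added):

```yaml
presubmits:
  kubernetes/test-infra:
  - name: pull-test-infra-go-test  # hypothetical job name
    decorate: true                 # opt in to pod utilities (clonerefs, entrypoint, sidecar, ...)
    spec:
      containers:
      - image: golang:1.10         # only the test toolchain; no bootstrap.py, python, git, or gcloud
        command:
        - go
        - test
        - ./...
```

With decoration enabled, the clone, wrapping, and GCS upload are injected around the job's own container, which is exactly what lets the test image stay lightweight.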
@BenTheElder can we sync to determine a set of jobs to migrate so this can be closed out?
Sure, I think the plan is to migrate all of the jobs, though. @cjwagner has been experimenting with podutils jobs in preparation. I'm not sure how much we want to move just yet, given some bugs had to be fixed along the way quite recently.
Yep, they are stable now, and AFAICT a lot of the errors were from private repos. For the migration, I would like us to figure out which jobs we are going to migrate first to test it -- create canaries? Then we could write a migrator script? These are not my jobs, so I would like to not be doing the actual migration. I'm happy to try to write a migration script to take a current PodSpec --> decorated spec and let someone on your end push the changes out as they'd like.
I wouldn't mind seeing them stable a little longer; one of the edge cases was just for jobs that don't check out refs, IIRC.

Sen and I had discussed just creating clean jobs that happen to test the same things, but I'm not sure who will own this. I'm a bit backlogged at the moment and I think Sen is too.

I don't think a naive migrator script will work well for us, because we need to decouple jobs from the other things bootstrap.py does for them; it's even in the image runners at the moment.
I am going to call this done, migration is a follow-up.
/close
:raised_hands: