Hi,
I'm trying to upgrade the Go version for ARM in PR https://github.com/kubernetes/kubernetes/pull/38926
It would be very good to test the current master of golang/go against the k8s codebase.
We could have a CI job that fetches the latest HEAD of Go, builds a slightly modified version of the cross-image locally, compiles k8s, spins up a lightweight cluster (maybe just with hack/local-up-cluster if we're lazy), and runs the conformance test suite on it.
This way we would catch breaking changes in Go very early on, rather than only when we're trying to upgrade from 1.x to 1.(x+1).
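A minimal sketch of what such a periodic job could run, assuming the cross-image already carries a released Go under /usr/local/go to bootstrap with; the paths and the e2e invocation are illustrative, not an existing job definition:

#!/usr/bin/env bash
# Sketch of a "test k8s against Go tip" job (hypothetical).
set -o errexit -o nounset -o pipefail

GO_TIP_ROOT=/usr/local/go-tip   # assumed install location for the freshly built toolchain

# 1. Fetch and build current Go master, bootstrapping with the released Go in the image.
git clone --depth=1 https://go.googlesource.com/go "${GO_TIP_ROOT}"
(cd "${GO_TIP_ROOT}/src" && GOROOT_BOOTSTRAP=/usr/local/go ./make.bash)

# 2. Build Kubernetes with the Go-tip toolchain first on PATH.
export PATH="${GO_TIP_ROOT}/bin:${PATH}"
make quick-release

# 3. Spin up a lightweight cluster and run the conformance suite against it.
#    (In practice the job would wait for the cluster to come up first.)
hack/local-up-cluster.sh &
go run hack/e2e.go -- --test --test_args="--ginkgo.focus=\[Conformance\]"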
Is this something we can do soon?
@ixdy @spxtr @rmmh @jessfraz
During which phases of the release process are we likely to accept a new Go version? Having a CI job tracking Go tip might not be worth the flakiness of testing bleeding-edge features; would tracking its release tags (1.8-beta1, 1.8-beta2, etc.) make more sense?
@rmmh Against HEAD.
This becomes so much more critical when issues like https://github.com/kubernetes/kubernetes/issues/45216 occur.
We could just have a special kube-cross image that always includes the latest Go repo compiled from source inside the image, like this:
# Base on the regular kube-cross image and add a Go-built-from-HEAD toolchain alongside it.
FROM gcr.io/google_containers/kube-cross:v1.8.1-1
ENV K8S_HEAD_GOROOT=/usr/local/go_k8s_head
# Assumed list of cross-compile targets; adjust to match what kube-cross builds for.
ARG platforms="linux/amd64 linux/arm linux/arm64 linux/ppc64le"
RUN mkdir -p ${K8S_HEAD_GOROOT} \
&& curl -sSL https://github.com/golang/go/archive/master.tar.gz | tar -xz -C ${K8S_HEAD_GOROOT} --strip-components=1
# Bootstrap Go tip with the released Go already in the image, then pre-build the
# standard library for each target using the freshly built toolchain.
RUN cd ${K8S_HEAD_GOROOT}/src \
&& GOROOT_FINAL=${K8S_HEAD_GOROOT} GOROOT_BOOTSTRAP=/usr/local/go ./make.bash \
&& for platform in ${platforms}; do GOOS=${platform%/*} GOARCH=${platform##*/} ${K8S_HEAD_GOROOT}/bin/go install std; done
and then just build bins with GOROOT=${K8S_HEAD_GOROOT} and run scalability tests.
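Inside that image, usage might look roughly like this (a sketch; quick-release is just one example target):

# Hypothetical usage inside the modified kube-cross image: put the HEAD toolchain
# first on PATH so the normal build scripts pick it up.
export GOROOT=/usr/local/go_k8s_head
export PATH="${GOROOT}/bin:${PATH}"
make quick-release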
cc @wojtek-t @ixdy @bradfitz
Yes, please, against HEAD. The point is to help find Go bugs the same day they're introduced, rather than 8 months after it's too late to do anything about them.
Which Kubernetes tests do we run against Go tip? We certainly can't duplicate all of our testing.
We also shouldn't move forward on this without finding a SIG owner - perhaps scalability? @kubernetes/sig-scalability-misc
(Without an owner, I worry that failing tests will get ignored and this effort will be for naught.)
@ixdy: These labels do not exist in this repository: sig/scalability.
In response to this:
Which Kubernetes tests do we run against Go tip? We certainly can't duplicate all of our testing.
We also shouldn't move forward on this without finding a SIG owner - perhaps scalability? @kubernetes/sig-scalability-misc
(Without an owner, I worry that failing tests will get ignored and this effort will be for naught.)
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
oh, also, this should probably be an issue in the kubernetes/kubernetes repo, not test-infra.
@ixdy Why kubernetes/kubernetes?
I think the scalability team can be the owner of this, and we can run kubemark against Go tip as a non-blocking test suite.
It's a fine line, but this issue is more about testing Kubernetes than about the test-infra supporting Kubernetes. It might get more visibility in the kubernetes repo.
@bradfitz are there any binary artifacts of the toolchain from Go's CI that we could use, rather than continuously rebuilding the Go toolchain ourselves? We have Bazel 98% working and it'd be really handy if I could just point go_repositories somewhere and have it work.
@bradfitz are there any binary artifacts of the toolchain from Go's CI that we could use, rather than continuously rebuilding the Go toolchain ourselves?
There are binary artifacts we keep around from our CI, but we don't yet(?) have a supported/stable interface for making them available to others, despite their URLs currently being public:
curl --silent https://storage.googleapis.com/go-build-snap/go/linux-amd64/2d429f01bd917c42e66e1991eab9c2e33d813d16.tar.gz | tar -zt
But running src/make.bash to compile Go (and not run its tests) is like 40 seconds. Compared to the Kubernetes test duration, an extra 40 seconds wouldn't really be noticeable, would it?
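For illustration only, and with the caveat above that these snapshot URLs are not a supported interface, a job could fetch a tip snapshot roughly like this (resolving the tip SHA via the GitHub API is my assumption, and the tarball layout may need adjusting):

# Sketch only: not a supported/stable interface (see above).
sha=$(curl -sSL https://api.github.com/repos/golang/go/commits/master | grep -m1 '"sha"' | cut -d'"' -f4)
mkdir -p /usr/local/go-tip
# A snapshot may not exist for every commit, and the unpacked layout is assumed.
curl -sSL "https://storage.googleapis.com/go-build-snap/go/linux-amd64/${sha}.tar.gz" \
  | tar -xz -C /usr/local/go-tip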
go1.9beta1 is out, let's start testing...
Anyone interested in creating a job for this?
@ixdy any bazel magic we could use here?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
@luxas rules_go has some logic for selecting the toolchain and registering it so we could make this job start by patching WORKSPACE to use a toolchain from head.
https://github.com/bazelbuild/rules_go/blob/master/go/toolchains.rst
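A rough sketch of that, assuming the WORKSPACE pins go_register_toolchains(go_version = "...") and that rules_go's host toolchain mode is available and picks up the go binary on PATH (both assumptions here):

# Sketch: build Go tip locally, then tell rules_go to use the host toolchain
# instead of its pinned SDK. The sed pattern assumes the WORKSPACE contains a
# go_register_toolchains(go_version = "...") call.
git clone --depth=1 https://go.googlesource.com/go /usr/local/go-tip
(cd /usr/local/go-tip/src && GOROOT_BOOTSTRAP=/usr/local/go ./make.bash)
export PATH="/usr/local/go-tip/bin:${PATH}"
sed -i 's/go_version = "[^"]*"/go_version = "host"/' WORKSPACE
bazel build //cmd/...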
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/assign
We really ought to be doing this. @shyamjvs @wojtek-t any thoughts on what scalability would like tested? I can setup something to do kubemark against go tip.
We really ought to be doing this. @shyamjvs @wojtek-t any thoughts on what scalability would like tested? I can setup something to do kubemark against go tip.
I would say that if it's supposed to be useful, only a copy of kubemark-gce-scale makes sense.
Maybe run this once per week or so, to not make it yet another very expensive suite.
IMO it would be better if we could somehow leverage our large-scale (optional) presubmits - e.g. trigger them against a test PR changing the Go version in k8s. Adding new dimensions to scalability CI testing (unless it's really needed) may not be too scalable.
An alternative approach I'd suggest is to set up kubemark CI testing in a project owned by golang (this would be more convenient and scalable too, IMO). Wdyt?
IMO it would be better if we could somehow leverage our large-scale (optional) presubmits - e.g. trigger them against a test PR changing the Go version in k8s. Adding new dimensions to scalability CI testing (unless it's really needed) may not be too scalable.
I don't fully agree. It kind of adds an additional dimension, but that one should be implicitly owned by the golang team.
Presubmits - you can do that when you want to bump the Go version in k8s, but they can't serve the purpose of periodic testing of Go tip.
So that should be a regular continuous testing job (though run only once per week or so).
But I agree with Shyam (I kind of implicitly assumed that) that:
it should be owned by the golang team (alternatively, a failure of that job should open a bug in the golang repo or something like that)
I don't think it should be managed by the golang team; unit testing maybe, but kubemark / end-to-end testing etc. require a lot more resources and complexity to run. We can report issues back to them, though.
I'll hold off on scalability for now while there is still discussion, but sig-testing/sig-release can own something like the conformance suite against golang tip in the meantime to get things started.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.