https://go-review.googlesource.com/c/go/+/162698 broke half the x/foo subrepo builders.
Please fix or revert, whichever is fastest.
Failure is like:
https://build.golang.org/log/4b3817714006d5a89b36dc41e679194c48e109a8
linux-amd64 at 62bfa69e6e08fd7406dfa20f93303769456be42c building arch at 36aee92af9e818c36d168359cb0367b0558c7fe1
:: Running /workdir/go/bin/go with args ["/workdir/go/bin/go" "test" "-short" "golang.org/x/arch/..."] and env ["PATH=/workdir/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" "HOSTNAME=buildlet-linux-jessie-rn4975f0d" "DEBIAN_FRONTEND=noninteractive" "HOME=/root" "USER=root" "GO_STAGE0_NET_DELAY=6s" "GO_STAGE0_DL_DELAY=100ms" "WORKDIR=/workdir" "GOROOT_BOOTSTRAP=/workdir/go1.4" "GO_BUILDER_NAME=linux-amd64" "GOROOT_BOOTSTRAP=/go1.4" "GO_DISABLE_OUTBOUND_NETWORK=1" "GOROOT=/workdir/go" "GOPATH=/workdir/gopath" "GOPROXY=http://10.240.0.48:30156" "TMPDIR=/workdir/tmp" "GOCACHE=/workdir/gocache"] in dir /workdir/gopath/src/golang.org/x/arch
go: warning: "golang.org/x/arch/..." matched no packages
no packages to test
tests failed: exit status 1
/cc @bcmills @jayconrod
@bradfitz Would the fix be to explicitly set GO111MODULE=auto or off in the builders, in all the instances where GO111MODULE=on is not already being set?
Edit: This is affecting subrepos on tip, not 1.12 and 1.11 release branches.
I don't want to do a workaround. If the default is going to be on, it should work everywhere; otherwise it's only misleadingly on.
I should have some time to look into this tonight.
We probably just need to add (possibly-empty) go.mod files to some repo roots.
As I understand it, it's failing only in the subrepos where we haven't added a go.mod file at the root yet. That was okay when GO111MODULE defaulted to auto, but it doesn't work now that CL 162698 has switched the default to on.
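Concretely, the eventual fix in each such subrepo root looks something like this (a sketch, using golang.org/x/arch from the log above as the example; the exact module path varies per repo, and the command output is approximate):
$ cd $GOPATH/src/golang.org/x/arch
$ go mod init golang.org/x/arch
go: creating new go.mod: module golang.org/x/arch
$ cat go.mod
module golang.org/x/arch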
I don't want to do a workaround ... it should work everywhere
It should work when the sub-repos have go.mod files.
I think we should decide on a timeline for how quickly we aim to resolve this issue. The rest of the sub-repos will gain go.mod files naturally over the next few days or weeks.
Here's one possible path forward:
1. Set GO111MODULE to auto in all situations where it's not already on; this resolves the immediate problem of go.mod-less subrepos failing on Go tip.
2. Add go.mod files to the remaining subrepos; this work is already underway (e.g., see https://go-review.googlesource.com/q/%2522add+a+go.mod+file%2522).
3. Once the remaining subrepos have go.mod files added, step 1 can be reverted.
How does that sound?
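To illustrate the behavioral difference step 1 relies on (a sketch; the passing test output is illustrative):
# In a go.mod-less subrepo checkout under GOPATH:
$ cd $GOPATH/src/golang.org/x/arch
$ GO111MODULE=on go test golang.org/x/arch/...    # module mode with no go.mod: the pattern matches nothing
go: warning: "golang.org/x/arch/..." matched no packages
no packages to test
$ GO111MODULE=auto go test golang.org/x/arch/...  # auto falls back to GOPATH mode here, so the tests run
ok      golang.org/x/arch/arm/armasm    0.011s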
+1 to @dmitshur's path.
Let's get the subrepo tests green first, either by setting GO111MODULE=auto or by temporarily rolling back CL 162698.
Adding go.mod files to subrepos will have side effects outside of our CI, so let's do that cautiously over the coming weeks.
I think we should decide on a timeline for how quickly we aim to resolve this issue.
The timeline is tonight or tomorrow. A broken build is not a healthy place for a project.
If it's as simple as adding some go.mod files, I'll try sending out some CLs now.
Change https://golang.org/cl/166888 mentions this issue: dashboard: set GO111MODULE=auto explicitly for most subrepos
I've sent CL 166888 for step 1.
The timeline is tonight or tomorrow. A broken build is not a healthy place for a project.
I'm in agreement with that timeline. I don't think it should be longer than that, and I don't think it's critical to resolve this immediately (i.e., within 30 mins). We're not in code freeze now.
If it's as simple as adding some go.mod files, I'll try sending out some CLs now.
There are many CLs out already that are going through code review. See https://go-review.googlesource.com/q/%2522add+a+go.mod+file%2522. As @jayconrod said, adding go.mod files can have significant implications for users (not just Go development at tip), so it shouldn't be rushed. E.g., see https://github.com/golang/lint/issues/436 and https://github.com/golang/lint/commit/5614ed5bae6fb75893070bdc0996a68765fdd275.
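The lint case is a concrete example of that: once the repo's go.mod declared module golang.org/x/lint, fetching the tool via the old github.com/golang/lint import path started failing. Roughly (the error text below is approximate, and the version string is elided):
$ go get github.com/golang/lint/golint
go: github.com/golang/lint@v0.0.0-…: parsing go.mod: unexpected module path "golang.org/x/lint"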
I deployed @dmitshur's WIP CL 166888 (at git rev https://go.googlesource.com/build/+/3db9ea60cce3c8319cb34f5d3fe57631c00e7ec8) to prod.
@dmitshur:
I'm in agreement with that timeline. I don't think it should be longer than that, and I don't think it's critical to resolve this immediately (i.e., within 30 mins). We're not in code freeze now.
The counter-argument is this: what is the harm in rolling back the CL that broke half the subrepos now, fixing things at leisure, and then restoring things? I see no harm at all, and it removes all the pressure to do anything in a rush. Especially since we are _not_ in the freeze, people are actively trying to do development, and some of them are now broken. Note that more than just the subrepos are broken. @mdempsky pointed out a non-subrepo failure on golang-dev, and I doubt he was just looking for failures—probably he actually ran into this:
Repro:
$ git clone https://go.googlesource.com/go /tmp/go1
$ git clone /tmp/go1 /tmp/go2
$ cd /tmp/go1/src && ./make.bash
$ cd /tmp/go2/src && GOROOT_BOOTSTRAP=/tmp/go1 ./make.bash
Last command fails early on with:
Building Go cmd/dist using /tmp/go1.
Building Go toolchain1 using /tmp/go1.
go: cannot find main module, but found .git/config in /tmp/go2
to create a module there, run:
cd ../.. && go mod init
go tool dist: FAILED: /tmp/go1/bin/go install -gcflags=-l -tags=math_big_pure_go compiler_bootstrap bootstrap/cmd/...: exit status 1
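Until the default is fixed or rolled back, a possible workaround for the repro above (an untested sketch, assuming make.bash propagates the environment to the bootstrap toolchain) is to force GOPATH mode for the bootstrap step:
$ cd /tmp/go2/src && GO111MODULE=off GOROOT_BOOTSTRAP=/tmp/go1 ./make.bash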
The breakage has been there for almost two hours at this point. If Matthew's example is not fixed in another 45 minutes (9pm ET), please roll back the offending CL. Don't leave it broken overnight. Thanks.
/cc @bradfitz
The counter-argument is this: what is the harm in rolling back the CL that broke half the subrepos now, fixing things at leisure, and then restoring things? I see no harm at all, and it removes all the pressure to do anything in a rush.
To clarify, my suggestion was to propose an additional path for us to consider, not to claim that it's better than rolling back the offending CL.
I'm not very familiar with the details of the CL, but from what I've seen it's a large CL and it may have been part of a larger stack of CLs. I know @bcmills and @jayconrod are more familiar with the details, so I trust them to make the right call as to which path to resolution will cause less disruption.
I just finished getting my daughter down to bed, so I'm investigating @mdempsky's report now. If the solution is not obvious and simple, I'll commit a rollback within an hour.
The obvious-looking fix didn't work. Rollback CL 166985 is hitting the TryBots now.
(I'll verify that the one rollback is enough to fix the reported failures as soon as I can get git codereview to cooperate enough to download the patch for testing.)
Looks like that clears it up.
~$ unset GO111MODULE
~$ cd $(mktemp -d)
$ git clone ~/go .
Cloning into '.'...
done.
Note: checking out '977ab8d341f463cc104b2526a596d160eb90322c'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
$ cd src
src$ GOROOT_BOOTSTRAP=~/go ./make.bash
Building Go cmd/dist using /usr/local/google/home/bcmills/go.
Building Go toolchain1 using /usr/local/google/home/bcmills/go.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for linux/amd64.
---
Installed Go for linux/amd64 in /tmp/tmp.3BM8HNb2Jc
Installed commands in /tmp/tmp.3BM8HNb2Jc/bin
src$
Sorry for the breakage, folks. I'll be sure to double check the clone-and-bootstrap operation and subrepo builds for the next attempt.
Verified that a clean clone self-bootstraps successfully, and the subrepo chart appears to be filling in ok.
~$ cd $(mktemp -d)
$ git clone https://go.googlesource.com/go ./go1
Cloning into './go1'...
remote: Sending approximately 226.51 MiB ...
remote: Counting objects: 40, done
remote: Finding sources: 100% (18/18)
remote: Total 373631 (delta 286638), reused 373628 (delta 286638)
Receiving objects: 100% (373631/373631), 225.84 MiB | 36.04 MiB/s, done.
Resolving deltas: 100% (286638/286638), done.
$ git clone ./go1 ./go2
Cloning into './go2'...
done.
$ cd ./go1/src
go1/src$ ./make.bash
Building Go cmd/dist using /usr/local/google/home/bcmills/go1.4.
Building Go toolchain1 using /usr/local/google/home/bcmills/go1.4.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for linux/amd64.
---
Installed Go for linux/amd64 in /tmp/tmp.NMau1mZTY7/go1
Installed commands in /tmp/tmp.NMau1mZTY7/go1/bin
go1/src$ cd ..
go1$ export GOROOT_BOOTSTRAP=$(pwd)
go1$ cd ../go2/src
go2/src$ ./make.bash
Building Go cmd/dist using /tmp/tmp.NMau1mZTY7/go1.
Building Go toolchain1 using /tmp/tmp.NMau1mZTY7/go1.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for linux/amd64.
---
Installed Go for linux/amd64 in /tmp/tmp.NMau1mZTY7/go2
Installed commands in /tmp/tmp.NMau1mZTY7/go2/bin
go2/src$
I filed #30758 (for a new test) and #30759 (to revert the cmd/coordinator workarounds before we ship Go 1.13).