Go: runtime: "unexpected result from waitpid" in TestGdbPython

Created on 12 May 2020 · 28 comments · Source: golang/go

2020-05-12T15:01:56-a0698a6/linux-amd64-longtest

--- FAIL: TestGdbPanic (0.85s)
    runtime-gdb_test.go:71: gdb version 7.12
    runtime-gdb_test.go:616: could not find '(#.* .* in )?main\.crash' in backtrace
    runtime-gdb_test.go:617: gdb output:
        Loading Go Runtime support.
--- FAIL: TestGdbPython (0.98s)
    runtime-gdb_test.go:71: gdb version 7.12
    runtime-gdb_test.go:249: gdb output: Loading Go Runtime support.
        Loaded  Script                                                                 
        Yes     /workdir/go/src/runtime/runtime-gdb.py                                 
        Breakpoint 1 at 0x498865: file /workdir/tmp/go-build240230015/main.go, line 16.
        warning: linux_test_for_tracefork: unexpected result from waitpid (52539, status 0x1c7f)
        hi

    runtime-gdb_test.go:278: info goroutines failed: 
FAIL
FAIL    runtime 157.848s

This is the only occurrence I've seen so far. CC @ianlancetaylor in case it's somehow related to CL 232862, which went in one CL before this failure.

NeedsFix release-blocker

All 28 comments

I also got the same problem here:

https://storage.googleapis.com/go-build-log/3d4facca/linux-amd64-staticlockranking_9d45d3a8.log

It may be more likely when static lock ranking is enabled (because it is a timing issue?). I am also getting it on some local staticlockranking runs.

That one on linux-s390x-ibm looks related to #39025.

There is also an "info goroutines failed:" failure mode in TestGdbPythonCgo that is likely related:
2020-05-12T19:15:34-cb11c98/linux-amd64-sid
2020-05-12T19:15:34-cb11c98/linux-mips64le-mengzhuo

--- FAIL: TestGdbPythonCgo (1.58s)
    runtime-gdb_test.go:71: gdb version 8.3
    runtime-gdb_test.go:249: gdb output: Loading Go Runtime support.
        Loading Go Runtime support.
        Breakpoint 1 at 0x49a365: file /workdir/tmp/go-build125523894/main.go, line 19.

    runtime-gdb_test.go:278: info goroutines failed: 
FAIL
FAIL    runtime 32.606s

Attempting to bisect now, but I'm not hopeful because of the existing flakiness of the test (#24616).

I'm unable to reproduce the failures at all on my local workstation using GDB 8.3.1 and go test -count=1024 '-run=TestGdb.*' -timeout=2h runtime.

~/go-review$ uname -srv
Linux 5.2.17-1rodete3-amd64 #1 SMP Debian 5.2.17-1rodete3 (2019-10-21 > 2018)

~/go-review$ gdb --version
GNU gdb (Debian 8.3.1-1) 8.3.1

~/go-review$ python --version
Python 2.7.18rc1

I'm not sure how to proceed from here.

/cc @ianlancetaylor

I've gotten exactly one local repro so far. I'm not sure what conditions are required to trigger it — system load seems to be a factor.

However, the only repro I have gotten was without a test filter, which is even more difficult to use for bisection due to the presence of other flaky runtime tests (such as #37201, which occurred on the next attempt).

~/go-review$ go test -short -failfast -count=256 -timeout=2h runtime
--- FAIL: TestGdbConst (0.55s)
    runtime-gdb_test.go:71: gdb version 8.3
    runtime-gdb_test.go:549: output "Loading Go Runtime support.\nBreakpoint 1 at 0x4667e0: file /tmp/go-build364032258/main.go, line 8.\n"
    runtime-gdb_test.go:552: output mismatch
FAIL
FAIL    runtime 10.528s
FAIL

If this is due to EINTR, might pummeling the process with signals help?
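For concreteness, a minimal sketch of what that kind of signal pummeling against a child process might look like — an assumption about how it could be wired up, not code from the tests. It is Unix-only, and whether it actually produces EINTR depends on the target installing a handler for the signal without SA_RESTART (GDB's TUI SIGWINCH handler is one such case); ignored signals do not interrupt syscalls.

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "5")
        if err := cmd.Start(); err != nil {
            fmt.Println("start:", err)
            return
        }

        // Repeatedly signal the child while it runs.
        done := make(chan struct{})
        go func() {
            t := time.NewTicker(100 * time.Microsecond)
            defer t.Stop()
            for {
                select {
                case <-done:
                    return
                case <-t.C:
                    syscall.Kill(cmd.Process.Pid, syscall.SIGWINCH)
                }
            }
        }()

        err := cmd.Wait()
        close(done)
        fmt.Println("wait:", err)
    }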

An interesting idea, but it doesn't seem to help.

@bcmills Just a friendly ping to check on the status of this as we work through our beta blocking issues.

I don't know how to fix this and the tests are so nondeterministic that I can't figure out how to even bisect it.

I plan to file a proposal (later today?) to relocate these tests so that they are not run as part of go test runtime and all.bash, unless someone is going to step in to fix these otherwise-apparently-unmaintained tests.

CL 232862 does seem like a likely candidate based on the CLs that landed around the time this issue started. Perhaps we should temporarily revert it and see if the problem disappears given that bisecting appears fruitless?

Change https://golang.org/cl/235282 mentions this issue: runtime: check gdb exit status and log output

After ~10 runs of go test runtime, I managed to repro this with golang.org/cl/235282 on an s390 gomote:

$ gomote run user-mpratt-linux-s390x-ibm-0 go/bin/go test -count=1 runtime                                                                   
--- FAIL: TestGdbBacktrace (2.29s)
    runtime-gdb_test.go:71: gdb version 8.2
    runtime-gdb_test.go:395: gdb output:
        Loading Go Runtime support.
        Breakpoint 1 at 0x6e404: file <autogenerated>, line 1.
    runtime-gdb_test.go:397: gdb exited with error: signal: segmentation fault (core dumped)                                                 
FAIL       
FAIL    runtime 226.306s
FAIL       
Error running run: exit status 1  

So GDB is segfaulting... I'll see if I can manage to find the core file.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000003ffb9c009f8 in fileno_unlocked () from /lib64/libc.so.6

Thread 1 (Thread 0x3ffbade6bb0 (LWP 1048)):
#0  0x000003ffb9c009f8 in fileno_unlocked () from /lib64/libc.so.6
#1  0x000003ffbacb71be in rl_resize_terminal () from /lib64/libreadline.so.7
#2  0x000002aa08763246 in tui_async_resize_screen(void*) ()
#3  0x000002aa088dca18 in invoke_async_signal_handlers() ()
#4  0x000002aa088dd57a in gdb_do_one_event() ()
#5  0x000002aa08a23e04 in wait_sync_command_done() ()
#6  0x000002aa08a23ffc in execute_command(char const*, int) ()
#7  0x000002aa08947f26 in catch_command_errors(void (*)(char const*, int), char const*, int) ()
#8  0x000002aa0894935e in gdb_main(captured_main_args*) ()
#9  0x000002aa08686f06 in main ()

It seems like this must be some kind of GDB bug, though perhaps one we are tickling somehow.

Looking at the GDB source code, "linux_test_for_tracefork: unexpected result from waitpid" is GDB-internal. It is part of GDB checking ptrace features, and the child process involved is also part of GDB itself, not the user program. It doesn't look like anything related to Go.

Just to be sure, has the GDB binary on the builder changed recently, e.g. been upgraded from one version to another?

GDB 7.12 was released in 2016. Should we try a newer version of GDB on the builders?

FWIW, the crashes on the s390 builder are on GDB 8.2, which is a bit newer (2018).

Digging into the https://github.com/golang/go/issues/39021#issuecomment-634303519 crash a bit more:

  • rl_resize_terminal will crash if rl_initialize is never called (i.e., initialize readline).
  • gdb -batch -ex 'p 1' never calls rl_initialize (at least with HEAD gdb).
  • It _does_ call tui_interp::init, which installs a SIGWINCH signal handler for resizing.

Thus it seems it should be possible to trigger this crash by sending a SIGWINCH to a batch-mode GDB, though I haven't managed to do so.

@chlunde noted the same SIGWINCH bug in an external conversation. It's not clear to me whether that bug is related to this test failure.

I've dumped all of the core files on the s390 builder, and they all have the same stack trace as https://github.com/golang/go/issues/39021#issuecomment-634303519

$ gomote run user-mpratt-linux-s390x-ibm-0 /usr/bin/coredumpctl list gdb | tr -s ' ' | cut -f 5 -d ' ' | xargs -n 1 gomote run user-mpratt-linux-s390x-ibm-0 /usr/bin/coredumpctl info | c++filt > /tmp/stacks.txt

Though it remains to be seen if _all_ of the GDB failures are SEGVs with core files. Now that https://golang.org/cl/235282 is in we should be getting more information on new failures.

Two new segfaults:

2020-05-27T19:54:12-0d20a49/linux-amd64-staticlockranking
2020-05-27T19:54:12-0d20a49/linux-mips64le-mengzhuo

This is trivially reproducible with my system GDB with:

package main

import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
)

func main() {
        cmd := exec.Command("gdb", "-nx", "-batch", "-ex", "run", "--args", "sleep", "60")
        go func() {
                // XXX: This isn't safe! cmd.Process is read below without
                // synchronizing with CombinedOutput's call to Start.
                time.Sleep(1 * time.Second)
                fmt.Println("Sending SIGWINCH...")
                syscall.Kill(cmd.Process.Pid, syscall.SIGWINCH)
        }()
        got, err := cmd.CombinedOutput()
        fmt.Printf("output: %q, err: %v\n", got, err)
}
$ ./gdb
Sending SIGWINCH...
output: "", err: signal: segmentation fault

I'll see if I can report this upstream.

My guess is that something has changed recently that is causing SIGWINCH to get sent to processes on the builders. I'm not sure what that would be though; AFAIK it is usually just desktop environments that send it.

EDIT: CombinedOutput sets stdin to /dev/null and stdout to a pipe, so the bash equivalent of this is: gdb -nx -batch -ex 'run' --args sleep 60 </dev/null 2>&1 | cat. This also dies when signalled. Though at HEAD it dies with SIGABRT because they've added a SIGSEGV handler that calls abort.

Thus far, the only true workaround I've found for this issue is to set GDB's stdin to /dev/tty. However, that's pretty ugly w.r.t. possible interactions with actual terminals. Not to mention it is probably annoying to write such that it works on all platforms (no idea if Darwin, Windows, etc have /dev/tty).
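For illustration, a minimal sketch of that workaround, assuming a Unix host where /dev/tty can actually be opened (this is not the test code itself):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Open the controlling terminal; this fails when there is none
        // (e.g. on a headless builder), which is part of why this
        // workaround is unattractive.
        tty, err := os.Open("/dev/tty")
        if err != nil {
            fmt.Println("no controlling terminal:", err)
            return
        }
        defer tty.Close()

        cmd := exec.Command("gdb", "-nx", "-batch", "-ex", "run", "--args", "sleep", "60")
        cmd.Stdin = tty // keep GDB's readline attached to a real terminal
        out, err := cmd.CombinedOutput()
        fmt.Printf("output: %q, err: %v\n", out, err)
    }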

On the other hand, why are we getting SIGWINCH in the first place? This test sends SIGWINCH to the entire process group, so if it runs in parallel to the GDB tests and ends up in the same process group (not sure about that?), then it could be signaling GDB. It is a pretty old test though, and this seems to be a new issue.

Aha! I should read kill(2) more closely: "If pid equals 0, then sig is sent to every process in the process group of the calling process."

The original blamed http://golang.org/cl/232862 has a new test with:

    // Send ourselves SIGWINCH regularly.
    go func() {
        for range time.Tick(100 * time.Microsecond) {
            syscall.Kill(0, syscall.SIGWINCH)
        }
    }()

So that is presumably the source of the signals. I'll send a workaround.

Argh, sorry about that, don't know what I was thinking.

Change https://golang.org/cl/235557 mentions this issue: runtime: start GDB in a new process group
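Not the CL itself, but a minimal sketch of the kind of change its title describes, assuming the test drives GDB through os/exec (SysProcAttr.Setpgid is Unix-only):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("gdb", "-nx", "-batch", "-ex", "run", "--args", "sleep", "60")
        // Start GDB in its own process group so that syscall.Kill(0, SIGWINCH)
        // issued elsewhere in the test binary's process group no longer reaches it.
        cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
        out, err := cmd.CombinedOutput()
        fmt.Printf("output: %q, err: %v\n", out, err)
    }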

Could we run tests that send signals to the process group not in parallel with other tests?
