sync/atomic: go1.8 regression: atomic.AddUint64 is unreliable
Sometimes atomic.AddUint64 has no effect.
Update: @cespare has discovered that the loop containing atomic.AddUint64 has been elided.
Update: @rjeczalik has reported that this behavior also occurs with the atomic.StoreUint64 and atomic.CompareAndSwapUint64 functions.
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

var a uint64 = 0

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	fmt.Println(runtime.NumCPU(), runtime.GOMAXPROCS(0))
	go func() {
		for {
			atomic.AddUint64(&a, uint64(1))
		}
	}()
	for {
		val := atomic.LoadUint64(&a)
		fmt.Println(val)
		time.Sleep(time.Second)
	}
}
For go1.4, go1.5, go1.6, and go1.7:
The atomic.AddUint64(&a, uint64(1)) statement works as expected.
$ go version
go version go1.4-bootstrap-20161024 linux/amd64
go version go1.5.4 linux/amd64
go version go1.6.4 linux/amd64
go version go1.7.5 linux/amd64
$ go build atomic.go && ./atomic
4 4
0
96231831
192599210
289043510
385369439
481772231
578143106
674509741
770966820
867408361
963866833
1060299901
<SNIP>
^C
For go1.8 and go tip:
The atomic.AddUint64(&a, uint64(1)) statement appears to have no effect.
go version go1.8 linux/amd64
go version devel +1e69aef Sat Feb 18 19:01:08 2017 +0000 linux/amd64
go version devel +1e69aef Sat Feb 18 19:01:08 2017 +0000 windows/amd64
$ uname -a
Linux peter 4.8.0-36-generic #36~16.04.1-Ubuntu SMP Sun Feb 5 09:39:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/peter/gopath"
GORACE=""
GOROOT="/home/peter/go"
GOTOOLDIR="/home/peter/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build842347949=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="0"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
$ go build atomic.go && ./atomic
4 4
0
0
0
0
0
0
0
0
0
<SNIP>
^C
Interestingly, we can make go1.8 and go tip work with a simple code modification that should not have any effect:
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

var a uint64 = 0

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	fmt.Println(runtime.NumCPU(), runtime.GOMAXPROCS(0))
	go func() {
		for {
			new := atomic.AddUint64(&a, uint64(1))
			if new == 0 {
				runtime.Gosched() // or print()
			}
		}
	}()
	for {
		val := atomic.LoadUint64(&a)
		fmt.Println(val)
		time.Sleep(time.Second)
	}
}
$ go version
go version go1.8 linux/amd64
$ go build atomic_pass.go && ./atomic_pass
4 4
0
126492073
253613883
378888824
506065798
633247293
760383560
887553077
1014723419
<SNIP>
^C
After atomic.AddUint64(&a, uint64(1)), new should be greater than zero and runtime.Gosched() should not be executed. Substituting print() for runtime.Gosched() also works. If the line runtime.Gosched() // or print() is commented out, the program reverts to failure.
Hmm. Let me tag some runtime and sync experts @aclements @dvyukov @randall77.
Looking at the assembly, the increment (and in fact the whole for loop) has been (over-)optimized away.
(Edit: I missed the jmp initially -- as @dvyukov points out, there is an empty loop.)
I'll bisect.
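For anyone who wants to look at the generated code themselves, one way (a workflow suggestion on my part, not a command taken from the thread) is to have the compiler dump its assembly and search the output for the closure (main.func1):

$ go build -gcflags=-S atomic.go 2>&1 | less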
A git bisect indicates d6098e4277bab633c2df752ed90e1e826918ca67.
cmd/compile: intrinsify sync/atomic for amd64
Uses the same implementation as runtime/internal/atomic.
Reorganize the intrinsic detector to make it more table-driven.
Also works on amd64p32.
/cc @randall77
This bug also applies to the atomic.StoreUint64 and atomic.CompareAndSwapUint64 functions; they also reproduce the buggy behaviour shown in the atomic.go repro. Just a heads up, as it may not be obvious just from reading the description.
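For illustration, here is a hypothetical variant of the repro using atomic.StoreUint64. This exact program is not from the thread; it simply mirrors atomic.go with the add swapped for a store in the writer goroutine:

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var a uint64

func main() {
	go func() {
		// The loop body consists only of the atomic store, so the same
		// go1.8 intrinsic optimization reportedly elides this loop too.
		for i := uint64(1); ; i++ {
			atomic.StoreUint64(&a, i)
		}
	}()
	for {
		fmt.Println(atomic.LoadUint64(&a))
		time.Sleep(time.Second)
	}
}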
The atomic.AddUint64 loop is compiled to an empty loop.
Strictly speaking, this is conforming behavior, as we don't give any guarantees about the scheduler (just like any other language, e.g. C++). This can be explained as "the goroutine is just not scheduled".
However, I think we should fix this and not compile this to an empty loop, akin to the C++ guarantee that "atomic writes should be visible to other threads in a finite amount of time". It is OK to combine up to an infinite number of _non_-atomic writes, because that's invisible without data races, and I am not a fan of making programs with data races any more useful. But side effects of atomic writes are observable without data races, so it's fine to combine any finite number of atomic writes (e.g. atomic.Store(&x, 1); atomic.Store(&x, 2) can be compiled as atomic.Store(&x, 2)), but not an infinite number of atomic writes. Language implementations become useless when they take formal properties to the extreme.
Is the runtime miscompiled like this?
I don't think the runtime contains any loops like that, so yes (same optimizations) and no (no code matching this amusing optimization).
Is this breaking any real applications? I had to write a compiler test program (for loop rescheduling, hence it could not contain any function calls) very carefully to ensure that the loop remained, but that was only a test.
Is this breaking any real applications?
@dr2chase: I'm using a for-loop with atomic read+cas, so I guess in go1.8 it won't work anymore: index.go#L55-L62.
@rjeczalik from @dvyukov's description above, I don't think that type of code would be affected (the loop has an exit condition).
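To make that distinction concrete, here is a minimal sketch (hypothetical names, not code from the linked index.go) of the usual load+CAS retry pattern. Because the loop exits as soon as the CompareAndSwap succeeds, it is not an empty infinite loop and should not be subject to this elision:

package main

import (
	"fmt"
	"sync/atomic"
)

// storeMax atomically raises *addr to v if v is larger. The retry loop
// terminates once the CAS succeeds or there is nothing to do.
func storeMax(addr *uint64, v uint64) {
	for {
		old := atomic.LoadUint64(addr)
		if v <= old {
			return
		}
		if atomic.CompareAndSwapUint64(addr, old, v) {
			return
		}
		// Another goroutine updated *addr in the meantime; retry.
	}
}

func main() {
	var x uint64
	storeMax(&x, 42)
	storeMax(&x, 7)
	fmt.Println(x) // 42
}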
Is this breaking any real applications?
The code in the original post is real enough. This program has a side effect: it prints to stdout. Someone could use this output to produce different results.
It also affects AddUintptr/LoadUintptr, AddInt32/LoadInt32, and AddUint32/LoadUint32.
Isn't 0 a valid value after the loop has executed 2^64 times and the atomic has overflowed?
Isn't 0 a valid value after the loop has executed 2^64 times and the atomic has overflowed?
Yeah...after 584 years (at 1 ns / loop).
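For the record, the rough arithmetic behind that figure (assuming one increment per nanosecond):

package main

import "fmt"

func main() {
	const nsPerYear = 365.25 * 24 * 60 * 60 * 1e9 // ≈ 3.156e16 nanoseconds per year
	fmt.Println(float64(1<<64) / nsPerYear)       // ≈ 584.5 years to wrap a uint64
}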
This is perfectly valid given Go's memory model. Are there any plans to update the memory model to include happens-before semantics on the atomic operations?
@nikdeapen there is always #5045: "define how sync/atomic interacts with memory model".
However, I'm 99% certain that any memory model updates for #5045 won't add more happens-before edges to this program.
The issue here is more one of visibility -- as @dvyukov said, the expectation is something like "atomic writes should be visible to other threads in a finite amount of time". Perhaps something similar to that would be codified in a future MM change regarding sync/atomic.
@josharian where are you getting the 1ns from? Is there a minimum amount of time an operation needs to take?
@cespare thanks for the reply.
Java has volatile (and atomic) variables which provide happens-before semantics that allow them to be used as flags and other types of synchronizers. In Go you would also have to provide some type of synchronization along with the atomic variable to ensure memory visibility. But if you are already using another type of synchronization, it seems unlikely that you would even need the atomic update. Maybe I'm missing something, but I just don't see too many realistic use cases for atomic variables if they don't have happens-before semantics.
The "atomic writes should be visible to other threads in a finite amount of time" statement seems too "maybe-maybe-not" for a programming language. It seems like this would lead to bugs that are nearly impossible to test for.
I'd love to see happens-before on atomics, but mutexes and channels work great for now.
Happens-before is unrelated to this case. Even if we document happens-before relations for atomic variables (note that currently they are still implied), there is nothing that will force the loop to see the updates, as there are no happens-before relations here yet. Happens-before is only about visibility of secondary/dependent data, not about primary visibility.
To guarantee that the loop prints increasing values over time, we need two guarantees:
@dr2chase I can imagine some real uses of this (though probably not very representative for Go). For example, during very fine-grained parallelization you may have a dedicated goroutine running on the HT sibling that spin-waits for input using atomic.Load, does some (inline) computation, stores the result back using atomic.Store, and then repeats.
However, now I realize that any such program will necessarily hang dead during GC (as the loop is not preemptible even with 1.7). And since we have forced GCs every few minutes, not allocating will not help. So I bet the original program hangs dead with 1.7 after 5 minutes, which can't be considered "working".
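A minimal sketch of the kind of spin-waiting worker described above (hypothetical names, my own construction; as the same comment notes, such a call-free loop cannot be preempted and can stall a stop-the-world GC, so it is purely illustrative):

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var (
	request uint64 // written by the producer, polled by the spinning worker
	result  uint64 // written by the worker, polled by the producer
)

// worker spin-waits for a new request via atomic.Load, does some inline
// computation, and publishes the answer via atomic.Store. The loop contains
// no function calls, so the runtime cannot preempt it.
func worker() {
	var last uint64
	for {
		r := atomic.LoadUint64(&request)
		if r == last {
			continue // nothing new yet; keep spinning
		}
		last = r
		atomic.StoreUint64(&result, r*r)
	}
}

func main() {
	go worker() // assumes GOMAXPROCS > 1 so main and worker run in parallel
	atomic.StoreUint64(&request, 3)
	for atomic.LoadUint64(&result) == 0 {
		time.Sleep(time.Millisecond) // poll until the worker has answered
	}
	fmt.Println(atomic.LoadUint64(&result)) // 9
}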
So I bet the original program hangs dead with 1.7 after 5 minutes
Can't verify that – running it on macOS for an hour now, it's working, numbers are increasing.
"Atomic writes are visible to other goroutine in a finite amount of time."
Memory model rules are usually framed as "if a then b" as opposed to "time
and single events happen". I did a search for the above rule and couldn't
immediately find it in any C/C++ standards doc. A reference would be of
interest. I am also skeptical that a HW architectural spec would discuss
time and contain the guarantees we would need.
I think the "hang in 5 minutes" estimate was based on a conservative guess about GC frequency in usual programs (as opposed to this tiny test). Throw a little gratuitous memory allocation in there, it will hang soon enough. The test program I carefully wrote to avoid this was for the rescheduling-check-on-loop-backedges experiment; default compilation with no rescheduling check, call-free infinite loops cause GC to block forever.
CL https://golang.org/cl/37333 mentions this issue.
@RLH The phrase to search for in the C++ standard is "forward progress."
As far as I can tell the C++ standard does not require such a program to complete, but it does "encourage" it.
If a thread offers _concurrent forward progress guarantee_, it will _make progress_ (as defined above) in finite amount of time, for as long as it has not terminated, regardless of whether other threads (if any) are making progress.
The standard encourages, but doesn't require that the main thread and the threads started by std::thread offer concurrent forward progress guarantee.
By the way, try that program on a uniprocessor and let me know what it prints.
Yes, thanks, it looks like the C++ documentation would consider a comparable C++ program to be a valid program and the atomic operation is eventually observable.
-- break glass here --
From http://en.cppreference.com/w/cpp/language/memory_model
In a valid C++ program, every thread eventually does one of the following: terminates, makes a call to an I/O library function, performs an access through a volatile glvalue, or performs an atomic operation or a synchronization operation.
No thread of execution can execute forever without performing any of these observable behaviors.
Note that it means that a program with endless recursion or endless loop (whether implemented as a for-statement http://en.cppreference.com/w/cpp/language/for or by looping goto http://en.cppreference.com/w/cpp/language/goto or otherwise) has undefined behavior http://en.cppreference.com/w/cpp/language/ub. This allows the compilers to remove all loops that have no observable behavior, without having to prove that they would eventually terminate.
A thread is said to make progress if it performs one of the execution steps above (I/O, volatile, atomic, or synchronization), blocks in a standard library function, or calls an atomic lock-free function that does not complete because of a non-blocked concurrent thread.
@dr2chase 5 minutes is actually 2 minutes, and it is based on:
// proc.go
var forcegcperiod int64 = 2 * 60 * 1e9
@RLH
Memory model rules are usually framed as "if a then b" as opposed to "time and single events happen". I did a search for the above rule and couldn't immediately find it in any C/C++ standards doc. A reference would be of interest.
C++ standard, 1.10/25:
An implementation should ensure that the last value (in modification order) assigned by an atomic or synchronization operation will become visible to all other threads in a finite period of time.
Yes, it's atypical, but it's still better to at least document the intention rather than say nothing. This is essentially the limit of what you can say in a formal language spec.
Re-opening for cherry-pick to 1.8.1.
CL https://golang.org/cl/39595 mentions this issue.
Cherry-picked to release.