Here is a minimal example (using the foo function as defined in https://docs.julialang.org/en/v1/manual/functions/index.html). The REPL gives a wrong result almost every time I run it (sometimes the first, sometimes the second return value is wrong). I don't think this is an issue with one particular function, as I have observed the same behaviour with another function as well. For functions with a single output/return value it seems to consistently give the correct result.
The correct return should be (4.4, 4.59).
julia> function foo(a, b)
           a + b, a * b
       end
foo (generic function with 1 method)
julia> ( return_1 , return_2 ) = foo( 1.7 , 2.7 )
(4.8, 4.59)
julia> ( return_1 , return_2 ) = foo( 1.7 , 2.7 )
(4.4, 4.59)
julia> ( return_1 , return_2 ) = foo( 1.7 , 2.7 )
(4.4, 4.82)
julia> return_2
4.59
julia> versioninfo()
Julia Version 1.0.1
Commit 0d713926f8 (2018-09-29 19:05 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.0 (ORCJIT, skylake)
Environment:
JULIA_EDITOR = "C:\JuliaPro-1.0.1.1\app-1.29.0\atom.exe" -a
JULIA_NUM_THREADS = 2
JULIA_PKG_SERVER = https://pkg.juliacomputing.com/
JULIA_PKG_TOKEN_PATH = C:\Users\Peter\.julia\token.toml
Are you using the Juno REPL? I vaguely recall that Juno sometimes showed bogus output in the REPL, cc @pfitzseb
Yes, I have JuliaPro with the default Atom/Juno editor.
Please report to https://github.com/JunoLab/Juno.jl (unless you can reproduce this in the normal REPL).
Yeah, I can (kind of) repro this. It's a super weird bug which might even be caused by Base, but not sure.
With https://github.com/JunoLab/Atom.jl/commit/4598ce36805f4842aa4b7fee34ede9095a1cf1fd I can't repro this anymore, so fingers crossed.
There's a race condition in the design of the Grisu output: we re-use the same buffer (DIGITS), even though that buffer might already be in use (missing a lock). We can simulate this failure pretty easily:
julia> p = Pipe()
Pipe(RawFD(0xffffffff) init => RawFD(0xffffffff) init, 0 bytes waiting)
julia> Base.link_pipe!(p, reader_supports_async=true, writer_supports_async=true)
Pipe(RawFD(0x00000013) open => RawFD(0x00000011) open, 0 bytes waiting)
julia> t1 = @async write(p, zeros(UInt8, 2^18))
Task (runnable) @0x00007f54b1e08eb0
julia> t1
Task (runnable) @0x00007f54b1e08eb0
julia> t2 = @async (print(p, 12.345); close(p.in))
Task (runnable) @0x00007f54b2c17820
julia> t2
Task (runnable) @0x00007f54b2c17820
julia> 9.8
9.8
julia> read(p, 2^18);
julia> read(p, String)
"98.345"
Is there some way to work around this on our side? Just wrapping some random code in an @async and hoping the race condition doesn't trigger seems iffy...
We should just make the DIGITS buffer task-local.
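A minimal sketch of what task-local could look like, using Julia's task_local_storage so that each task lazily allocates its own scratch buffer (the key name and buffer size here are illustrative, not Base's actual Grisu internals):

```julia
# Sketch: fetch a per-task scratch buffer instead of sharing one global DIGITS.
# Key name (:DIGITS_BUFFER) and size are illustrative, not Base's real internals.
function task_digits_buffer()
    get!(task_local_storage(), :DIGITS_BUFFER) do
        Vector{UInt8}(undef, 309 + 17)  # enough digits for any Float64
    end
end

buf1 = task_digits_buffer()                 # this task's buffer
buf2 = fetch(@async task_digits_buffer())   # another task's buffer
@assert buf1 === task_digits_buffer()  # same task reuses its own buffer
@assert buf1 !== buf2                  # a different task gets a separate one
```

Since two tasks can never share a buffer this way, the interleaving shown above cannot corrupt digits, at the cost of one small allocation per task that prints a float.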
> Is there some way to work around this on our side?

In places where you control the output, you can add an explicit call to string(x) (or equivalently use a temporary PipeBuffer) and pass that temporary object to the final blocking write(io, s) output call in one step (where s is either a PipeBuffer object or a String).
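For example, a sketch of that workaround (using an IOBuffer to stand in for the real output stream): all formatting happens up front in string, so the shared Grisu buffer is only touched before the single blocking write.

```julia
# Workaround sketch: format first, then emit the finished bytes in one write.
io = IOBuffer()  # stands in for the real output stream

# Instead of print(io, 12.345), whose digit formatting can interleave with
# other tasks' output, render the value into a temporary String first...
s = string(12.345)

# ...then hand the completed bytes to the stream in a single blocking call.
write(io, s)

String(take!(io))  # == "12.345"
```

This doesn't fix the underlying race in Base, but it shrinks the window to the string call itself rather than the whole I/O operation.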
Indeed Juno developers said there was an issue with syncing of the Workspace Pane, see
https://github.com/JunoLab/Juno.jl/issues/185
The underlying issue is still in Base, and the workaround in that commit seems to make the race condition less likely to trigger. So this issue definitely isn't fixed for good on the Juno side.
It will be an issue for multithreaded I/O, however, which is coming soon.
Threads won't make much difference here since the buffers are already thread-local.