I found this while using https://github.com/kostya/run_with_fork. In after_fork_child_callbacks Crystal calls Random::DEFAULT.new_seed, but the new seed for every fork comes from Random::Secure.rand, which also returns the same value in each fork. Maybe mix Process.pid into the seed?
require "uuid"
10.times do
fork do
p({ rand, UUID.random.to_s })
exit
end
end
sleep 1.0
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
{0.9253318028114554, "d74c282a-0c1d-402c-9e59-56bccf6d1e87"}
This fixes rand, but not UUID:
diff --git a/src/random/pcg32.cr b/src/random/pcg32.cr
index 2ad06cfb2..0e97fc78a 100644
--- a/src/random/pcg32.cr
+++ b/src/random/pcg32.cr
@@ -51,7 +51,7 @@ class Random::PCG32
   end
 
   def new_seed
-    new_seed(Random::Secure.rand(UInt64::MIN..UInt64::MAX), Random::Secure.rand(UInt64::MIN..UInt64::MAX))
+    new_seed(Random::Secure.rand(UInt64::MIN..UInt64::MAX) + Process.pid, Random::Secure.rand(UInt64::MIN..UInt64::MAX))
   end
 
   def new_seed(initstate : UInt64, initseq = 0_u64)
With this patch applied, the output becomes:
{0.4068051713856044, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.021214472643506514, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.5559970727791962, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.7634387669881384, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.11903504329508376, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.7499252573512303, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.4117374945356687, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.41756437719645084, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.10160543317694361, "653031dd-c2bb-4b18-9282-069d60b762bf"}
{0.22087650254432137, "653031dd-c2bb-4b18-9282-069d60b762bf"}
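To isolate what the patch above doesn't cover: UUID.random presumably draws its bytes straight from Random::Secure, so reseeding PCG32 after fork can't help it. A minimal sketch of that remaining problem, reusing only calls already shown in this thread (top-level fork and Random::Secure.rand):

require "random/secure"

Random::Secure.rand(UInt64::MIN..UInt64::MAX) # parent read: fills the internal buffer before forking

3.times do
  fork do
    # On an affected system every child prints the same value, because the bytes
    # are served from the read buffer inherited from the parent process.
    p Random::Secure.rand(UInt64::MIN..UInt64::MAX)
    exit
  end
end
sleep 1.0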
The bug is that Random::Secure returns the same bytes in each fork. This is supposed to be impossible.
The bug is this assumption:
urandom.sync = true # don't buffer bytes
This is false, since sync = true only turns off output buffering.
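A simplified rendition of the affected pattern (names are illustrative, not the actual stdlib source), assuming a shared /dev/urandom handle opened once and read on every seed request:

module SeedSource
  # sync = true only disables *write* buffering; reads still fill an internal buffer.
  @@urandom : File = File.open("/dev/urandom").tap { |io| io.sync = true }

  def self.next_seed_bytes : Bytes
    buf = Bytes.new(8)
    @@urandom.read_fully(buf) # after fork, this can be served from the buffer copied from the parent
    buf
  end
end

p SeedSource.next_seed_bytes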
IO::Buffered#sync=: Not a "wrong" assumption: the docs state numerous times that #sync= will turn buffering on/off, and never state that it applies to only one direction. For example:
"The IO::Buffered mixin enhances an IO with input/output buffering. The buffering behaviour can be turned on/off with the #sync= method."
The behavior of IO::Buffered#sync= should apply to both reads and writes. I see no reason to only disable buffering for one but not the other. It can be complex to flush previously buffered read content (maybe raise an exception), but if the buffer is empty or nonexistent, it's possible.
I agree. I'll work on that.
This is fixed by https://github.com/crystal-lang/crystal/pull/5849 (right?)
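As a sketch of the fix's direction (assumption on my side: an IO::Buffered read_buffering= setter, as later Crystal versions expose; I haven't checked what #5849 actually changed), the seed source could bypass the read buffer entirely:

urandom = File.open("/dev/urandom")
urandom.read_buffering = false # assumed setter: make every read go straight to the kernel
urandom.sync = true            # write buffering is irrelevant for this fd, but keep it off too

seed = Bytes.new(8)
urandom.read_fully(seed) # now a direct read(2), even right after a fork
p seed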
Why does read buffering cause this behaviour? I can't see the relationship between that and the incorrect behaviour of urandom.
The problem happens when forking the process. If you read from urandom and have a buffer, you fill the buffer, then have the exact same random sequences in each forked process (because they read from the buffer).
It's also a problem to have random data that is yet to be used saved in memory. Never underestimate a motivated attacker :)
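To make that concrete, here is a minimal sketch with an ordinary file (file name and sizes are illustrative): the parent's first read fills the internal buffer, and each forked child then consumes identical bytes from its inherited copy of that buffer instead of issuing a fresh read(2).

File.write("data.bin", "0123456789" * 1000)

io = File.open("data.bin")
io.read_byte # the first read pulls a whole buffer's worth of bytes into memory

2.times do
  fork do
    buf = Bytes.new(4)
    io.read_fully(buf) # served from the in-memory buffer each child inherited
    p String.new(buf)  # both children print "1234"
    exit
  end
end
sleep 1.0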
@ysbaddaden Thanks for the explanation!
You're welcome.
Last bit: reading 8KB when all we need are 8 bytes to seed a PRNG, for example, is a little too much :)
Last bit
I see what you did there :-)
Just hit this bug again, now on Ubuntu 12.04 with Crystal 0.26.0 (no bug on macOS).
@kostya I can't reproduce it with the sample code above on Ubuntu 12.04 in Docker with Crystal 0.26.1.
Can you try 0.26.1? Does the example still repro, or do you need a new example to reproduce it on the buggy system?
cat /etc/issue
Ubuntu 12.04.1 LTS \n \l
cat 1.cr
require "uuid"
10.times do
fork do
p({ rand, UUID.random.to_s })
exit
end
end
sleep 1.0
./crystal-0.26.1-1/bin/crystal --version
Crystal 0.26.1 [391785249] (2018-08-27)
LLVM: 4.0.0
Default target: x86_64-unknown-linux-gnu
./crystal-0.26.1-1/bin/crystal 1.cr
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
{0.5153176023881644, "af939b35-fafb-48f2-afe6-2e94c5444dbb"}
Reproduced on Ubuntu Trusty.
@kostya what's uname -a?
Linux server 3.2.0-64-generic #97-Ubuntu SMP Wed Jun 4 22:04:21 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
ouch, that's an old kernel
Not reproduced on my side either:
{0.06269658600846163, "b170bcc7-b53a-4b76-9137-c49b905095de"}
{0.42220481484025657, "b2404322-dec8-42f1-b90e-d072f01564e4"}
{0.45164303491384145, "f9c3fe2d-aaa7-4e41-a95c-d00cb3458ca3"}
{0.18306748202534376, "3b9ba6c7-4039-4f1f-9109-327e16a9e393"}
{0.09124529032674435, "a313df45-3fa9-46ef-83f0-581e72d79f29"}
{0.398182840385005, "f412d23d-a660-4083-bc4b-3c0950913274"}
{0.5654764246689967, "b2e8aac6-01f3-47ab-b042-073c05d0b2b4"}
{0.22079585891069456, "90fe956b-b790-4b86-8337-3e8c426cd311"}
{0.1791654648436284, "876aa110-bfc9-48ea-a880-c9fdf7ac2b40"}
{0.039046438246567734, "00263787-b3e4-4b74-b206-48386b3bf3b7"}
Crystal 0.26.1 (2018-09-27)
LLVM: 6.0.1
Default target: x86_64-pc-linux-gnu
Linux jrei-pc 4.14.78-1-MANJARO #1 SMP PREEMPT Sun Oct 21 07:57:51 UTC 2018 x86_64 GNU/Linux
That's definitely linked to a /dev/random change in the Linux kernel itself (that's why @RX14 can't reproduce it in Docker). LRNG changes aren't rare between releases: https://github.com/torvalds/linux/commits/master/drivers/char/random.c. I can't say which commit produced this change in behavior.
@kostya does this still reproduce for you? :)
I think this is fixed.