Arithmetic on int32 or uint32 does not truncate mod 2^32.
This leads to errors in any code that relies on the lowest 32 bits of integer arithmetic remaining correct over repeated multiplications, for example a simple RNG:
```fsharp
let rand (s: uint32) = (s * 101001u + 1234759u)
```
This can be fixed by appending `>>> 0`, which is correctly turned into `>>> 0` (unsigned) or `>> 0` (signed) in the generated JS and truncates the result back to 32 bits.
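For example, the RNG above can be patched like this (a minimal sketch; the `>>> 0` is a no-op on .NET and only matters in the JS output, and it stays exact here because the multiplier is small enough that the product fits in 53 bits):

```fsharp
// `>>> 0` compiles to a JS `>>> 0`, which truncates the double back to the uint32 range
let rand (s: uint32) = (s * 101001u + 1234759u) >>> 0
```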
Worse, the lower bits can be wrong even for a single 32-bit multiplication:

```fsharp
let z = (0x7FFFFFFFu * 0x7FFFFFFFu) &&& 0x1u
```

`z` should be `1u` (odd times odd is odd), but in both REPL 2 and the CLI `z` is `0u`: the intermediate product is around 4.6e18, far above 2^53, so a JS double cannot represent its low bits.
Theoretically, truncation should be forced after every uint32 or int32 arithmetic operation (`+`, `-`, `*`, `**`). It might be acceptable to do this only once per loop, or on function return; while that would miss pathological examples like the one above, it would catch most of the real use cases.
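For instance, truncating only on function return would still miss a case like the one above, because the damage happens inside the expression (a sketch):

```fsharp
// Truncation at the end cannot restore bits that the intermediate
// double-precision product has already lost in JS
let lowBit = ((0x7FFFFFFFu * 0x7FFFFFFFu) >>> 0) &&& 0x1u   // 1u on .NET; 0u in the JS output, as reported above
```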
`*` cannot be done in one step using JS floats if it is to deliver accurate lower bits. The correct result can be computed with something like (wrapped in a helper function here; the name is just for illustration):

```fsharp
let mulLow32 (a: uint32) (b: uint32) =
    let (al, ah) = (a &&& 0xFFFFu), (a >>> 16)
    let (bl, bh) = (b &&& 0xFFFFu), (b >>> 16)
    // ah*bh only affects bits >= 32 and can be dropped;
    // the trailing `>>> 0` is what truncates in the generated JS
    (al*bl + ((al*bh + bl*ah) <<< 16)) >>> 0
```
The problem is the same for signed int32.
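Presumably the signed case can reuse the same decomposition by reinterpreting the bits, since the low 32 bits of a two's-complement product match the unsigned ones (a sketch; the helper name is just for illustration):

```fsharp
let mulLow32Signed (a: int) (b: int) : int =
    let a', b' = uint32 a, uint32 b                    // unchecked (wrapping) conversions
    let (al, ah) = (a' &&& 0xFFFFu), (a' >>> 16)
    let (bl, bh) = (b' &&& 0xFFFFu), (b' >>> 16)
    int ((al*bl + ((al*bh + bl*ah) <<< 16)) >>> 0)     // reinterpret the low 32 bits as signed
```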
I guess those using Fable should accept that int32 is not bit-accurate, although int64 is? Clear documentation is needed for this if so, since it is not obvious from the embedding into floats.
I think it would still be worthwhile truncating, so that only a multiply producing large results leads to loss of resolution (additions and subtractions stay well within the 53-bit mantissa, so truncating them recovers the exact result).
`dotnet fable --version`: 2.0.0-beta-001

In fact, multiply could probably be better implemented as (again wrapped in a helper for illustration):

```fsharp
let mul32 (a: uint32) (b: uint32) =
    // if either operand fits in 20 bits, the product stays below 2^53,
    // so the native multiply is exact and only needs truncating
    let limitNum = 1u <<< 20
    if a < limitNum || b < limitNum then (a * b) >>> 0
    else
        let (al, ah) = (a &&& 0xFFFFu), (a >>> 16)
        let (bl, bh) = (b &&& 0xFFFFu), (b >>> 16)
        (al*bl + ((al*bh + bl*ah) <<< 16)) >>> 0
```
The much slower version only gets invoked when both `a` and `b` are large, which makes the performance cost of accurate arithmetic significantly smaller.
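As a quick sanity check (a sketch only, assuming the `mul32` wrapper above):

```fsharp
// Hypothetical usage of the mul32 sketch above
let fast = mul32 3u 123456789u              // 3u < 2^20, so the fast native path is taken
let slow = mul32 0x7FFFFFFFu 0x7FFFFFFFu    // both operands large, decomposed path
printfn "%u" (slow &&& 0x1u)                // prints 1: the low bit is preserved
```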
This is where things start to get trickier :smile: I'm not opposed to improving overflow arithmetic, but one of the original guidelines of Fable was to output readable JS and add minimal overhead. Long.js was contributed mainly because JS numbers don't support the full int64 range and this was needed to make the REPL work. However, the rest of the numbers (including decimal) are compiled to JS number (as mentioned in the documentation) and native operators are used when possible, as a compromise to produce code as performant as raw JS even if this sacrifices some semantics.
We can try to fix some specific cases, but it's difficult to know where to draw the line. Should we add array boundary checking or type testing in casts too? Another important point to consider is the development resources. Although modified, Long.js code was taken from another project (BigInt was taken from FSharp.Core). Writing all the code necessary to fully comply with F# arithmetic puts pressure on maintenance.
Maybe @ncave has any opinion on this?
Perhaps we can be as clear as possible about where the Fable semantics are not compatible with the F# spec, and how? Otherwise I appreciate the trade-off. It is not a big pain doing the truncation in F# as long as it is known to be necessary.
@alfonsogarciacaro IMO it's easiest to just document the current overflow behavior. OTOH if we can somehow fix the overflow without a big performance impact, perhaps we can try: wrapping each int operator with truncation, or possibly using wasm i32 operators (but I don't see how to avoid the branching, `if (wasm) then wasm.mul()`), so there will be some perf impact; the question is how big. We should be able to see the difference right away on numeric-heavy code like the raytracer sample.
I agree it's a slippery slope, e.g. we don't implement any of the checked arithmetic operators either.
Ok, let's start by improving the documentation and when Fable 2 stable is released we can revisit this again :+1:
I'll close this - the documentation has another placeholder issue!