The following code gives different results on .NET Core and .NET Framework:
float a = 0.281642526f;
float b = 0.844927669f;
float c = a + a + b;
float d = a + b + a;
Console.WriteLine(c);
Console.WriteLine(d);
.NET Framework:
1.408213
1.408213
.NET Core:
1.4082127
1.4082128
For fun, C++ gives the following:
1.408213, 1.408213
Shouldn't they be the same?
cc @tannergooding
Computer floating-point (including but not limited to float, double, and even decimal) math does not work like normal math. Given that it doesn't have infinite precision, (a + b) + c and a + (b + c) may return different results.
This is because after each "operation" the result must be rounded to fit within the confines of the type.
Additionally, due to the underlying representation, not all values are "exactly" representable. float and double are represented using powers of 2, so they can't represent something like 1.1 exactly, but they can represent 0.5 without any loss. decimal is represented using powers of 10, so it can represent 1.1 exactly but can't accurately represent something like 1 / 3.
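A minimal sketch of both effects, using the values from the question (the results shown are what .NET Core prints):
float a = 0.281642526f;
float b = 0.844927669f;
// Non-associativity: each '+' rounds its result to the nearest float,
// so the grouping determines which intermediate values get rounded.
Console.WriteLine((a + a) + b == (a + b) + a); // False
// Representability: 0.1 and 0.2 have no exact binary representation,
// so the double sum misses 0.3, while decimal stores base-10 digits exactly.
Console.WriteLine(0.1 + 0.2 == 0.3);    // False
Console.WriteLine(0.1m + 0.2m == 0.3m); // True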
In the case of the two inputs, the compiler has to take the input "as given" and round to the nearest representable result. Due to the underlying representation I mentioned above, the input values (for float) are actually:
0.28164253 = 0.2816425263881683349609375; Raw Bits: 0x3E903373
0.84492767 = 0.84492766857147216796875; Raw Bits: 0x3F584D2E
In the case of c, you are doing:
0.2816425263881683349609375 + 0.2816425263881683349609375 = 0.563285052776336669921875
0.56328505 = 0.563285052776336669921875; Raw Bits: 0x3F103373
0.563285052776336669921875 + 0.84492766857147216796875 = 1.408212721347808837890625
1.4082127 = 1.4082126617431640625; Raw Bits: 0x3FB44050
In the case of d, you are doing:
0.2816425263881683349609375 + 0.84492766857147216796875 = 1.1265701949596405029296875
1.1265702 = 1.126570224761962890625; Raw Bits: 0x3F903374
1.126570224761962890625 + 0.2816425263881683349609375 = 1.4082127511501312255859375
1.4082128 = 1.40821278095245361328125; Raw Bits: 0x3FB44051
If you examine the raw bits on .NET Core (x86 or x64), .NET Framework (x64), or C++ (x86 or x64), the computed results are actually identical across all of them: c is 0x3FB44050 and d is 0x3FB44051; only the printing differs. On .NET Framework (x86), d returns the same result as c because the legacy JIT uses the x87 FPU stack and doesn't insert the intermediate rounding step that IEEE 754 compliance requires.
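You can check the bits yourself; here's a minimal sketch using BitConverter.SingleToInt32Bits (available on .NET Core; it doesn't exist on .NET Framework, where you'd have to go through BitConverter.GetBytes instead):
float a = 0.281642526f;
float b = 0.844927669f;
float c = a + a + b;
float d = a + b + a;
// SingleToInt32Bits reinterprets the float's storage as an int,
// exposing the exact IEEE 754 bit pattern.
Console.WriteLine($"c: 0x{BitConverter.SingleToInt32Bits(c):X8}"); // c: 0x3FB44050
Console.WriteLine($"d: 0x{BitConverter.SingleToInt32Bits(d):X8}"); // d: 0x3FB44051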
As for why they print differently when using ToString(), it's because they have different default "precisions". .NET Framework defaults to 7 significant digits. C/C++ default to 6 digits after the decimal point.
In .NET Core 3.0, we changed the algorithm to print the "shortest roundtrippable string" by default. This was done to ensure that float.Parse(x.ToString()) == x is true by default (for a floating-point x where x is not NaN).
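As a sketch of the difference, you can still get the old .NET Framework-style output on .NET Core by asking for 7 significant digits explicitly with the standard "G7" format specifier:
float d = 0.281642526f + 0.844927669f + 0.281642526f;
Console.WriteLine(d);                // 1.4082128 (shortest roundtrippable string)
Console.WriteLine(d.ToString("G7")); // 1.408213  (7 significant digits, the old default)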
This roundtripping wasn't the default behavior previously, and that caused issues. For example, parsing the old default string back gives 1.408213 = 1.40821301937103271484375; Raw Bits: 0x3FB44053, which means the string returned for d was off by 2 ULPs from the value it actually represented (0x3FB44051). This isn't an issue in some cases, but it can be in others, and we ultimately decided to take the break.
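A quick sketch of that off-by-2, parsing both default strings back and comparing the bits (CultureInfo is from System.Globalization and is only there to pin the decimal separator):
float d = 0.281642526f + 0.844927669f + 0.281642526f; // raw bits: 0x3FB44051
float fromOldString = float.Parse("1.408213", CultureInfo.InvariantCulture);  // old 7-digit default
float fromNewString = float.Parse("1.4082128", CultureInfo.InvariantCulture); // new shortest-roundtrip default
Console.WriteLine($"0x{BitConverter.SingleToInt32Bits(fromOldString):X8}"); // 0x3FB44053 -- 2 ULPs off
Console.WriteLine(fromNewString == d);                                      // True: roundtrips exactly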
We have a blog post describing these changes in more detail here: https://devblogs.microsoft.com/dotnet/floating-point-parsing-and-formatting-improvements-in-net-core-3-0/ and we document the breaks here as well: https://docs.microsoft.com/en-us/dotnet/core/compatibility/2.2-3.0#floating-point-formatting-and-parsing-behavior-changed