double.ToString and float.ToString have incorrect behavior in the following cases:
Console.WriteLine($"Expected 0, got : {((double)(-0.0)).ToString()}"); //prints "-0"
Console.WriteLine($"Expected 0, got : {((float)(-0.0)).ToString()}"); //prints "-0"
Console.WriteLine($"Expected 0, got : {((decimal)(-0.0)).ToString()}"); //Correct behavior, prints "0"
Console.WriteLine($"Expected 0, got : {(Math.Round(-0.0)).ToString()}"); //prints "-0"
Console.WriteLine($"Expected 0, got : {((int)(-0.0)).ToString()}"); //Correct behavior, prints "0"
Console.WriteLine($"Expected 0, got : {((int)Math.Round(-0.0)).ToString()}"); //Correct behavior, prints "0"
Console.WriteLine($"Expected 0, got : {((double)(+0.0)).ToString()}"); //Correct behavior, prints "0"
Console.WriteLine($"Expected True, got : {0.0 == -0.0}"); //Correct behavior, prints True
This is on .NET Core version 3.0.100-preview3-010431.
@bigworld12 -0 is a valid floating-point value per IEEE 754. We now emit it into the string so it can round-trip without loss.
@tannergooding
But shouldn't the default behavior be to not include the negative sign for 0? Especially since 0.0 == -0.0 is true, parsing "-0.0" should give the same result as parsing "0.0".
I am not really fond of having -0.0 as a valid value. Even if the standard specifies it as such, it's not pragmatic, and it will add many unnecessary special cases.
It looks like negative zero was also used in previous .NET Framework versions, so breaking the standard now shouldn't be an option, since it would break backward compatibility and basic networking communication.
So I think the .ToString default should at least remove the negative sign.
But shouldn't the default behavior be to not include the negative sign for 0? Especially since 0.0 == -0.0 is true, parsing "-0.0" should give the same result as parsing "0.0".
Yes, +0.0 == -0.0, but that does not mean that they behave the same when given as inputs to various operations. There are many operations (including simple ones like multiplication and division) where -0.0 is either returned or impacts the result of an operation. For example, 1.0 / -0.0 is defined to return -Infinity.
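To illustrate, here is a minimal C# sketch; the double.IsPositiveInfinity/IsNegativeInfinity checks are used so the output does not depend on culture-specific infinity formatting:

```csharp
using System;

class SignedZeroDivision
{
    static void Main()
    {
        // The two zeros compare equal...
        Console.WriteLine(0.0 == -0.0);                            // True

        // ...but they steer division to opposite infinities.
        Console.WriteLine(double.IsPositiveInfinity(1.0 / 0.0));   // True
        Console.WriteLine(double.IsNegativeInfinity(1.0 / -0.0));  // True

        // The sign of zero also selects the branch in Math.Atan2:
        // atan2(+0, -1) is +pi, while atan2(-0, -1) is -pi.
        Console.WriteLine(Math.Atan2(0.0, -1.0) > 0);              // True
        Console.WriteLine(Math.Atan2(-0.0, -1.0) < 0);             // True
    }
}
```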
I am not really fond of having -0.0 as a valid value, even if the standard specifies it as such
The IEEE 754 specification, which defines the binary floating-point formats (such as binary32 and binary64, which the runtime and C# language specifications map to System.Single and System.Double, respectively) and their behaviors, specifies:
The conversions (described in 5.4.2) from supported formats to external character sequences and back that recover the original floating-point representation, shall recover zeros, infinities, and quiet NaNs, as well as non-zero finite numbers. In particular, signs of zeros and infinities are preserved.
So, we are aligning with the official standard, which specifies that signs of zeros should be preserved both when converting to and from an external character sequence (i.e. a string).
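A small sketch of that round-trip behavior on .NET Core 3.0+ (double.IsNegative has been available since .NET Core 2.1; the invariant culture is used so the decimal point parses consistently):

```csharp
using System;
using System.Globalization;

class SignedZeroRoundTrip
{
    static void Main()
    {
        double parsed = double.Parse("-0.0", CultureInfo.InvariantCulture);

        // The parsed value compares equal to zero...
        Console.WriteLine(parsed == 0.0);                                          // True

        // ...but the sign bit was preserved, as the raw bits show.
        Console.WriteLine(double.IsNegative(parsed));                              // True
        Console.WriteLine(BitConverter.DoubleToInt64Bits(parsed).ToString("X16")); // 8000000000000000
    }
}
```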
it's not pragmatic, and this will add many unnecessary special cases
You would already have had some of these special cases, as the language compilers (such as C#, VB, or F#; as well as non-.NET languages such as C/C++, Python, JavaScript, Java, or Rust) will correctly preserve the sign when parsing either -0.0 or 0.0. Most other frameworks (and now .NET in netcoreapp3.0 and higher) would likewise have correctly preserved the sign when parsing these values using the built-in floating-point parsing functions (in the case of .NET these are System.Double.Parse and System.Single.Parse).
These values can be useful in a number of scenarios and hiding the fact that you are dealing with a value that may change the behavior of your underlying algorithm makes it more difficult to debug or diagnose various issues. Attempting to normalize these values such that they are always +0.0 would be costly and would prevent them from being used where they do provide additional benefits.
It looks like negative zero was also used in previous .NET Framework versions, so breaking the standard now shouldn't be an option, since it would break backward compatibility and basic networking communication.
.NET Core is willing to make breaking changes (when they make sense) in major versions. This is one of the cases where such a break makes sense: it aligns us with the IEEE 754 specification, aligns us with what other languages/frameworks do when handling these values, and allows us to resolve a number of bugs that have been filed because the parsing/formatting behavior was non-compliant and differed from other languages/frameworks that implement IEEE 754. We are trying to ensure that cases like this are made publicly visible and explained to users, for example via: https://devblogs.microsoft.com/dotnet/floating-point-parsing-and-formatting-improvements-in-net-core-3-0/
Then I suggest you explicitly state that -0.0 is a thing, and at least provide some workarounds in the documentation.
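One possible workaround, if an application really wants -0.0 rendered as "0", is sketched below. This is an illustration, not an official API, and the NormalizeZero name is hypothetical: adding +0.0 maps -0.0 to +0.0 under the default IEEE 754 rounding mode while leaving every other value, including NaN and the infinities, unchanged.

```csharp
using System;
using System.Globalization;

class NegativeZeroWorkaround
{
    // Hypothetical helper: -0.0 + 0.0 evaluates to +0.0 under IEEE 754
    // round-to-nearest, and any non-zero value is unaffected by adding zero.
    static double NormalizeZero(double value) => value + 0.0;

    static void Main()
    {
        // Invariant culture keeps the decimal separator predictable.
        Console.WriteLine(NormalizeZero(-0.0).ToString(CultureInfo.InvariantCulture)); // "0"
        Console.WriteLine(NormalizeZero(1.5).ToString(CultureInfo.InvariantCulture));  // "1.5"
        Console.WriteLine(NormalizeZero(-1.5).ToString(CultureInfo.InvariantCulture)); // "-1.5"
    }
}
```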
With double and float, it is by definition generally not possible to represent values exactly (unlike decimal). If you have a calculation that should produce 0.0 as its result, it might internally be stored as something like -0.00000000001.
Should the sign be removed then, or must the value be even closer to zero? How close to zero must it be for us to remove the sign? In some types of calculations the sign may be important; in others it is not.
I understand your question, but this is as it should be.
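For example, a calculation whose exact result is a tiny negative number can underflow all the way to negative zero, so the sign genuinely carries information about where the value came from (illustrative sketch):

```csharp
using System;

class UnderflowToNegativeZero
{
    static void Main()
    {
        // The exact product is -1e-600, far smaller in magnitude than the
        // smallest subnormal double (~4.9e-324), so it underflows to -0.0.
        double tiny = -1e-300 * 1e-300;

        Console.WriteLine(tiny == 0.0);              // True
        Console.WriteLine(double.IsNegative(tiny));  // True
    }
}
```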
I don't mind having that behavior, but I think users will hate being surprised by things like this. If this is at least explicitly documented I wouldn't have any problems, but I still don't like some inconsistencies, e.g.:
1 / -0.0 = -inf
but
1 / -inf = 0.0
and
-double.Epsilon / 2.0 = 0, not -0
but I still don't like some inconsistencies.
These are all cases that are spec'd as returning -0 and which can be shown to return -0 (even on .NET Framework). Take, for example, the following program. It is very easy to miss that some of these return -0 on .NET Framework because double.ToString just prints "0", and you can only detect negative zero by explicitly checking the sign or printing the raw bits.
Console.WriteLine($"{0.0,-25}{BitConverter.DoubleToInt64Bits(0.0):X16}");
Console.WriteLine($"{-0.0,-25}{BitConverter.DoubleToInt64Bits(-0.0):X16}");
Console.WriteLine($"{double.PositiveInfinity,-25}{BitConverter.DoubleToInt64Bits(double.PositiveInfinity):X16}");
Console.WriteLine($"{double.NegativeInfinity,-25}{BitConverter.DoubleToInt64Bits(double.NegativeInfinity):X16}");
Console.WriteLine($"{double.Epsilon,-25}{BitConverter.DoubleToInt64Bits(double.Epsilon):X16}");
Console.WriteLine($"{-double.Epsilon,-25}{BitConverter.DoubleToInt64Bits(-double.Epsilon):X16}");
Console.WriteLine($"{1 / 0.0,-25}{BitConverter.DoubleToInt64Bits(1 / 0.0):X16}");
Console.WriteLine($"{1 / -0.0,-25}{BitConverter.DoubleToInt64Bits(1 / -0.0):X16}");
Console.WriteLine($"{1 / double.PositiveInfinity,-25}{BitConverter.DoubleToInt64Bits(1 / double.PositiveInfinity):X16}");
Console.WriteLine($"{1 / double.NegativeInfinity,-25}{BitConverter.DoubleToInt64Bits(1 / double.NegativeInfinity):X16}");
Console.WriteLine($"{double.Epsilon / 2.0,-25}{BitConverter.DoubleToInt64Bits(double.Epsilon / 2.0):X16}");
Console.WriteLine($"{-double.Epsilon / 2.0,-25}{BitConverter.DoubleToInt64Bits(-double.Epsilon / 2.0):X16}");
.NET Framework (all versions) and .NET Core (prior to 3.0) prints the following (added comments and spacing for clarity):
0                        0000000000000000 // 0.0 = Positive Zero
0                        8000000000000000 // -0.0 = Negative Zero
∞                        7FF0000000000000 // double.PositiveInfinity = Positive Infinity
-∞                       FFF0000000000000 // double.NegativeInfinity = Negative Infinity
4.94065645841247E-324    0000000000000001 // double.Epsilon = Positive Epsilon
-4.94065645841247E-324   8000000000000001 // -double.Epsilon = Negative Epsilon
∞                        7FF0000000000000 // 1 / 0.0 = Positive Infinity
-∞                       FFF0000000000000 // 1 / -0.0 = Negative Infinity
0                        0000000000000000 // 1 / double.PositiveInfinity = Positive Zero
0                        8000000000000000 // 1 / double.NegativeInfinity = Negative Zero
0                        0000000000000000 // double.Epsilon / 2.0 = Positive Zero
0                        8000000000000000 // -double.Epsilon / 2.0 = Negative Zero
.NET Core 3.0 and later (and any other framework using the shared sources) prints the following:
0                        0000000000000000 // 0.0 = Positive Zero
-0                       8000000000000000 // -0.0 = Negative Zero
∞                        7FF0000000000000 // double.PositiveInfinity = Positive Infinity
-∞                       FFF0000000000000 // double.NegativeInfinity = Negative Infinity
5E-324                   0000000000000001 // double.Epsilon = Positive Epsilon
-5E-324                  8000000000000001 // -double.Epsilon = Negative Epsilon
∞                        7FF0000000000000 // 1 / 0.0 = Positive Infinity
-∞                       FFF0000000000000 // 1 / -0.0 = Negative Infinity
0                        0000000000000000 // 1 / double.PositiveInfinity = Positive Zero
-0                       8000000000000000 // 1 / double.NegativeInfinity = Negative Zero
0                        0000000000000000 // double.Epsilon / 2.0 = Positive Zero
-0                       8000000000000000 // -double.Epsilon / 2.0 = Negative Zero
@bigworld12 this is just introducing IEEE 754 compliance, and being pragmatic like you requested is the wrong way to go IMHO. If users are surprised, it's because they're not familiar with the standard, and the framework shouldn't do things 'wrong' just because that is less confusing to people who don't know how it's supposed to work.
@ericsampson I also think that doing the right thing is more important, but this is a sudden breaking change, so it needs to be explicitly stated which behaviors should change and which shouldn't. The -0 output can confuse a lot of users who aren't used to the IEEE standard, so I think this needs to be better documented.
The -0 output can confuse a lot of users who aren't used to the IEEE standard, so I think this needs to be better documented.
No, some users simply need to be better educated. dotnet is not written for failing college students.
@JohnHolmesII in this repo we ask that your comments follow the .NET Foundation code of conduct.
I think the discussion has deviated from what I originally started, so I am closing this.
@bigworld12, could you open an issue on dotnet/dotnet-api-docs tracking better documentation around -0 with regards to floating-point? We already call out +0, -0, PositiveInfinity, NegativeInfinity, and NaN as special values (e.g. https://docs.microsoft.com/en-us/dotnet/api/system.single?view=netcore-3.0), but we could likely add a small blurb explaining some of the special semantics of these values.
@tannergooding Created issue: https://github.com/dotnet/dotnet-api-docs/issues/2031
@ericsampson I also think that doing the right thing is more important, but this is a sudden breaking change, so it needs to be explicitly stated which behaviors should change and which shouldn't. The -0 output can confuse a lot of users who aren't used to the IEEE standard, so I think this needs to be better documented.
@bigworld12 thanks, I was just a little concerned by the direction that things were heading. We're on the same page! Documentation is always important. Cheers!