Runtime: Make epsilon great again

Created on 5 Dec 2018 · 3 comments · Source: dotnet/runtime

Double.Epsilon and Single.Epsilon have both had misleading values since .NET 1.0.

Details here: https://social.msdn.microsoft.com/Forums/azure/en-US/23c75283-c3c3-41bf-93d9-6b274593c4ed/singleepsilon-isnt

Both properties have a much smaller value than their names suggest: they hold the smallest positive subnormal rather than the machine epsilon. So, let's make these epsilons great again.

I propose adding new constants with the conventional values:

Single.MachineEpsilon = 1.192092896e-07F;
Double.MachineEpsilon = 2.2204460492503131e-016;
api-suggestion

Most helpful comment

I do not believe that this is worth fixing. A short summary is:

  • The IEEE 754 specification defines no such constant and the closest concept is not defined in terms of something that can be a constant
  • Defining this new constant will likely not solve most issues users may be encountering with floating-point comparison

Let's start by defining epsilon as the delta between a given representable number and the next representable number. This can be used to determine the maximum difference between a given value and the infinitely precise result, regardless of the current rounding direction. This information is useful to have because:

  • The epsilon between two representable numbers is not constant (it changes every power of 2)
  • IEEE operations are meant to compute the infinitely precise result and then round (using the current rounding direction) to the nearest representable result

To touch on the first point:

  • The epsilon between 0.0 and its next representable value is 4.9406564584124654E-324
  • ...
  • The epsilon between 0.25 and its next representable value is 5.5511151231257827E-17
  • The epsilon between 0.5 and its next representable value is 1.1102230246251565E-16
  • The epsilon between 1.0 and its next representable value is 2.2204460492503131E-16
  • The epsilon between 2.0 and its next representable value is 4.4408920985006262E-16
  • The epsilon between 4.0 and its next representable value is 8.8817841970012523E-16
  • ...

To touch on the second point, let's look at "0.1 + 0.2 == 0.3", which is 5 floating-point operations (the result is rounded 4 times):

  • The first three operations happen in the compiler and involve parsing the literal strings:

    • The string "0.1" is parsed to the infinitely precise result "0.1", it is then rounded to "0.10000000000000001" which is the nearest representable result

    • The string "0.2" is parsed to the infinitely precise result "0.2", it is then rounded to "0.20000000000000001" which is the nearest representable result

    • The string "0.3" is parsed to the infinitely precise result "0.3", it is then rounded to "0.29999999999999999" which is the nearest representable result

  • The next operation is the addition of "0.1" and "0.2"

    • As we've already seen, the computed results of these strings are not exact

    • The two values are added together to the infinitely precise result "0.30000000000000002", it is then rounded to "0.30000000000000004" which is the nearest representable result

  • The next operation is comparison of the previous result with the result of parsing "0.3"

    • The comparison fails because "0.30000000000000004" and "0.29999999999999999" are not equal

    • The bit representations of these two values are 0x3FD3333333333334 and 0x3FD3333333333333, respectively (which shows they differ by one bit)

  • As a note, the above "nearest representable results" are shown to 17 digits (which is the most required for roundtripping between a string and an exact representable double floating-point value). The exact value represented may consist of more digits

Defining a new constant that matches the C/C++ defined epsilon will not fix issues when comparing the delta between results less than 0.5 or greater than 2.0, just as the current epsilon does not work well when comparing results greater than the largest subnormal. Instead, users should look at using things like the newly exposed Math.BitIncrement and Math.BitDecrement (which correspond to the IEEE nextUp and nextDown operations and the C/C++ nexttoward function). They should also consider checking whether the actual result and the expected result are within an acceptable tolerance of each other (for which the minimum tolerance is 1 bit and the maximum tolerance is user-defined, depending on a number of factors).

All 3 comments

cc @tannergooding


@tannergooding Thanks for the detailed answer. As I can see, Math.BitIncrement and Math.BitDecrement let you do the same things as a machine epsilon would.

With regard to the usage of epsilon: it can be used to sum a convergent series with the maximum available precision.

