We're not going to make complex numbers projective by default, but https://github.com/JuliaLang/julia/issues/9790 does raise legitimate concerns about behavioral differences between division by real and complex zeros, e.g.:
julia> for T1=[Int, Float64, Complex{Int}, Complex128], T2=[Int, Float64, Complex{Int}, Complex128]
           println((T1, T2), " => ", T1(1)/T2(0))
       end
(Int64, Int64) => Inf
(Int64, Float64) => Inf
(Int64, Complex{Int64}) => NaN + NaN*im
(Int64, Complex{Float64}) => NaN + NaN*im
(Float64, Int64) => Inf
(Float64, Float64) => Inf
(Float64, Complex{Int64}) => NaN + NaN*im
(Float64, Complex{Float64}) => NaN + NaN*im
(Complex{Int64}, Int64) => Inf + NaN*im
(Complex{Int64}, Float64) => Inf + NaN*im
(Complex{Int64}, Complex{Int64}) => NaN + NaN*im
(Complex{Int64}, Complex{Float64}) => NaN + NaN*im
(Complex{Float64}, Int64) => Inf + NaN*im
(Complex{Float64}, Float64) => Inf + NaN*im
(Complex{Float64}, Complex{Int64}) => NaN + NaN*im
(Complex{Float64}, Complex{Float64}) => NaN + NaN*im
More generally, the question is about the menagerie of complex NaNs.
So, right now, if you divide anything by a complex zero, you get NaN + NaN*im, and if you divide a complex 1 by a real zero, you get Inf + NaN*im. Seems pretty reasonable and consistent to me. What is the problem?
Seems like the real part of the value you get shouldn't depend on the type of the denominator – if you have code that does real(x/y), getting Inf or NaN are pretty different results. This is especially true since dividing the components of a complex number each by a real denominator is, in some sense, just an optimized shortcut for doing promotion first and then complex division, so it seems like it should give the same results as that would.
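To make the contrast concrete, here is a minimal Julia sketch of the two code paths (the componentwise shortcut versus explicit promotion; the expected outputs match the table above):

z = 1.0 + 0.0im

# Componentwise shortcut for a real denominator: divide each part separately.
Complex(real(z) / 0.0, imag(z) / 0.0)   # Inf + NaN*im, since 0.0/0.0 is NaN

# Promote first, then do a full complex division by 0.0 + 0.0im.
z / Complex(0.0, 0.0)                   # NaN + NaN*im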
For the record, in NumPy real(1) / complex(0) matches complex(1) / complex(0):
>>> import numpy
>>> numpy.float64(1) / numpy.complex128(0)
(inf+nanj)
>>> numpy.complex128(1) / numpy.complex128(0)
(inf+nanj)
The reason that 1.0 / +0.0 gives Inf in IEEE is that you view it as essentially the limit of 1/x as x → 0⁺ (0 from above). However, 1.0 / (+0.0 + 0.0im) is ambiguous: in what direction in the complex plane do you take the limit (even if you restrict yourself to the positive quadrant)? Returning Inf or Inf+0im would mean choosing that the 0.0im is "really" zero whereas the real part is a limit — why would you make this choice? Returning Inf + NaN*im would imply that you are treating 1.0 as "really" the limit of 1.0 + x*im as x → 0, and you get a NaN from the ambiguity between the limits of the numerator and denominator.
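To see the direction dependence numerically, here is a quick Julia sketch that approaches zero along two different axes (1e-30 standing in for a small positive value; outputs assume Float64 arithmetic):

1 / (1e-30 + 0.0im)   # denominator → 0 along the real axis: ≈ 1.0e30 + 0.0im
1 / (0.0 + 1e-30im)   # denominator → 0 along the imaginary axis: ≈ 0.0 - 1.0e30*im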
Our current behavior, in my mind, is much more sensible:
- If you have an x::Real, then imag(x) is treated as "really" zero, not a limit.
- If you have a 0.0 + 0.0im, then it is treated (for purposes of defining most arithmetic ops) as the limit of x + y*im as x → 0⁺ and y → 0⁺, but the exact direction in the complex plane (the relationship between x and y) is undefined.
- (The exception to all of this is 0.0^0.0 and similar, which would be undefined/NaN if treated as a limit, but which IEEE/ISO and longstanding convention in computer arithmetic define to be 1.)
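All three rules are easy to check at the REPL; a quick sketch, with the first two outputs as reported in the table above:

(1.0 + 0.0im) / 0.0   # Inf + NaN*im: componentwise, since the denominator is "really" real
1.0 / (0.0 + 0.0im)   # NaN + NaN*im: the complex zero's limit direction is undefined
0.0 ^ 0.0             # 1.0: the conventional exception, not treated as a limit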
That being said, gcc -std=c99 apparently defines this as Inf + NaN*im:
#include <complex.h>
#include <stdio.h>

int main(void)
{
    double complex z = 0.0 + 0.0*I;
    double complex w1 = 1.0 / z;
    double complex w2 = (1.0 + 0.0*I) / z;
    printf("%g+%gi\n", creal(w1), cimag(w1));
    printf("%g+%gi\n", creal(w2), cimag(w2));
    return 0;
}
prints:
inf+nani
inf+nani
1/(0 + 0im) == Inf + NaN*im only makes sense to me if you lack a specialized real/complex method, i.e. you define this operation by first promoting both arguments to Complex{Float64} and thereby "lose" the fact that imag(1) is "really" zero, and even then it involves picking a direction for the limit.
ISO/IEC 10967-3:2006(E), "Complex integer and floating point arithmetic and complex elementary numerical functions", section 5.2.6.3 "Complex division", apparently defines only complex division between two complex floating-point values, and says nothing about complex division with a real floating-point numerator. Note, however, that its definition apparently mandates that (1.0 + 0.0im) / (0.0 + 0.0im) produce ((1.0*0.0+0.0*0.0) + (0.0*0.0-1.0*0.0)*im) / (0.0*0.0 + 0.0*0.0), which is NaN+NaN*im, consistent with what Julia currently does (but inconsistent with Python or gcc!).
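Evaluating that formula step by step in IEEE arithmetic makes the result clear; a sketch in Julia (variable names are mine, not the standard's):

num_re = 1.0*0.0 + 0.0*0.0        # 0.0
num_im = 0.0*0.0 - 1.0*0.0        # 0.0
den    = 0.0*0.0 + 0.0*0.0        # 0.0
Complex(num_re/den, num_im/den)   # NaN + NaN*im, since 0.0/0.0 is NaN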
Aha, section 5.2.5.5 "Fundamental complex floating point arithmetic" of the ISO standard does define division for mixed real/complex arguments. Note that:
- (x + y*im) / z is defined as (x/z) + (y/z)*im, which means (1.0 + 0.0im) / 0.0 should give Inf + NaN*im, consistent with our current behavior.
- x / (z + w*im) is defined as (x + imF(x)*im) / (z + w*im), where imF(x) is defined earlier as copysign(0.0, -x) (note the sign!). As discussed in my previous post, the standard appears to mandate that 1.0 / (0.0 + 0.0im) = (1.0 - 0.0im) / (0.0 + 0.0im) yield NaN + NaN*im, consistent with our current behavior.

So, my conclusion is that our current behavior seems to follow the ISO standard. (It's mysterious to me that gcc and Python do not, however!)
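In other words, promoting 1.0 with imF(1.0) == -0.0 and then applying the complex/complex rule gives, as a sketch:

(1.0 - 0.0im) / (0.0 + 0.0im)   # NaN + NaN*im, matching Julia's 1.0 / (0.0 + 0.0im)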
Is there a short answer to why it is picky about the sign of the 0.0im in the numerator?
@tkelman, 1.0 / (1.0 + 0.0im) should give 1.0 - 0.0im, which matches (1.0 - 0.0im) / (1.0 + 0.0im), but (1.0 + 0.0im) / (1.0 + 0.0im) gives 1.0 + 0.0im.
"1.0 / (1.0 + 0.0im) should give 1.0 - 0.0im"

Why is that? I'm missing why the sign of the imaginary component of that should be negative.
@tkelman, because the imaginary part of 1/(1 + x*im) always has the opposite sign of x, since 1/(1 + x*im) = (1 - x*im) / (1 + x^2). Now take the limit as x → 0⁺, and you'll see that the answer should be 1 + 0⁻*im, or Complex(1.0, -0.0).
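A quick numerical check of the sign flip with a small nonzero x (a sketch, assuming Float64):

1 / (1 + 1e-8im)   # ≈ 1.0 - 1.0e-8im: the imaginary part has the opposite sign of x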
"but inconsistent with Python or gcc"

Also Clang gives me the same result as GCC for your C program. And both GCC/Clang and NumPy give 1.0 + 0.0im for 1 / (1.0 + 0.0im). Actually, I'm not able to obtain 1.0 - 0.0im with NumPy:
>>> import numpy as np
>>> np.complex128(1.0 - 0.0j)
(1+0j)
>>> np.imag(np.complex128(1.0 - 0.0j))
array(0.0)
The question now seems to be: adhere to mathematics (and the standards), or follow the crowd? I'd say the former.
Whatever the outcome of this discussion, it's probably a good idea to test these corner cases, to be sure they won't change without notice.
@giordano, to get 1.0 - 0.0im in Python, you have to use the two-argument constructor complex(1, -0.0) or numpy.complex128(complex(1, -0.0)).
R also gives:
> 1/(0+0i)
[1] Inf+NaNi
Matlab is even worse, naturally:
>> 1 / (0+0i)
ans =
Inf
and GNU Octave does the same thing (with a warning: division by zero).
In gfortran, they had a discussion about this and eventually settled on always returning NaN + NaN*im, apparently:
Closing as won't fix. Currently, gfortran produces nan + i nan for
finite-complex / 0 or finite-complex / (0,0) with its FE constant
folding and when the middle-end generates code. The Fortran standard
does not provide any guidance with respect to exceptional FP values,
and trying to get the same output of an equivalent C program is not
required by either standard.
I'm still surprised that gcc doesn't follow the ISO standard here. Am I misreading it? It seems very clear to me.
(I also tried g++ with std::complex<double>, but unsurprisingly it is the same as gcc. Would be interesting if someone could try the Intel or Microsoft compilers.)
I opened PR #23013 to add tests for the current behavior.
It seems like we should either do nothing and continue following the standard, or change complex/real to match the result of complex/complex(real).
My suspicion is that the ISO standard defined complex/real this way to make it just a bit more efficient, not for correctness, but once you're in division-by-zero territory, there are no "correct" options.
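That is, the componentwise rule is the cheaper of the two plausible definitions. A minimal sketch with hypothetical helper names (not Julia's actual methods):

# Standard's componentwise rule: just two real divisions.
div_componentwise(w::Complex, z::Real) = Complex(real(w)/z, imag(w)/z)

# Alternative: promote and run a full complex division (more multiplies and
# adds, plus scaling logic in robust implementations).
div_promoting(w::Complex, z::Real) = w / Complex(z)

div_componentwise(1.0 + 0.0im, 0.0)   # Inf + NaN*im
div_promoting(1.0 + 0.0im, 0.0)       # NaN + NaN*im
# Away from zero the two definitions agree.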
+1 for doing nothing and continuing to follow the standard.
Given that floating-point numbers include ±Inf, ±0, and NaN, some options for the result of division by zero are more sensible than others.
My feeling is that the ISO standard's choice is very reasonable, arguably more reasonable than Python or gcc. As I explained above, you get that behavior if you view ±0.0 as a limit (for the purposes of defining operations), with the limits on the real and imaginary parts of 0.0+0.0im taken independently, but treat imag(::Real) as "really" zero.
That was the feeling on the triage call as well, so let's close this.
"Would be interesting if someone could try the Intel or Microsoft compilers."

Sorry for necroposting this, but I just had the occasion to try the Intel compiler:
complex.c:
#include <complex.h>
#include <stdio.h>

int main(void)
{
    double complex z = 0.0 + 0.0*I;
    double complex w1 = 1.0 / z;
    double complex w2 = (1.0 + 0.0*I) / z;
    printf("(%g,%g)\n", creal(w1), cimag(w1));
    printf("(%g,%g)\n", creal(w2), cimag(w2));
    return 0;
}
$ icc --version
icc (ICC) 18.0.3 20180410
Copyright (C) 1985-2018 Intel Corporation. All rights reserved.
$ icc -std=c99 complex.c && ./a.out
(inf,-nan)
(inf,-nan)
complex.cpp:
#include <complex>
#include <iostream>

int main(void)
{
    std::complex<double> z(0.0, 0.0);
    std::complex<double> w1 = 1.0 / z;
    std::complex<double> w2 = std::complex<double>(1.0, 0.0) / z;
    std::cout << w1 << std::endl;
    std::cout << w2 << std::endl;
    return 0;
}
$ icc complex.cpp && ./a.out
(inf,-nan)
(inf,-nan)
In addition, Mathematica gives ComplexInfinity in these cases.