julia> log10(1000), log(10, 1000)
(3.0, 2.9999999999999996)
It fails for 10^{3, 6, 9, 12, 13, 15, ..}. The difference is only one eps(3.0).
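A quick REPL check confirms the gap is exactly one eps(3.0):

julia> 3.0 - 2.9999999999999996, eps(3.0)
(4.440892098500626e-16, 4.440892098500626e-16)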
I guess it is a surprise if you think one is just a different name for the other, but log10 is a specialized function, right?
log10 calls into libm; log(10, x) is just log(x) / log(10).
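Reproducing that definition by hand gives the same one-ulp miss:

julia> log(1000) / log(10)
2.9999999999999996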
It is less a capability than an unchecked error to have two functions appear to compute the same thing when they do not.
It would be better to have log(10, x) = log10(x) and log(2, x) = log2(x).
What is done now will cost someone and their firm a great deal of money, if it persists into v1.
Do you also think this is a problem:
julia> sin(π)
1.2246467991473532e-16
julia> sinpi(1)
0.0
@JeffreySarnoff Sorry, but if someone's business model or profit depends on floating point epsilons, he/she shouldn't be in business anyway...
I think the existence of log10(x) suggests that log(10, x) is somehow lacking, though it would be nice if this were documented.
Alternatively, we could special-case 10 and 2 (and possibly powers thereof).
@lobingera Somehow excellence in programming is the exception to normally distributed talent?
I think it should be well understood (and could be better documented) that different math functions are suitable/optimized for different cases. It's possible to special-case log(2, n) and log(10, n), but the question is where do we stop? There are tons of other cases where mathematically equivalent expressions can give results that differ by a few eps, e.g.

julia> sinpi(1 / 8) - sqrt(2 - sqrt(2)) / 2
5.551115123125783e-17

and it's basically impossible to special-case all of them. Given that the right function, optimized for this case and doing the right thing, exists, the user should just use it directly.
@JeffreySarnoff No, I mean you should know or be aware of the limitations of the system you are using. FP is a nice approximation, and since most of the time we face physical limitations in measuring anything, it matches most of the calculations we do. If you need exact math, use another system. Numerical problems exist, and experts should solve them, not the average programmer.
@simonbyrne that is what I had been thinking, too ... special-case 10 and 2 and some powers.
@yuyichao My concern is not about the right and careful use of Julia's functions, nor about the slippery slope of unending similar requests. Lots of kinds of experts write software to support their work -- once upon a time VisiCalc ruled the roost, and HR managers would spend weekends working on three inscrutable macros which later permeated most computers in their departments, and in departments that had people who wanted to report their hours the 'new' way. This specific, illusory, 'intermittent-by-value' dissonance of quantity by log_10 is precisely the sort of functionality that many in the financial world rely upon. The fact that one is just 1 ulp away from the other is no safeguard -- because people ... intelligent, successful professionals ... will create indirect paths that choke when it happens that log(10, 1000) < 3.0, because they will have become familiar with log10(1e+SmallInt) being entirely reliable. There will be checks rather than equality tests, and still somehow if x < 3.0 appears.
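Concretely, that trap looks like this (using the values from the top of the thread):

julia> log(10, 1000) < 3.0
true

julia> log10(1000) < 3.0
false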
I see this as an opportunity for Julia to do right by those future users who bring cash into Julia's ecosystem with an expectation of smooth sailing.
If we special-case 2 and 10, we'll just be waiting for the next bug report that log(10, 1000000) / 2, log(100, 1000000), log10(1000000) / 2, log(10, 1000), and log10(1000) don't all agree with each other, until we special-case all the powers of 2 and 10.
There are not that many powers of 10 that fit in an Int64.
Well, the first argument accepts a floating-point number, so you are talking about a few hundred special cases.
I agree that it's a slippery slope: do we handle cases like 1e30, which isn't actually equal to 10^30? For now, let's just add a note to the log documentation noting that the alternatives for bases 2 and 10 are more accurate (and possibly faster as well).
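Something along these lines in the log docstring would do (wording hypothetical):

    Note: for bases 2 and 10, the specialized functions log2 and log10
    are usually more accurate, and may be faster as well.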
Really, all I am talking about is Julia's approach to the smoothing of a sharp edge. The real-world values that are problematic are 10^{3, 6, 9, 12, 13, 15, 18}, and if those were special-cased, the commodities traders would sleep better. It is not about covering everything, just those. And the attendant irregularity would be more regular than its absence.
"The real world values that are problematic are 10^{3, 6, 9, 12, 13, 15, 18} and if those were special cased"

I don't see why those values are special; they just happen to be the powers of 10 that fit in an Int64 for which log(10, n) is not that accurate for the particular libm implementation and hardware you are using.

"the commodities traders would sleep better."

They'll sleep much better (and also make others sleep better) if they just use the right function.
I see your point, but the additional complexity would be difficult to handle without a large performance hit.

In the longer term, it might be possible to fix this in a more comprehensive manner: computing log(b, x), where x = 2^k * (1 + f), as

k/log2(b) + log1p(f)/log(b)

where the computations are all performed using sufficient extended precision. But a lot of work and validity checking would need to be done to get this right.
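A minimal sketch of that decomposition, without the extended-precision arithmetic a real implementation would need (log_decomposed is a hypothetical name, not proposed Base code):

function log_decomposed(b::Float64, x::Float64)
    # For positive normal x, significand/exponent give x == 2^k * (1 + f)
    f = significand(x) - 1.0
    k = exponent(x)
    # log_b(x) == k * log_b(2) + log_b(1 + f)
    return k / log2(b) + log1p(f) / log(b)
end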
It seems reasonable to me to have log(b, x) call log2(x) or log10(x) for the special cases of b == 2 and b == 10. This is the usual practice with special functions: for particular cases of the arguments, you often call optimized implementations. (This seems no different to me from polygamma(m, x) calling digamma for m = 0 and trigamma for m = 1.)
Quite apart from accuracy considerations, in a quick benchmark it seems that the specialized log2 and log10 functions are 50–70% faster than log for those bases. The question is whether we should support log(10^k, x) as well, and if so, how to make it performant.
Hmm, I just tried mylog(b, x) = b == 2 ? log2(x) : b == 10 ? log10(x) : log(b, x), and unfortunately the cost of the branches leads to a 30% slowdown when b is not 2 or 10. I.e., it seems that log is too cheap for this optimization to be worthwhile.
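For reference, a measurement along those lines might look like this (assuming the BenchmarkTools package; numbers are machine-dependent):

using BenchmarkTools

mylog(b, x) = b == 2 ? log2(x) : b == 10 ? log10(x) : log(b, x)

x = 12345.6
@btime log(7.0, $x)    # generic two-argument log
@btime mylog(7.0, $x)  # same computation, plus the two branches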
There are a host of such micro-optimizations that could make sense. For example:

- Why do people need to call log1p explicitly? Could log itself check for arguments close to 1, so that the issue here is moot?
- If the compiler sees that the first argument to log is the constant 2 or 10, should it automatically perform the replacement? (It probably already does so for pow.)
- People know that log2 is faster than the generic two-argument log, and will use it where possible, so that such an if statement will only trigger very rarely at run time, and will in the end increase run time.

@eschnett, log1p is not an optimization, it's a critical function for accuracy in important cases, no different from erfc or expm1: without it, you can easily lose _everything_ to underflow, and there is no good alternative. Long experience has shown that this special function is useful and important.
As I said above, a runtime check in log for b == 2 or b == 10 unfortunately seems to be a net loss.
An option would be for the parser to replace log(10, x) and log(2, x). Of course, that wouldn't take care of cases like log(10^k, x), log(100, x), log(4, x), or such.
"log1p is not an optimization"

I assume he meant using log1p for the special cases? The parser shouldn't be involved here.
The parser can't do it since it doesn't know the binding of log. E.g., it is perfectly valid for the user to have

function myfunction(y)
    log(x, y) = atan2(x, y)
    return log(2, y)
end
Oh right. I forgot about that.
Regarding log1p (I know why it is necessary): if we introduce a special case for log(10, x), then why not for log(x), in cases where x is near 1? This would serve the same purpose -- it allows people to call the generic log function without being aware of floating-point round-off, and still get good results.
Regarding automatically transforming log(10, x) to log10(x): this needs to be implemented as an optimization pass, not in the parser. You would want to run it after inlining and constant folding. This might already be an LLVM pass; if not, one could easily be introduced. I'm quite certain that e.g. pow(x, 2.0) is replaced by powi(x, 2), and the latter might be replaced by x*x.
Is the log(base, x) method really necessary to define in Base? I'm fine with log2 and log10, but the general-base version seems like a very simple function that is rarely useful.
"then why not for log(x), in cases where x is near 1"

Because that won't help, since the rounding error in log(1 + x) comes from 1 + x, not log.
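A one-liner makes the point:

julia> x = 1e-17; log(1 + x), log1p(x)
(0.0, 1.0e-17)

Here 1 + x rounds to exactly 1.0 before log ever sees it.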
"Regarding automatically transforming log(10, x) to log10(x): this needs to be implemented as an optimization pass, not in the parser."

It's not an optimization (it can change the result), so it shouldn't be done.
"Is the log(base, x) method really necessary to define in Base?"

If there's a better way to implement it, it's useful to keep it in Base.
"If there's a better way to implement it, it's useful to keep it in Base"

Having a non-trivial implementation is a reason to keep it, but is the function really that useful? How often do people need to compute logarithms with bases that are not 2, e, or 10?
As often as people who use ^ with a non-2/e/10 base and non-const power.
"As often as people who use ^ with a non-2/e/10 base and non-const power."

I think most people have been taught to solve a^x = b by computing x = log(b)/log(a) and not by using the base-a logarithm, so, empirically, I don't think the base logarithm method is used that much. I also think that computing a^b is a much more frequent computation than solving a^x = b, but that is not the point here.
@yuyichao I'm not really advocating it, but replacing log(1 + x) for small x with log1p(x) would solve the accuracy problem.
Also, regarding replacing log(10, x) by log10(x): I thought @stevengj measured that log10 was faster? Then it does count as an optimization.
The problem is then that

b = 10
log(b, x)

and

log(10, x)

can give different answers.
As @simonbyrne said. Replacing log(10, x) by log10(x) consistently (i.e. whenever the first argument to log is 10) is an acceptable optimization for this case if it doesn't cause performance issues for other cases. Replacing it only when the compiler can prove that the first argument is 10 is not an optimization that can be on by default.
@simonbyrne Yes; that's the same case as for

n = 2.0
x ^ n

and

x ^ 2.0

The former might be lowered to LLVM's pow intrinsic, while the latter is likely to be optimized to a powi intrinsic, or even a straight multiplication. That is much faster, and potentially more accurate -- i.e. answers might differ.
But Julia intentionally doesn't do this, and this would be quite a significant change if it were to (this is similar to the proposal to make decimal literals a non-float type).
The only exception I can think of is floating-point division using @fastmath:

julia> 7.0/3.0
2.3333333333333335

julia> foo(x) = @fastmath x/3.0; foo(7.0)
2.333333333333333
It does do it for integer exponents:
foo(x) = x^2
But typically a pow function would also check for the exponent being 2 and reduce to a multiplication, so this should give the same answer.
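I.e., something like this hypothetical sketch (not libm's actual code):

# Many pow implementations short-circuit small constant exponents,
# so x ^ 2.0 and x * x would agree.
mypow(x::Float64, n::Float64) = n == 2.0 ? x * x : x ^ n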
^ for integer exponents uses a different definition, so it always gives the same result no matter how the argument is provided. This is valid in the same sense that log(n, m) = n == 10 ? log10(m) : log(m) / log(n) is a valid implementation.
@fastmath could allow the log(2, x) -> log2(x) transformation. But I'm not sure I see the urgency; anyone willing to explicitly type @fastmath could also type log2 in performance-critical code.
"log10 is a specialized function right?" Yes, for log, log2, and log10, for machine floats (only) calls libm, see:
@edit log10(1000)
and:
log{T<:Number}(b::T, x::T) = log(x)/log(b)
log(b::Number, x::Number) = log(promote(b,x)...)
I checked what competing Python does (same results for 2.7.12 and 3.5.2), and it's at least not better, with exactly the same results (the order of parameters is different though.. not helping people migrate..):

>>> math.log(1000, 10)
2.9999999999999996
>>> math.log10(1000)
3.0

I did see another inconsistency..:

>>> math.log(1000)
6.907755278982137
>>> np.log(1000)  # NumPy did not work in 3.5.2, so didn't check..
6.9077552789821368
?log says: "There is an experimental variant in the Base.Math.JuliaLibm module, which is typically faster and more accurate." It didn't make a difference here (numerically; didn't check speed):

@edit Base.Math.JuliaLibm.log(1000)
"anyone willing to explicitly type @fastmath could also type log2 in performance-critical code."
Would people do the former out of habit..? Even on whole functions or more?
Closed by https://github.com/JuliaLang/julia/pull/18360 ?