Back when we started the Julia project, there were enough libm libraries in the wild doing strange things that having our own openlibm made sense. It was a fork of FreeBSD's msun, which has been patched heavily and works well for us now. The downside is that there are years of accumulated patches in the original msun that we are missing, and it is not straightforward to port them.
There is also the larger question of whether we should continue to maintain openlibm at all, since most system libms have greatly improved on that front, and issues that we discover can now be fixed with Julia implementations.
@tkelman also pointed me to SLEEF, which may allow for better vectorization and may eventually become part of LLVM; that could be a good direction for us to go in. The whole thread is worth reading.
Would love to hear thoughts from others on what to do with openlibm maintenance going forward:
Edit: Updated the options above to reflect the discussion. (VBS)
Also, just FWIW, the libm on Windows hasn't changed, so it's the one platform where this is still a major improvement over the system version. Linking with openlibm helped get some of the BLAS/LAPACK test suite to pass back in the day.
I would have expected them to have fixed their libm in the last 5 years. Well, that just means we have to do something, but perhaps it can be SLEEF rather than openlibm.
For the sake of "Julia is built on Julia", it would be nice if libm were coded in Julia.
Multiple dispatch is nice for writing a libm, since it is easy to special-case on types.
E.g. as in https://github.com/JuliaLang/julia/blob/master/base/special/log.jl and my own very minor contribution to exp2.
On the other hand, that does mean still maintaining our own libm.
But a lot more Julia programmers would contribute to a libm written in Julia than to one in C/Fortran.
And being in Julia, it should generate really nice LLVM code, no?
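As a concrete illustration of the dispatch point, here is a minimal, hypothetical sketch (not Base's actual algorithm): a truncated-series 2^x kernel where each precision gets coefficients in its own width via dispatch. The names `_exp2_poly` and `my_exp2` are made up for this example.

```julia
# Hypothetical per-type kernels: each float width gets coefficients in its
# own precision (just the first Taylor terms of 2^x, valid only near zero;
# a real kernel would do range reduction first).
_exp2_poly(x::Float64) = evalpoly(x, (1.0, 0.6931471805599453, 0.2402265069591007))
_exp2_poly(x::Float32) = evalpoly(x, (1.0f0, 0.6931472f0, 0.24022651f0))

# One generic entry point; dispatch picks the right kernel, and the
# compiler specializes and inlines it (inspect with @code_llvm).
my_exp2(x::Union{Float32,Float64}) = _exp2_poly(x)
```

The point is that the generic entry point costs nothing: after specialization, each method compiles down to the same code a hand-written per-type C routine would.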
There's also musl libc's libm implementation, which is largely derived from the same original FreeBSD sources but with a different set of improvements: http://git.musl-libc.org/cgit/musl/tree/src/math
Why isn't this issue in the openlibm repository?
I guess the point was the future of openlibm in Julia, i.e. whether we might adopt something different, rather than the future of openlibm itself.
One advantage of having libm functions coded in Julia (this doesn't have to be an all-or-nothing step) is that code generation choices such as @fastmath or @simd can easily affect these functions as well. This can have a large (positive) performance impact. Plus, a Julia implementation is probably smaller and easier to understand than a C implementation.
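A small sketch of why that matters; `poly_sin` below is a made-up kernel (a truncated Taylor series, only accurate near zero, not a real libm routine). Because the kernel is plain Julia code, the compiler can inline it into the annotated loop and vectorize the whole thing:

```julia
# Made-up kernel: truncated Taylor series for sin, illustration only.
poly_sin(x) = x - x^3/6 + x^5/120

# Because poly_sin is plain Julia, @simd can vectorize straight through it.
function sum_sin(xs::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(xs)
        s += poly_sin(xs[i])   # inlined into the loop body
    end
    return s
end
```

A ccall into a scalar C libm routine at the same spot would be an opaque call the vectorizer has to stop at.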
There are definite benefits to having this implemented in Julia. The SLEEF approach is perhaps the one to follow for maximum SIMD benefit, but the code is not that pretty to look at. I wonder how we might fare.
With multiple dispatch and multiple return values, SLEEF's code size should shrink by half. Add generic programming, and it will shrink further. I have high hopes.
It does seem to have lots of code to handle different sizes of vector registers. I don't know how our auto-vectorization will stack up, but there is only one way to find out.
> For the sake of "Julia is built on Julia", it would be nice if libm was coded in Julia.
In that spirit, I've created:
https://github.com/JuliaMath/Libm.jl
Contributions welcome.
Once we're happy with it, we can look at merging it into Base.
sleef doesn't appear to have a license.
Sleef is in the public domain: "The software is in public domain. You can use the software without any obligation." https://github.com/shibatch/sleef.
Some more info on SLEEF:
A month or so ago, there was talk of adding SLEEF to LLVM:
http://lists.llvm.org/pipermail/llvm-dev/2016-July/102254.html
If that were done, then I guess we would just use that?
Though right now we don't use too many of the existing LLVM math functions, see #12830.
(If I understand things rightly, which I may not.)
Edit: Oh, I see this was mentioned in the OP. That'll teach me to not read closely.
In any case, the question remains: if SLEEF were put into LLVM, would that, in and of itself, decide this for us?
@ViralBShah: how much effort is maintaining openlibm these days? My impression is that it's not all that much work. What was the motivation for opening this issue?
openlibm is years out-of-date w.r.t. bugfixes and improvements in the upstream sources: https://github.com/JuliaLang/openlibm/issues/101, https://github.com/JuliaLang/openlibm/pull/118
Also, upstream and our fork have diverged a fair bit. While this work is doable, it is tedious, and I have tried and given up on two occasions.
I have ported some functions from musl (see the erf/erfc port at https://github.com/JuliaMath/Libm.jl/pull/3), and I also have a pending (WIP) exp port. Based on my tests, these ports are essentially within testing noise in terms of speed (some cases faster, some slower): erf is just as fast if not faster, while erfc is a little more variable but is usually not more than 1.5 times slower than the versions in Base.
My question: is this exercise (Julia ports of libm math functions) worth pursuing to its fullest? That is, is there a possibility that these will be used in Base in the near future, or is this just an academic exercise?
Looking at the musl logs, the math functions get bug fixes ever so rarely (maybe once every 6 months).
My preference is that we should seriously look into porting something like SLEEF. The approach there should yield code that lends itself to auto-vectorization.
I personally would push hard for a pure julia libm, as it will help push the compiler, and also give us code that many can hack on.
There's the potential in the future to compile a pure Julia libm to .so form that can be called from C and other languages. In that case, Julia could just be considered a very terse code generator.
> the math functions get bug fixes ever so rarely (maybe once every 6 months).
Even then, we should make sure we have a mechanism for regularly flagging and incorporating that kind of change, until Stefan's scenario becomes a reality and Julia-defined versions of these become the upstream for other users.
While we need a better way to sync with whatever upstream we end up using, the bigger issue is lack of sufficient tests for most libm implementations (except perhaps the stuff crlibm built). I really wish we had adequate easily accessible testing for each of these functions and all the corner cases.
Fortunately, that's significantly easier to build out in Julia than in C – we have access to high precision libraries (BigFloat, DoubleDouble, ArbFloat) and things like ValidatedNumerics.
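For instance, a rough sketch of a ulp-measuring harness against a BigFloat reference (`countulp` and `maxulp` are hypothetical helper names for this example, not an existing API):

```julia
# Measure the error of a Float64 function in ulps, using BigFloat
# (256-bit by default) as the reference implementation.
function countulp(f, ref, x::Float64)
    want = ref(BigFloat(x))              # high-precision reference value
    got  = BigFloat(f(x))
    abs(got - want) / eps(Float64(want)) # error in ulps of the true result
end

# Worst case over a set of sample points.
maxulp(f, ref, xs) = maximum(countulp(f, ref, x) for x in xs)
```

For example, `maxulp(exp, exp, range(-10, 10, length=10_000))` checks Base's exp against its BigFloat counterpart over a sample range; corner cases (subnormals, near-overflow, branch boundaries) can be added as explicit sample points.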
OK, I'll continue to port more functions (and I'll try SLEEF), since it looks like there is agreement that a pure Julia math library is a profitable path forward.
Is source-to-source conversion a viable option in the near future, such that what I'm doing now is wasted effort? To be honest, I'm not familiar with how long such infrastructure would take to build.
I don't think an automatic translator is likely to be forthcoming in the immediate future (though I would love to be proven wrong), so I don't see any problem with manual translations. - simonbyrne
I think it is non-trivial to get good source-to-source conversion, and the benefits of being Julian will be lost. I think translation + tests + keeping an eye on the upstream is perhaps the right way to go.
It's worth pointing out that there is a bunch of discussion at Libm.jl:
https://github.com/JuliaMath/Libm.jl/issues
Best to leave this issue open, only to close it at the right time when we have a real replacement. In the meanwhile, let's redirect discussion to Libm.jl. Otherwise, we are repeating the same things in both places.
So I bound SLEEF via ccall.
This is what it looks like: https://github.com/oxinabox/SLEEF.jl
Happy to take issues and PRs over there.
(this is complementary to Libm.jl's pure julia SLEEF)
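For reference, the shape of such a binding is just a thin ccall wrapper. The sketch below calls the system `sin` symbol (already loaded in the Julia process) rather than a SLEEF symbol, since SLEEF's exported names vary by version:

```julia
# A thin ccall wrapper around a C math routine. A SLEEF binding looks the
# same, but against libsleef and its (version-dependent) exported names.
c_sin(x::Float64) = ccall(:sin, Cdouble, (Cdouble,), x)
```

The downside relative to a pure-Julia port is that each call is opaque to the compiler: no inlining, no constant folding, and no auto-vectorization across the call.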
It seems like we've decided to go ahead with the Julia libm project. Should we close this now?
Now that #20427 is merged, which (almost?) removes openspecfun.so, this seems pertinent.
I've partially lost track of where things are at.
My rough understanding was that work started with https://github.com/JuliaMath/Libm.jl/
Some porting of a few functions from various libms was done, and that repo today is still in that state (after some fiddly rebasing and cherry-picking).
Then @musm basically solo'd a complete rewrite of SLEEF in pure Julia,
and it was much more performant than my ccall'd wrapper, even though it has no handcoded SIMD :-D
And that is currently in musm/sleef.jl
(It might be missing Float32 ops?)
@musm was unhappy with the accuracy of SLEEF,
so started again with a new library musm/AMAL.jl
Which is not yet complete?
So now it should be roughly possible to swap out openlibm.so for musm/sleef.jl, and in the future for the better musm/AMAL.jl, after some boilerplate to link it all in?
Have I got that right @musm , @simonbyrne ?
I would also be interested in a summary of what's going on with various libm-like efforts.
Brief conclusions:
I started by porting SLEEF, which took some time. I discovered bugs and accuracy problems during the process and reported them upstream (they only recently got fixed in a newer version of SLEEF after pinging the author; Sleef.jl has not been updated with these fixes... yet).
The problem was that SLEEF is just about as accurate as msun, but not as fast on some important functions. SLEEF uses double-double arithmetic tricks to try and squeeze out extra precision, which is fine but sacrifices performance.
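For context, the double-double tricks in question build extended precision out of error-free transformations like the classic two_sum/two_prod below (standard Dekker/Knuth constructions, sketched here in Julia). Each is exact, but each costs several extra flops, which is the performance tax being described:

```julia
# Error-free sum: hi + lo == a + b exactly, with hi == Float64(a + b).
function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

# Error-free product: hi + lo == a * b exactly, via fused multiply-add.
function two_prod(a::Float64, b::Float64)
    hi = a * b
    lo = fma(a, b, -hi)
    return hi, lo
end
```

Chaining these through a polynomial evaluation keeps the error below 1 ulp, but roughly doubles or triples the flop count versus a plain double-precision evaluation.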
Then I started working on Amal. I spent a ton of time trying to develop better functions (not gonna mention how, but the process is very tedious in general). While it's possible to develop fast functions, it's really hard to develop functions that are both fast and accurate (< 1 ulp). My conclusion is that the Amal algorithms should be inspired by the msun versions.
I didn't see any real benefit in terms of SIMD auto-vectorization for SLEEF vs. the msun library on scalar arguments (which also benefit from auto-vectorization). Even on vector arguments there is no benefit unless one hand-tunes the SLEEF code to maximize performance with registers and all that, which is not done in SLEEF anyway.
The only real case I see where SLEEF would actually be better, I believe, is GPU code. In other cases the SLEEF code is actually really bad; for example, the handling of exceptions really makes no sense in SLEEF (e.g. it handles out-of-domain errors after computing a wrong result... well, it's done to reduce branching, but branching is not that expensive on CPUs).
Sleef.jl got split from Libm.jl since it's a pure SLEEF port
Amal.jl also got split off, since it represented a new, from-scratch repo with ideas on what the algorithms and motivating principles should be. I don't know how Intel and AMD do their math libraries, since they're not open source.
I can submit a PR for an updated exp10 function if people want (I think it's something like 100 times faster); it blows the current version, which just calls the pow function, out of the water, but is slightly less accurate (~1.15 ulps on my test range).
A pure Julia exp10 sounds like a good start (since at the moment we just call 10^x, which can be quite slow).
+1 to getting a faster and slightly less accurate exp10.
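Not the actual algorithm from the PR, but the standard shape of such a speedup, sketched under that assumption: replace the general-purpose pow call with a single multiply plus an exp call, accepting roughly an extra ulp of error from the rounded product x * log(10):

```julia
# Fast 10^x as exp(x * log(10)): one multiply plus an exp call instead of
# a general pow. The rounded constant and product cost about an extra ulp.
fast_exp10(x::Float64) = exp(x * 2.302585092994046)  # constant is Float64(log(10))
```

`fast_exp10(2.0)` lands within a few ulps of 100.0, whereas `10.0^2.0` goes through the full pow machinery; a production version would split log(10) into hi/lo parts to claw back most of that lost accuracy.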
cross-ref: #21445 ("Add pure julia exp10 function").
The solution to this is the pure Julia libm which is mostly done, and the work is now being tracked in #26434.