I'm finding the fastmath implementation of log() to actually be slower than the non-fastmath version.
julia> using BenchmarkTools
julia> lottalogs(x) = log.(x)
julia> lottalogsfast(x) = @fastmath log.(x)
julia> zz = rand(100000);
julia> @btime lottalogs(zz);
497.300 μs (2 allocations: 781.33 KiB)
julia> @btime lottalogsfast(zz);
778.857 μs (2 allocations: 781.33 KiB)
This was done on Julia 1.4.2 on an AMD 3900X in Fedora Linux.
Same with cbrt for me. The trend seems to be that the functions we have native Julia implementations of are faster than the @fastmath ones that just call into libm.
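For reference, a minimal sketch of the cbrt comparison (the helper names here are made up for illustration, zz is the same array as above, and the timings will of course vary by machine):

julia> lottacbrts(x) = cbrt.(x)
julia> lottacbrtsfast(x) = @fastmath cbrt.(x)
julia> @btime lottacbrts(zz);
julia> @btime lottacbrtsfast(zz);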
Ref https://github.com/JuliaLang/julia/issues/26434 and https://github.com/JuliaLang/julia/pull/24031. The PRs that change the non-fastmath implementations never change the fastmath ones. And I was pretty sure there was a specific issue for this, but maybe not...
Can we just get rid of @fastmath?
(but not @fastmath)
This is a separate problem, but the non-fastmath version of cbrt can cause performance issues with precompilation (#35972). Please be aware of that issue when benchmarking.
I agree with @simonbyrne's suggestions. I have thought about this before and am glad it has been brought up.
Not sure about that in general. @fastmath abs(z) on a complex number is still significantly faster, though maybe that could just be improved on the Julia side.
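A quick sketch to reproduce that comparison (invented names; presumably the speedup comes from the fastmath abs skipping hypot's overflow-safe scaling and computing sqrt(x*x + y*y) directly):

julia> ws = rand(ComplexF64, 100000);
julia> lottaabs(z) = abs.(z)
julia> lottaabsfast(z) = @fastmath abs.(z)
julia> @btime lottaabs(ws);
julia> @btime lottaabsfast(ws);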
> And I was pretty sure there was a specific issue for this, but maybe not...
I think it's just been discussed many times on all the PRs for libm features, but I also thought there was an issue.
For log specifically, that would make sense: the fastmath version calls into openlibm, while the regular log uses the native table-based method.
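One way to check what each spelling actually dispatches to from the REPL (@which comes from InteractiveUtils, which the REPL loads by default):

julia> Meta.@lower @fastmath log(1.0)      # shows the rewrite to Base.FastMath.log_fast
julia> @which log(1.0)                     # the native Julia implementation in Base
julia> @which Base.FastMath.log_fast(1.0)  # the fastmath method (on 1.4, a ccall into libm)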
Getting rid of @fastmath might be good, not because of this issue, but because it's too loosely defined. People use it for all sorts of different reasons (all of which make code run "fast"). But before doing/evaluating that, the replacement should go in first, e.g. https://github.com/JuliaLang/julia/pull/31862. From the way people would like to use these, this is certainly not trivial, and it'll likely always require the authors of the math functions to do the right thing.
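As one example of those different reasons, a common legitimate use is allowing reassociation so a plain reduction can SIMD-vectorize (a sketch, not taken from that PR):

function mysum(xs)
    s = zero(eltype(xs))
    for x in xs
        # rewritten to Base.FastMath.add_fast(s, x), which lets the
        # compiler reassociate the additions and vectorize the loop
        @fastmath s += x
    end
    return s
end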
Is it time for taking fastmath seriously?
There are other issues: @fastmath operates syntactically, which can cause unexpected results (#26828); --math-mode=fast doesn't do the syntactic rewriting and instead just enables the compiler optimizations, which leads to other bizarre behavior (#30073). Both are giant footguns.
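A tiny illustration of the syntactic scope (invented names): only calls that appear literally inside the macro's argument get rewritten, not the code they call into, so whether an operation is "fast" depends on where the source text sits rather than on the runtime call graph:

julia> inner(x) = x + 1.0                    # this + is never rewritten
julia> outer(x) = @fastmath inner(x) * 2.0   # only this * becomes mul_fast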
Where "taking fastmath seriously" means "deleting fastmath" 😁