Julia: Reduction on array view is slow

Created on 23 Aug 2018 · 5 comments · Source: JuliaLang/julia

julia> using BenchmarkTools; const X = rand(1000, 1000);

julia> sum(X); @benchmark sum(X)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     355.773 μs (0.00% GC)
  median time:      406.994 μs (0.00% GC)
  mean time:        453.882 μs (0.00% GC)
  maximum time:     2.171 ms (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> sum(@view X[1:999, 1:999]); @benchmark sum(@view X[1:999, 1:999])
BenchmarkTools.Trial: 
  memory estimate:  64 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.782 ms (0.00% GC)
  median time:      1.795 ms (0.00% GC)
  mean time:        1.815 ms (0.00% GC)
  maximum time:     2.616 ms (0.00% GC)
  --------------
  samples:          2750
  evals/sample:     1

julia> versioninfo()
Julia Version 1.0.0
Commit 5d4eaca0c9 (2018-08-08 20:58 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.0 (ORCJIT, skylake)

Compared to Python:

In [1]: import numpy as np; X = np.random.random([1000, 1000])

In [2]: %timeit np.sum(X)
377 µs ± 5.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [3]: %timeit np.sum(X[:999, :999])
752 µs ± 21.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

This issue holds for prod() as well.
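
The prod() case reproduces the same way (timings elided):

julia> prod(@view X[1:999, 1:999]); @benchmark prod(@view X[1:999, 1:999])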

Labels: arrays, performance

All 5 comments

Oh, that's interesting. Nice find, and thanks for the report. I believe the difference here is that NumPy pre-computes the derived strides, whereas we compute them dynamically; this general structure is what allows us to support all classes of indexing within view. That said, it might be possible for us to eke out a bit more performance in specific cases like this one (a sketch of that idea follows below).

(Edit: thanks, prof :) )

(s/eek/eke/)
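
To make the "dynamically computed" part concrete, here is a hedged sketch of the NumPy-style approach for this particular view (a hypothetical helper that handles only a two-range SubArray, not a general fix): read the stored ranges out of the view once and loop over the parent directly, so no per-element index translation happens inside the hot loop:

function sum_via_parent(v::SubArray{T,2}) where {T}
    P = parent(v)              # the underlying Array
    ir, jr = parentindices(v)  # the ranges the view stores, e.g. 1:999, 1:999
    s = zero(T)
    for j in jr
        @simd for i in ir
            @inbounds s += P[i, j]   # plain Array indexing on the fast path
        end
    end
    return s
end

A quick sanity check would be sum_via_parent(@view X[1:999, 1:999]) ≈ sum(@view X[1:999, 1:999]).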

One more interesting result:

julia> function mysum(X)
           s = zero(eltype(X))
           @simd for i = eachindex(X)
               @inbounds s += X[i]
           end
           s
       end
mysum (generic function with 1 method)

julia> mysum(@view X[1:999,1:999]) ≈ sum(@view X[1:999,1:999])
true

julia> @benchmark mysum(@view X[1:999,1:999])
BenchmarkTools.Trial: 
  memory estimate:  64 bytes
  allocs estimate:  1
  --------------
  minimum time:     358.658 μs (0.00% GC)
  median time:      397.377 μs (0.00% GC)
  mean time:        426.762 μs (0.00% GC)
  maximum time:     1.681 ms (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> function mysum_nosimd(X)
           s = zero(eltype(X))
           for i = eachindex(X)
               @inbounds s += X[i]
           end
           s
       end
mysum_nosimd (generic function with 1 method)

julia> @benchmark mysum_nosimd(@view X[1:999, 1:999])
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     1.531 ms (0.00% GC)
  median time:      1.543 ms (0.00% GC)
  mean time:        1.577 ms (0.00% GC)
  maximum time:     3.486 ms (0.00% GC)
  --------------
  samples:          3163
  evals/sample:     1

I dug into how sum() works and found that sum() calls mapreduce(), which then becomes something like Base._mapreduce_dim(identity, +, NamedTuple(), view(X, 1:999, 1:999), :). If I had a debugger I'd be happy to keep digging.
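
Without a full debugger, the reflection macros from InteractiveUtils (loaded by default in the REPL) are handy for this kind of spelunking; for example (output elided):

julia> v = @view X[1:999, 1:999];

julia> @which sum(v)    # print which sum method dispatch selects

julia> @edit sum(v)     # open that method's source in your editor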

I was just doing the same — but with the new Rebugger.jl. I highly recommend it.

We basically have two different implementations of mapreduce: a highly optimized one for arrays that support fast linear indexing, and a fallback for all other iterables. We could add one or two intermediate optimizations between those two bookends:

  • An implementation for AbstractArray that assumes indexability; a rough sketch of the idea follows this list.
  • And potentially an implementation for StridedArrays. I don't think there'd be much extra to gain there, but it could be worth examining.
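
For the first bullet, a minimal sketch of the idea (hypothetical name, and it omits the empty-collection handling and the pairwise reassociation a real Base implementation would need): reduce over eachindex(A), so that an IndexCartesian array like this SubArray is accessed through its native indices rather than the generic iteration protocol:

function mapreduce_indexed(f, op, A::AbstractArray)
    inds = eachindex(A)       # native indices: Cartesian for this view
    s = f(A[first(inds)])     # seed with the first element (assumes nonempty A)
    @inbounds for I in Iterators.drop(inds, 1)
        s = op(s, f(A[I]))
    end
    return s
end

Then mapreduce_indexed(identity, +, @view X[1:999, 1:999]) should agree with sum on the view.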