There's been some discussion by @ChrisRackauckas on incorporating BandedMatrices.jl into StdLib so more operations can take advantage of such structures, see discussion in
https://discourse.julialang.org/t/sparse-solve-vs-bandedmatrix-time-and-allocation-surprise/45119/22
https://github.com/JuliaMatrices/BandedMatrices.jl/issues/197#issuecomment-680046851
This issue is to open up the discussion on what this would look like. Some options, from most lightweight to most involved:

1. Add `colsupport(A,j)` and `rowsupport(A,k)` to LinearAlgebra.jl to give the non-zero rows/columns of a matrix. For example:

```julia
julia> A = brand(7,6,2,1)
7×6 BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
 0.984773  0.571096   ⋅          ⋅         ⋅          ⋅
 0.957023  0.262126  0.790449    ⋅         ⋅          ⋅
 0.236589  0.65944   0.90871    0.15923    ⋅          ⋅
  ⋅        0.697749  0.0152713  0.504984  0.987989    ⋅
  ⋅         ⋅        0.775639   0.65915   0.12292    0.953143
  ⋅         ⋅         ⋅         0.429865  0.178221   0.282828
  ⋅         ⋅         ⋅          ⋅        0.0896528  0.553253

julia> colsupport(A,4) # non-zero entries in 4th column
3:6

julia> rowsupport(A, 4:5) # non-zero entries in 4th through 5th rows
2:6
```
This can be immediately incorporated into the generic linear algebra algorithms to give optimal-complexity algorithms for banded matrices (in fact this is already done in ArrayLayouts.jl; see for example `default_blasmul!`), and it has the nice benefit of supporting InfiniteArrays.jl.
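To make the idea concrete, here is a minimal sketch of how a generic algorithm can exploit such an interface. The fallback definition and the `Tridiagonal` specialization below are hypothetical illustrations of the proposal, not the actual ArrayLayouts.jl code:

```julia
using LinearAlgebra

# Hypothetical sketch of the proposed interface: a fallback that assumes
# every column is dense, plus a specialization for Tridiagonal.
colsupport(A::AbstractMatrix, j) = axes(A, 1)
colsupport(A::Tridiagonal, j) = max(1, j - 1):min(size(A, 1), j + 1)

# Generic matrix-vector multiply that only touches the support of each
# column: O(n) for a Tridiagonal instead of O(n^2), with no banded-specific
# code in the loop itself.
function banded_mul(A::AbstractMatrix, x::AbstractVector)
    y = zeros(promote_type(eltype(A), eltype(x)), size(A, 1))
    for j in axes(A, 2), k in colsupport(A, j)
        y[k] += A[k, j] * x[j]
    end
    return y
end

A = Tridiagonal([1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0])
banded_mul(A, ones(3)) ≈ A * ones(3)  # true
```

The point is that the kernel `banded_mul` itself is structure-agnostic; only the support query is specialized per type.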
2. Add `AbstractBandedMatrix` and its interface (`bandwidths`, `band`, ...), which `Diagonal`, `Tridiagonal`, and `Bidiagonal` can then implement.
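For instance, the existing special matrix types could opt in with one-line definitions. This is only a sketch of what a hypothetical `bandwidths` method might look like, not an agreed-on API:

```julia
using LinearAlgebra

# Sketch of a hypothetical `bandwidths(A) -> (lower, upper)` interface.
bandwidths(::Diagonal) = (0, 0)
bandwidths(::Tridiagonal) = (1, 1)
bandwidths(B::Bidiagonal) = B.uplo == 'U' ? (0, 1) : (1, 0)
# Dense fallback: every entry may be non-zero.
bandwidths(A::AbstractMatrix) = (size(A, 1) - 1, size(A, 2) - 1)

bandwidths(Diagonal([1, 2, 3]))                # (0, 0)
bandwidths(Bidiagonal([1, 2, 3], [4, 5], :U))  # (0, 1)
```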
3. Add `BandedMatrix` and companions (`Symmetric{<:Any,<:BandedMatrix}`, `Adjoint{<:Any,<:BandedMatrix}`, `BandedLU`, etc.) for BLAS banded matrix support.

1 or 2 should be pretty easy to do. 3 has some issues: it needs `gbmm!`, which implements banded * banded multiplication (which annoyingly is not in BLAS or LAPACK).

CC @ctkelley
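For context on what `gbmm!` has to do, here is a rough sketch (not the BandedMatrices.jl implementation) of banded * banded multiplication: if `A` has bandwidths `(lA, uA)` and `B` has `(lB, uB)`, the product has bandwidths `(lA + lB, uA + uB)`, and restricting all three loops to the bands gives O(n) work for fixed bandwidths. Dense storage is used only to keep the sketch self-contained:

```julia
# Sketch of banded * banded multiplication with explicit bandwidths.
# A is m×p with bandwidths (lA, uA); B is p×n with bandwidths (lB, uB).
function banded_matmul(A, lA, uA, B, lB, uB)
    m, p, n = size(A, 1), size(A, 2), size(B, 2)
    C = zeros(m, n)
    # C[k,j] can only be non-zero for j - uA - uB <= k <= j + lA + lB.
    for j in 1:n, k in max(1, j - uA - uB):min(m, j + lA + lB)
        # Only sum over ν where both A[k,ν] and B[ν,j] can be non-zero.
        for ν in max(1, k - lA, j - uB):min(p, k + uA, j + lB)
            C[k, j] += A[k, ν] * B[ν, j]
        end
    end
    return C
end
```

This is why the operation falls outside BLAS/LAPACK: the reference libraries cover banded-times-vector (`gbmv`) and banded factorizations, but not a banded-times-banded product.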
> more operations can take advantage of such structures

Like what? If you have a banded matrix, why not just use BandedMatrices.jl?

I'll let @ChrisRackauckas answer that, since he's pushing for this.
I don't think that BandedMatrices should be redefining basic functions in Base. There are cases where Base could/should be using banded matrices as the output to give the user something more optimized than the current implementation. Things that come to mind:

- `factorize` on dense and sparse matrices currently makes a lot of specializations, like checking whether the matrix is tridiagonal, but doesn't check for and use general bandedness as a possible structure, even though there are fast LU factorizations for banded matrices. That seems to be what's going on here: https://discourse.julialang.org/t/sparse-solve-vs-bandedmatrix-time-and-allocation-surprise/45119 . You can't fix this without redefining `factorize`.
- `LU` factorization types of some of the structured matrices come out to be banded matrices, but Julia currently uses dense matrices (someone else might know this case better).

But these are the kinds of things where BandedMatrices could do a bunch of type piracy to make a few things faster, and pirating `factorize` from a package, along with some `lu` overloads to change storage, doesn't seem like a good idea.
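To illustrate the first point: a bandedness-aware `factorize` could measure the bandwidths of a dense matrix before falling through to dense LU, much like the tridiagonal/bidiagonal checks it already performs. `detect_bandwidths` below is a hypothetical helper, not an existing LinearAlgebra function:

```julia
using LinearAlgebra

# Hypothetical helper: scan a dense matrix once and report its
# (lower, upper) bandwidths, so a factorization routine could dispatch
# to a banded LU when the bandwidths are small relative to the size.
function detect_bandwidths(A::AbstractMatrix)
    l = u = 0
    for j in axes(A, 2), k in axes(A, 1)
        if !iszero(A[k, j])
            l = max(l, k - j)   # how far below the diagonal
            u = max(u, j - k)   # how far above the diagonal
        end
    end
    return (l, u)
end

detect_bandwidths([1 2 0; 3 4 5; 0 6 7])  # (1, 1)
```

The O(mn) scan is cheap compared to the O(n^3) dense factorization it could avoid, which is the same trade-off `factorize` already makes for its existing structure checks.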