Currently, when you take the diagonal of a sparse matrix, you get a dense vector:
```julia
julia> S = sprand(10, 10, 0.1)
10×10 SparseMatrixCSC{Float64,Int64} with 10 stored entries:
  [3 ,  1]  =  0.284124
  [2 ,  2]  =  0.627361
  [9 ,  5]  =  0.339561
  [5 ,  6]  =  0.848555
  [1 ,  8]  =  0.538286
  [3 ,  8]  =  0.703678
  [5 ,  8]  =  0.0960039
  [8 ,  8]  =  0.00592429
  [5 ,  9]  =  0.170894
  [1 , 10]  =  0.324079

julia> diag(S)
10-element Array{Float64,1}:
 0.0
 0.627361
 0.0
 0.0
 0.0
 0.0
 0.0
 0.00592429
 0.0
 0.0
```
This should arguably be a sparse vector since for a generic sparse matrix, the diagonal is sparse.
As usual with sparse matrices, it depends on the sparsity pattern. Sparse matrices from physical simulations typically have a banded structure, so the diagonal is dense. For sparse matrix representations of graphs, a dense diagonal is probably less likely. We could do a survey based on Tim Davis's sparse matrix collection.
Putting dense data into a `SparseVector` is a waste of memory, but only by a smallish factor, while putting sparse data into a `Vector` may be a gigantic waste of memory (although, contrary to the situation with matrices, it is less likely to exceed physical memory). So returning a sparse result would be the safer choice, wouldn't it?
I guess that argument applies to all arrays, i.e. even dense arrays. The safe choice that minimizes the worst-case relative memory waste for a dense matrix is to always produce a sparse vector. If the vector happens to have no zeros, there is a 2x waste. Returning a `Vector` when all the entries are zero is an Inf-x waste (except for the metadata). Still, we think `A[:, 1]` should be dense for `A::Matrix` because we believe the entries will typically be nonzero. So there is a trade-off that depends on what you consider a typical sparsity pattern. I don't have strong opinions, though. Just wanted to point out that this might be expensive for a large and very common class of sparse matrices.
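To put rough numbers on that trade-off, a back-of-the-envelope sketch (assuming `Float64` values and `Int64` indices; `n` and `k` are made-up example sizes):

```julia
n = 10^6            # diagonal length
k = 10              # structural nonzeros actually on the diagonal
dense_bytes  = 8n   # Vector{Float64}: 8 bytes per entry, zeros included  (~8 MB)
sparse_bytes = 16k  # SparseVector: 8-byte value + 8-byte index per stored entry (160 bytes)
# The other direction: a fully dense diagonal stored as a SparseVector costs
# 16n bytes against 8n for a Vector -- the 2x worst case mentioned above.
```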
I think dense is a reasonable choice here. My impression is that the sparse world generally considers O(n) size small enough.
Given that our matrices are CSC, we're already storing a dense list of size O(n). The story might be different with a COO format.
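For concreteness (a REPL illustration; `S.colptr` is an internal field, and on current Julia this needs `using SparseArrays` first):

```julia
julia> S = sprand(10, 10, 0.1);

julia> length(S.colptr)  # column pointers: always n + 1 entries, however few nonzeros
11
```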
A `SparseVector` result would not be O(n), though; it only stores the structural nonzeros that land on the diagonal.
Off-diagonals obtained with the 2-arg form aren't usually expected to be dense the way the main diagonal more commonly is, so there's a pretty good argument for returning a sparse vector from `diag(A::SparseMatrixCSC, k)`. But then it would be a bit strange for `diag(A)` and `diag(A, 0)` to return different types:
```julia
julia> diag(A)
5-element Array{Float64,1}:
 0.774844
 0.0
 0.443253
 0.0
 0.0

julia> diag(A,0)
5-element SparseVector{Float64,Int64} with 2 stored entries:
  [1]  =  0.774844
  [3]  =  0.443253
```
I got hit by this inconsistency in #22925.

Would be great if a decision could be made here about which direction to go for consistency and make the different forms of `diag` match. We need a bat signal for linalg people...
The current situation is not completely unreasonable, though, since the diagonal will often be quite dense whereas many off-diagonals will be very sparse. But I can also see that it would be nice if the two methods returned the same type.
Following up on @martinholters' comment, I'd argue that the safest choice would be the one with the smallest absolute worst-case memory usage. It is not the only concern or necessarily the main concern, but I'd also argue that `diag(A)` is much more common than `diag(A, k != 0)`.
Bottom line is that we disagree. Specifying a return type seems like a general problem. Maybe a solution could be to define `spdiag`?
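For illustration, a minimal sketch of what such an opt-in function could look like (the name `spdiag` comes from the comment above; the definition is hypothetical, not an existing API):

```julia
# Hypothetical opt-in helper -- not an existing API. Always returns a sparse
# diagonal, regardless of what `diag` itself is decided to return.
spdiag(A::AbstractMatrix, k::Integer = 0) = sparsevec(diag(A, k))
```

That way callers who know their diagonal is sparse can say so explicitly, and `diag` keeps a single, predictable return type.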
Possible alternative: make `diag` always dense and make it easier to construct diagonal views, either through an indexing syntax or some other function. That would give an O(n)-memory solution with fast access and an O(1)-memory solution with (deferred) access complexity depending on the source array.
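A minimal sketch of what such a view could look like (the `DiagView` wrapper is made up here for illustration; `view(A, diagind(A, k))` already gives something similar):

```julia
using LinearAlgebra  # for diagind (in Base in the Julia version discussed here)

# Made-up lazy wrapper: O(1) storage; each access defers to the parent matrix.
struct DiagView{T,M<:AbstractMatrix{T}} <: AbstractVector{T}
    A::M
    k::Int
end

Base.size(d::DiagView) = (length(diagind(d.A, d.k)),)
Base.getindex(d::DiagView, i::Int) =
    d.k >= 0 ? d.A[i, i + d.k] : d.A[i - d.k, i]
```

`DiagView(S, 0)` then behaves like `diag(S)` without allocating the diagonal at all.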
Diagonal views would be a good idea, but indexing into one would be more expensive than indexing into an eager sparse or dense diagonal slice (i.e., a copy).
`diag(A)` should behave the same as `diag(A, 0)` – I don't care if that's sparse or dense, but it should be consistent.
@tkelman makes the argument that if we go with always sparse: when someone writes `full(diag(A))`, we could potentially eventually optimize away the creation of the sparse vector and just write entries directly to the dense output vector without changing any semantics, giving the best of both worlds. The optimization in the other direction is much harder.
Resolved: always return a sparse vector here.
Might be worth browsing through the thumbnails at https://www.cise.ufl.edu/research/sparse/matrices/list_by_id.html to get an idea of typical density patterns of diagonals of typical sparse matrices.
Yes, it's definitely true that the diagonal is often dense. Of course, if O(n) storage is OK for your data (as opposed to O(n^2)), then 2n is still OK, and that's the worst case for storing a dense diagonal sparsely. I would add that in the applications where I've used sparse matrices a lot, the diagonal is almost never dense (they're also usually not square, which may be related).
Looks like `diag(::SparseMatrixCSC, ::Integer)` is currently dispatching to the AbstractArray method here: https://github.com/JuliaLang/julia/blob/8c7f01e974de7ae779f22343086fdd400efe16f2/base/linalg/dense.jl#L279, so we might need to generalize the SpDiagIterator here: https://github.com/JuliaLang/julia/blob/8c7f01e974de7ae779f22343086fdd400efe16f2/base/sparse/sparsematrix.jl#L3385-L3402 to handle off-diagonals. I haven't looked at it very closely, but I think that shouldn't be too complicated?
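For reference, a sketch of what a direct CSC method could look like (illustrative only, not the actual patch; `spdiagk` is a made-up name, and `colptr`/`rowval`/`nzval` are implementation details of the CSC format):

```julia
using SparseArrays  # stdlib home of SparseMatrixCSC today; Base at the time of this issue

# Illustrative sketch: build diag(S, k) as a SparseVector by probing each
# diagonal entry with a binary search over the stored rows of its column.
function spdiagk(S::SparseMatrixCSC{Tv,Ti}, k::Integer = 0) where {Tv,Ti}
    m, n = size(S)
    len = k >= 0 ? min(m, n - k) : min(m + k, n)
    len >= 0 || throw(ArgumentError("requested diagonal $k is out of bounds"))
    I = Ti[]
    V = Tv[]
    for i in 1:len
        r = k >= 0 ? i : i - k          # row index of the i-th diagonal entry
        c = k >= 0 ? i + k : i          # column index of the i-th diagonal entry
        lo, hi = S.colptr[c], S.colptr[c+1] - 1
        p = searchsortedfirst(S.rowval, r, lo, hi, Base.Order.Forward)
        if p <= hi && S.rowval[p] == r  # stored (structurally nonzero) entry
            push!(I, i)
            push!(V, S.nzval[p])
        end
    end
    SparseVector(len, I, V)
end
```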
Just tested this and deleting https://github.com/JuliaLang/julia/blob/8c7f01e974de7ae779f22343086fdd400efe16f2/base/sparse/sparsematrix.jl#L3415 seems to be all that is required.
That would fix the return type, but the fallback is probably less efficient? Worth comparing.
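One way to compare (a sketch assuming BenchmarkTools; `spdiagk` is the made-up method from the sketch above, and `S[diagind(S, 1)]` is what the AbstractArray fallback boils down to):

```julia
using BenchmarkTools, SparseArrays, LinearAlgebra

S = sprand(10_000, 10_000, 1e-4);

@btime $S[diagind($S, 1)];  # generic fallback: fancy indexing over linear indices
@btime spdiagk($S, 1);      # CSC-aware sketch: one binary search per diagonal entry
```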