Hello Julia devs and community,
I just wanted to share a very simple case where the `factorize` function of `LinearAlgebra` fails. Here I try to factorize a full-rank 4x4 matrix of `Float64`s. The `factorize` method tries to construct an `LDLt` factorization, but as you can see, there are many `NaN` entries in the factorization, and actually using the factorized matrix results in a matrix of `NaN`.
```julia
using LinearAlgebra

A = [ 0.0 -1.0  0.0  0.0
     -1.0  0.0  0.0  0.0
      0.0  0.0  0.0 -1.0
      0.0  0.0 -1.0  0.0]

Afac = factorize(A)  # produces an LDLt with many NaN entries
Afac \ A             # results in a 4x4 matrix of NaN
A \ A                # backslash gives the correct result
```
Actually I just realized this matrix happens to have the special property that it is its own inverse. Perhaps this is related to the issue somehow?
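(Quick sanity check of that property:)

```julia
julia> A * A == I   # A is an involution, i.e. its own inverse
true
```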
Strange behaviour, but `factorize(Symmetric(A))` works.
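I'd guess that's because `factorize` on a `Symmetric` wrapper dispatches to `bunchkaufman`, which does choose between 1x1 and 2x2 pivots:

```julia
using LinearAlgebra

A = [ 0.0 -1.0  0.0  0.0
     -1.0  0.0  0.0  0.0
      0.0  0.0  0.0 -1.0
      0.0  0.0 -1.0  0.0]

F = factorize(Symmetric(A))  # returns a BunchKaufman factorization
F \ A                        # ≈ I, the correct result (A is its own inverse)
```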
And with the matrix

```julia
julia> B = [ 0.0 -1.0  0.0       0.0
            -1.0  0.0  0.0       0.0
             0.0  0.0  0.0      -1.0
             0.0  0.0 -1.000001  0.0]
```

`factorize(B)` works (presumably because the perturbation makes `B` asymmetric, so `factorize` no longer takes the symmetric-tridiagonal `ldlt` path).
This function is wrong for symmetric tridiagonal matrices with any zero diagonal entry, like yours. It is dividing by zero, and the rest follows. It should be replaced with a stable algorithm.
Edit: the literature seems to indicate that determining a pivot size is needed (with Bunch's algorithm or otherwise). The implemented algorithm ignores that step, effectively always choosing a pivot size of 1. For the OP's case, it should be 2. Some NLA guru should take over from here :D
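To make the missing step concrete, here is a minimal sketch of a blocked LDLᵀ with a Bunch-style pivot-size test. The name `block_ldlt`, the dense working copy, and the dense return values are mine for readability, so this illustrates the idea rather than being a drop-in replacement:

```julia
using LinearAlgebra

# Sketch: LDLᵀ of a SymTridiagonal with 1x1/2x2 pivot selection.
# Returns a unit lower triangular L and a block-diagonal D (dense here
# for clarity) such that L * D * L' ≈ T.
function block_ldlt(T::SymTridiagonal)
    n = size(T, 1)
    A = Matrix{Float64}(T)         # dense working copy
    L = Matrix{Float64}(I, n, n)   # unit lower triangular factor
    D = zeros(n, n)                # block diagonal with 1x1 / 2x2 blocks
    σ = maximum(abs, A)            # scale used in the pivot test
    κ = (sqrt(5) - 1) / 2          # Bunch's constant, ≈ 0.62
    i = 1
    while i <= n
        if i == n || σ * abs(A[i, i]) >= κ * A[i+1, i]^2
            # 1x1 pivot: the ordinary LDLᵀ step
            D[i, i] = A[i, i]
            if i < n
                L[i+1, i] = A[i+1, i] / A[i, i]
                A[i+1, i+1] -= L[i+1, i] * A[i+1, i]
            end
            i += 1
        else
            # 2x2 pivot: the test above guarantees this block is safely
            # invertible, even when A[i, i] == 0 (the OP's case)
            P = A[i:i+1, i:i+1]
            D[i:i+1, i:i+1] = P
            if i + 1 < n
                # tridiagonal structure: only row i+2 touches this block
                a = [A[i+2, i], A[i+2, i+1]]
                l = P \ a              # P is symmetric, so inv(P)' == inv(P)
                L[i+2, i:i+1] = l
                A[i+2, i+2] -= l[1] * a[1] + l[2] * a[2]
            end
            i += 2
        end
    end
    return L, D
end
```

On the OP's matrix this takes two 2x2 pivots, giving `L == I` and `D` equal to the two 2x2 blocks of `A`, with no division by zero anywhere.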
Thank you for your replies. I may be using `factorize` somewhat naively; in my application, all I know ahead of time is that I will be solving several linear equations of the form `Ax = b` for full-rank, square `A`. I was under the impression that `factorize` would choose an appropriate factorization for the matrix `A`... it was only by happenstance that in one iteration of my task `A` had this special symmetric and tridiagonal structure. You can imagine my confusion when the calculation was working until suddenly it didn't!
I guess we need a more fine-grained test in `factorize` to make sure we call an appropriate factorization. It seems impossible, however, to tell a priori whether `ldlt` will succeed (EDIT: for the pivot-1 case). Maybe the simplest thing to do is to have `ldlt` throw an error, and to put the `ldlt` branch in `factorize` in a try/catch block, with `bunchkaufman` as the fallback?
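A rough sketch of that fallback (the name `safe_factorize` is hypothetical, and since `ldlt` currently returns `Inf`/`NaN` instead of throwing, a finiteness check on the factors stands in for the proposed error):

```julia
using LinearAlgebra

# Hypothetical fallback, not current stdlib behavior: try the fast
# tridiagonal ldlt; if a zero pivot poisoned the factors, redo the work
# with dense Bunch-Kaufman, which can choose 2x2 pivots.
function safe_factorize(S::SymTridiagonal)
    F = ldlt(S)
    # F.data is the modified SymTridiagonal holding D (dv) and L (ev);
    # if ldlt threw a proper exception, this check would be a try/catch.
    if any(!isfinite, F.data.dv) || any(!isfinite, F.data.ev)
        return bunchkaufman(Symmetric(Matrix(S)))
    end
    return F
end

S = SymTridiagonal(zeros(4), [-1.0, 0.0, -1.0])  # the OP's A
safe_factorize(S) \ Matrix(S)                     # ≈ I, no NaNs
```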
The LDL factorization for symmetric tridiagonal matrices IS numerically stable with the pivoting size check.
I'm not talking about the algorithm, but about the specific implementation. For instance, currently, for a `SymTridiagonal` we wrap an `LDLt` object around the modified `SymTridiagonal` (which is constructed in `factorize`). If you need diagonal blocks that are sometimes 1x1 and sometimes 2x2, then what type is that? Probably you would need a `Tridiagonal`, besides the `UnitLowerTriangular`. But you can't know in advance, IIUC, which one you will need. Very interesting read, @raminammour, by the way.
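To make the type question concrete: a `Tridiagonal` can indeed hold the block-diagonal `D`, since each 2x2 pivot contributes one sub/superdiagonal entry and 1x1 pivots contribute none. For the OP's `A` the factorization is trivial to write down (both pivots are 2x2 blocks and `L` is the identity):

```julia
using LinearAlgebra

A = [ 0.0 -1.0  0.0  0.0
     -1.0  0.0  0.0  0.0
      0.0  0.0  0.0 -1.0
      0.0  0.0 -1.0  0.0]

D = Tridiagonal([-1.0, 0.0, -1.0],      # subdiagonal
                [ 0.0, 0.0, 0.0, 0.0],  # diagonal
                [-1.0, 0.0, -1.0])      # superdiagonal
L = I                                   # unit lower triangular factor
L * D * L' == A                         # true
```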
If I read the documentation of LAPACK's `pttrf` (http://www.netlib.org/lapack/explore-html/d0/d2f/group__double_p_tcomputational_gad408508a4fb3810c23125995dc83ccc1.html#gad408508a4fb3810c23125995dc83ccc1) correctly, then LAPACK returns an error if any of the (new) diagonal elements is smaller than or equal to 0. This routine, however, is listed in the category of positive definite matrices, and I couldn't find a pivot-2 method, whose existence seemed to be indicated in Higham's paper.
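That behaviour is reachable from Julia via the LAPACK wrappers; assuming I read the docs right, this should error rather than return `NaN`s, since the OP's matrix is indefinite:

```julia
using LinearAlgebra

dv = zeros(4)            # diagonal of the OP's A
ev = [-1.0, 0.0, -1.0]   # off-diagonal of the OP's A
# dpttrf requires positive definiteness, so this call is expected to
# fail (the wrapper should surface the nonzero info as an exception)
LinearAlgebra.LAPACK.pttrf!(dv, ev)
```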