Octave includes a polymorphic solver for sparse matrices, where the exact solver used to factorize the matrix depends on the properties of the sparse matrix itself. Generally, the cost of determining the matrix type is small relative to the cost of factorizing the matrix itself. In any case the matrix type is cached once it is calculated, so that it is not re-determined each time it is used in a linear equation.
One step of the selection tree for how the linear equation is solved tests whether the matrix is banded: if the band density is less than spparms ("bandden") the banded solvers are used, else the tree continues with the remaining matrix types. The band density is defined as the number of non-zero values in the band divided by the total number of values in the full band. The banded matrix solvers can be entirely disabled by using spparms to set bandden to 1 (i.e., spparms ("bandden", 1)).
The QR solver factorizes the problem with a Dulmage-Mendelsohn decomposition, to separate the problem into blocks that can be treated as over-determined, multiple well-determined blocks, and a final under-determined block. For matrices with blocks of strongly connected nodes this is a big win as LU decomposition can be used for many blocks. It also significantly improves the chance of finding a solution to over-determined problems rather than just returning a vector of NaNs.
All of the solvers above can calculate an estimate of the condition number. This can be used to detect numerical stability problems in the solution and force a minimum norm solution to be used. However, for narrow banded, triangular, or diagonal matrices, the cost of calculating the condition number is significant and can in fact exceed the cost of factorizing the matrix. Therefore the condition number is not calculated in these cases, and Octave relies on simpler techniques to detect singular matrices, or on the underlying LAPACK code in the case of banded matrices.
The user can force the type of the matrix with the matrix_type function. This avoids the cost of discovering the type of the matrix. However, it should be noted that incorrectly identifying the type of the matrix will lead to unpredictable results, and so matrix_type should be used with care.
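A minimal sketch of querying and forcing the matrix type (the type strings shown are illustrative; see the matrix_type documentation for the full list):

```
a = sparse (tril (rand (5) + 5 * eye (5)));  # sparse lower triangular matrix
matrix_type (a)                 # query (and cache) the type of a
a = matrix_type (a, "lower");   # force the type, skipping the discovery step
x = a \ ones (5, 1);            # now solved directly with a triangular solver
```

Forcing "lower" here is safe because a really is lower triangular; forcing a type the matrix does not have gives unpredictable results, as noted above.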
Estimate the 2-norm of the matrix a using a power series analysis. This is typically used for large matrices, where the cost of calculating norm (a) is prohibitive and an approximation to the 2-norm is acceptable. tol is the tolerance to which the 2-norm is calculated. By default tol is 1e-6. c returns the number of iterations needed for normest to converge.
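A short sketch comparing the estimate against the exact 2-norm (the exact norm is only feasible here because the test matrix is small):

```
a = sprandn (500, 500, 0.01);   # random sparse test matrix
[n2, c] = normest (a);          # approximate 2-norm and iteration count
n2_full = norm (full (a));      # exact 2-norm, for comparison only
```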
Estimate the 1-norm condition number of a matrix A using t test vectors and a randomized 1-norm estimator. If t exceeds 5, then only 5 test vectors are used.
If the matrix is not explicit, e.g., when estimating the condition number of a matrix given an LU factorization, condest uses the following functions:

- apply: A*x for a matrix x of size n by t.
- apply_t: A'*x for a matrix x of size n by t.
- solve: A \ b for a matrix b of size n by t.
- solve_t: A' \ b for a matrix b of size n by t.

The implicit version requires an explicit dimension n.

condest uses a randomized algorithm to approximate the 1-norms.

condest returns the 1-norm condition estimate est and a vector v satisfying norm (A*v, 1) == norm (A, 1) * norm (v, 1) / est. When est is large, v is an approximate null vector.

References:
- Nicholas J. Higham and Françoise Tisseur, "A Block Algorithm for Matrix 1-Norm Estimation, with an Application to 1-Norm Pseudospectra." SIMAX vol 21, no 4, pp 1185-1201. http://dx.doi.org/10.1137/S0895479899356080
See also: norm, cond, onenormest.
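A brief sketch of the explicit form; since the estimator is randomized, the estimate typically lower-bounds the exact 1-norm condition number:

```
a = speye (100) + sprandn (100, 100, 0.05);  # well-conditioned sparse matrix
[est, v] = condest (a);        # 1-norm condition estimate and test vector
c1 = cond (full (a), 1);       # exact 1-norm condition number, for comparison
```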
Sets or displays the parameters used by the sparse solvers and factorization functions. The first four calls above get information about the current settings, while the others change the current settings. The parameters are stored as pairs of keys and values, where the values are all floats and the keys are one of the strings
- spumoni Printing level of debugging information of the solvers (default 0)
- ths_rel Included for compatibility. Not used. (default 1)
- ths_abs Included for compatibility. Not used. (default 1)
- exact_d Included for compatibility. Not used. (default 0)
- supernd Included for compatibility. Not used. (default 3)
- rreduce Included for compatibility. Not used. (default 3)
- wh_frac Included for compatibility. Not used. (default 0.5)
- autommd Flag whether the LU/QR and the '\' and '/' operators will automatically use the sparsity preserving mmd functions (default 1)
- autoamd Flag whether the LU and the '\' and '/' operators will automatically use the sparsity preserving amd functions (default 1)
- piv_tol The pivot tolerance of the UMFPACK solvers (default 0.1)
- sym_tol The pivot tolerance of the UMFPACK symmetric solvers (default 0.001)
- bandden The density of non-zero elements in a banded matrix before it is treated by the LAPACK banded solvers (default 0.5)
- umfpack Flag whether the UMFPACK or mmd solvers are used for the LU, '\' and '/' operations (default 1)
The value of individual keys can be set with spparms (key, val). The default values can be restored with the special keyword 'defaults'. The special keyword 'tight' can be used to set the mmd solvers to attempt a sparser solution at the potential cost of longer running time.
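A minimal sketch of querying and setting individual keys:

```
bd = spparms ("bandden");   # query a single parameter value
spparms ("spumoni", 1);     # turn on solver diagnostic output
spparms ("bandden", 1);     # disable the banded solvers entirely
spparms ("defaults");       # restore all default values
```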
Calculates the structural rank of a sparse matrix s. Note that only the structure of the matrix is used in this calculation, based on a Dulmage-Mendelsohn permutation to block triangular form. As such the numerical rank of the matrix s is bounded by sprank (s) >= rank (s). Ignoring floating point errors sprank (s) == rank (s).

See also: dmperm.
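A small example where the bound sprank (s) >= rank (s) is strict, because numerical cancellation is invisible to the structural analysis:

```
s = sparse ([1, 1; 1, 1]);  # structurally full, numerically rank deficient
sprank (s)                  # structural rank: 2
rank (full (s))             # numerical rank: 1
```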
Performs a symbolic factorization analysis on the sparse matrix s. Where

- s: s is a complex or real sparse matrix.
- typ: Is the type of the factorization and can be one of
  - sym: Factorize s. This is the default.
  - col: Factorize s' * s.
  - row: Factorize s * s'.
  - lo: Factorize s'.
- mode: The default is to return the Cholesky factorization for r, and if mode is 'L', the conjugate transpose of the Cholesky factorization is returned. The conjugate transpose version is faster and uses less memory, but returns the same values for the count, h, parent and post outputs.
The output variables are
- count: The row counts of the Cholesky factorization as determined by typ.
- h: The height of the elimination tree.
- parent: The elimination tree itself.
- post: A sparse boolean matrix whose structure is that of the Cholesky factorization as determined by typ.
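A brief sketch of the default symmetric analysis (the test matrix here is an arbitrary symmetric positive definite example):

```
s = sparse (toeplitz ([4, -1, 0, 0, -1]));  # symmetric positive definite matrix
[count, h, parent, post] = symbfact (s);    # default typ is "sym"
nnz_chol = sum (count);   # predicted non-zero count of the Cholesky factor
```

This predicts the fill-in of chol (s) without performing any numerical factorization.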
For non-square matrices, the user can also utilize the spaugment function to find a least squares solution to a linear equation.
Creates the augmented matrix of a. This is given by

    [c * eye(m, m), a; a', zeros(n, n)]

This is related to the least squares solution of a \ b, by

    s * [ r / c; x ] = [ b; zeros(n, columns(b)) ]

where r is the residual error

    r = b - a * x

As the matrix s is symmetric indefinite it can be factorized with lu, and the minimum norm solution can therefore be found without the need for a qr factorization. As the residual error will be zeros (m, columns (b)) for underdetermined problems, an example can be

    m = 11; n = 10; mn = max (m, n);
    a = spdiags ([ones(mn,1), 10*ones(mn,1), -ones(mn,1)], [-1, 0, 1], m, n);
    x0 = a \ ones (m, 1);
    s = spaugment (a);
    [L, U, P, Q] = lu (s);
    x1 = Q * (U \ (L \ (P * [ones(m,1); zeros(n,1)])));
    x1 = x1(end - n + 1 : end);

To find the solution of an overdetermined problem needs an estimate of the residual error r, and so it is more complex to formulate a minimum norm solution using the spaugment function.

In general the left division operator is more stable and faster than using the spaugment function.
[1] The CHOLMOD, UMFPACK and CXSPARSE packages were written by Tim Davis and are available at http://www.cise.ufl.edu/research/sparse/