In direct methods, the matrix \(A\) is factorized into a product of other matrices. Systems of equations with the factor matrices as coefficient matrices are very easy to solve. Depending on the properties of the matrix \(A\), different factorizations are used:

- For an \(n\times n\) symmetric positive definite matrix, the Cholesky factorization \(A=LL^T\) is usually computed, where \(L\) is a lower triangular (sparse) matrix. Often an \(LDL^T\) decomposition is used instead, to avoid taking square roots and to avoid problems with matrices that are nearly singular.
- For an \(n\times n\) unsymmetric matrix \(A\), its LU decomposition \(A = LU\) is computed, where \(L\) is a unit lower triangular matrix and \(U\) is an upper triangular matrix.
- For an \(n\times n\) symmetric indefinite matrix, its \(LDL^T\) decomposition \(A = LDL^T\) is computed, where \(D\) is a block diagonal matrix (with blocks of order 1 or 2) and \(L\) is a unit lower triangular matrix.
- For an \(m\times n\) rectangular matrix with \(m\ge n\), its QR decomposition \(A=QR\) is computed. Here, \(Q\) is an \(m\times m\) orthogonal matrix, and \(R\) is an \(m\times n\) upper trapezoidal matrix.

The linear systems with \(L\), \(U\), \(D\), and \(Q\), as defined above, are easy to solve. Those with \(L\) are solved by forward substitution, those with \(U\) are solved by back substitution, those with \(D\) are solved by multiplying by the explicit inverse of the \(1\times 1\) and \(2\times 2\) blocks, and those with \(Q\) are solved by multiplying the right-hand side by \(Q^T\).

The factorizations discussed above and their use in solving the systems are mathematically equivalent to their dense counterparts. The added difficulty in the sparse case is that the factorization should avoid operating on the zeros of the matrix and should keep the factors as sparse as possible. Meeting these requirements usually entails first performing a symbolic analysis on the matrix in order to predict and reduce the memory and run time requirements of the subsequent numerical factorization.

The symbolic analysis can usually be described using graph models of sparse matrices. Some tasks to be performed during the symbolic analysis are discussed below. Consider a sparse symmetric positive definite matrix \(A\) and its Cholesky factor \(L_A\), given to four digits of accuracy. [figures of the example matrix \(A\) and its Cholesky factor \(L_A\) not reproduced here] Entries that are zero in \(A\) but nonzero in \(L_A\) are known as the fill-in.

The elimination step, in which the \(i\)th diagonal entry, for \(i=1,\ldots,n-1\), is used to reduce all entries below the \(i\)th diagonal entry to zero, corresponds in the graph \(G\) of the matrix to making the neighbours of vertex \(i\) a clique and then removing the vertex \(i\), as described by Parter (1961) and Rose (1970). Using this equivalence, ordering methods can be viewed as ordering the vertices of the graph \(G\) so that the same ordering determines the pivotal sequence for the matrix factorization. Minimizing the fill-in in Cholesky factorization has been shown to be NP-complete by Yannakakis (1981). Fortunately, there are many heuristics for finding an ordering that reduces the fill-in significantly, including the minimum degree (Tinney and Walker, 1967) and nested dissection (George, 1973) heuristics and their variants. Other symbolic analysis operations include obtaining the main characteristics of the pattern of the Cholesky factor (such as the number of nonzeros in each row and column) and determining the pattern of the Cholesky factor using the elimination tree (Schreiber, 1982).

In the case of Cholesky factorization, the symbolic and numerical concerns are nicely separated. When performing an LU decomposition, on the other hand, one may need to perform pivoting to maintain numerical stability. Consider the LU decomposition of \(A\) with partial pivoting, i.e., \(PA=LU\), where \(P\) is a permutation matrix describing the numerical pivoting. Although this means that the symbolic analysis cannot give as accurate a prediction as in the Cholesky case, there are effective symbolic analysis methods for the LU decomposition.
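The forward and back substitutions used to solve the triangular systems above can be sketched as follows. This is a minimal dense illustration (a real sparse solver would traverse only the stored nonzeros); the function names and the small example matrix are illustrative, not from the source.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for a lower triangular matrix L."""
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):
        # subtract contributions of already-computed unknowns
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve U x = y for an upper triangular matrix U."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Example: solve A x = b via the Cholesky factorization A = L L^T,
# using numpy's dense cholesky as a stand-in for a sparse factorization.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
L = np.linalg.cholesky(A)
x = back_substitution(L.T, forward_substitution(L, b))
```

Note how the \(LL^T\) case needs only the two triangular solves: one forward sweep with \(L\) and one backward sweep with \(L^T\).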
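The clique-formation view of elimination (Parter, 1961; Rose, 1970) makes it easy to count the fill-in produced by a given ordering, which is what the ordering heuristics try to minimize. The sketch below is illustrative (the `fill_in` helper and the "arrow" example are assumptions, not from the source): eliminating a vertex connects its remaining neighbours pairwise, and each new edge is one fill entry.

```python
def fill_in(adj, order):
    """Count fill edges created by symbolic elimination in the given order.

    adj: dict mapping each vertex to the set of its neighbours
    (the undirected graph of a symmetric matrix A).
    """
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    fill = 0
    for v in order:
        nbrs = g.pop(v)
        for u in g:
            g[u].discard(v)
        live = [u for u in nbrs if u in g]  # not yet eliminated
        # make the remaining neighbours of v a clique
        for i, a in enumerate(live):
            for b in live[i+1:]:
                if b not in g[a]:
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
    return fill

# "Arrow" matrix: vertex 0 is adjacent to all others, no other edges.
n = 6
adj = {0: set(range(1, n)), **{v: {0} for v in range(1, n)}}

print(fill_in(adj, list(range(n))))          # hub first: 10 fill edges
print(fill_in(adj, list(range(n - 1, -1, -1))))  # hub last: no fill
```

The arrow matrix is the classic example of why ordering matters: eliminating the dense row/column first turns the factor completely dense, while eliminating it last produces no fill at all.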
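The elimination tree used in the symbolic analysis (Schreiber, 1982) can be computed directly from the nonzero pattern of \(A\), without forming \(L\). The sketch below follows the standard path-compression algorithm (as popularized by Liu); the input format, one list of below-diagonal column indices per row, is an assumption for illustration.

```python
def elimination_tree(lower_pattern):
    """Compute the elimination tree of a sparse symmetric matrix.

    lower_pattern[i] lists the columns j < i with a_ij != 0.
    Returns parent[], where parent[j] is the parent of column j
    in the elimination tree (-1 for a root).
    """
    n = len(lower_pattern)
    parent = [-1] * n
    ancestor = [-1] * n  # shortcut pointers for path compression
    for i in range(n):
        for j in lower_pattern[i]:
            # walk from j towards the current root, compressing the path
            r = j
            while ancestor[r] != -1 and ancestor[r] != i:
                nxt = ancestor[r]
                ancestor[r] = i
                r = nxt
            if ancestor[r] == -1:
                ancestor[r] = i
                parent[r] = i
    return parent

# Arrow matrix with a dense first column: rows 1..5 each have a
# nonzero in column 0, so the tree degenerates into a single path.
pattern = [[]] + [[0] for _ in range(5)]
print(elimination_tree(pattern))  # [1, 2, 3, 4, 5, -1]
```

The path-shaped tree reflects the complete fill created by eliminating the dense column first; a fill-reducing ordering would instead yield a short, bushy tree, which also exposes parallelism in the numerical factorization.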
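The partial pivoting \(PA=LU\) described above can be sketched for the dense case as follows (a sparse code would additionally weigh pivot choices against fill). This is a minimal illustration; the function name and test matrix are assumptions, not from the source.

```python
import numpy as np

def lu_partial_pivot(A):
    """Dense LU with partial pivoting: returns P, L, U with P @ A = L @ U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        # pivot on the largest-magnitude entry in column k (numerical stability)
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            perm[[k, p]] = perm[[p, k]]
        # store multipliers below the diagonal, update the trailing block
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)   # unit lower triangular factor
    U = np.triu(A)                   # upper triangular factor
    P = np.eye(n)[perm]              # permutation matrix from the row swaps
    return P, L, U

# A zero in the (1,1) position forces a row interchange immediately.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 3.0]])
P, L, U = lu_partial_pivot(A)
```

Because \(P\) is not known until the numerical values are inspected, the factor patterns cannot be predicted exactly in advance, which is precisely why the LU symbolic analysis is less sharp than the Cholesky one.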