

Preconditioning Techniques


Preconditioning is a powerful technique in numerical linear algebra used to speed up the convergence of iterative solvers for linear systems of equations. It involves transforming a given system into a form that is more amenable to numerical computation. In many real-world applications, especially those involving large and sparse matrices, direct methods such as Gaussian elimination become prohibitively expensive in terms of computational resources and time. This is where iterative methods, aided by preconditioning, come into the picture.

What is preconditioning?

In simple terms, preconditioning is the process of applying transformations to a linear system, thereby creating another system that has the same solution, but is easier to solve by iterative methods. Mathematically, consider this system:

Ax = b

The idea is to find a matrix M, called the preconditioner, such that the modified system can be solved:

M⁻¹Ax = M⁻¹b

This speeds up convergence. The choice of M is crucial: M should approximate A well, and applying M⁻¹ (that is, solving systems with M) should be cheap.

Why use preconditioning?

Without preconditioning, iterative methods may converge very slowly or not converge at all. One important aspect that a preconditioner must address is the condition number of the matrix. The condition number of a matrix, in simple terms, measures how sensitive the solution of a system of linear equations is to changes in the coefficients or the right-hand side. A smaller condition number usually indicates faster convergence. Preconditioning helps to reduce the condition number.
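The effect on the condition number can be seen numerically. The sketch below uses an illustrative diagonal matrix (the values are chosen for demonstration only); with a Jacobi-style preconditioner M = diag(A), the preconditioned matrix M⁻¹A is exactly the identity here, so its condition number drops to 1:

```python
import numpy as np

# Illustrative ill-conditioned diagonal matrix.
A = np.diag([1.0, 10.0, 1000.0])

# Condition number of A itself.
kappa_A = np.linalg.cond(A)

# Jacobi-style preconditioner: M = diag(A).
# For this diagonal A, M^{-1} A is the identity matrix.
M_inv = np.diag(1.0 / np.diag(A))
kappa_MA = np.linalg.cond(M_inv @ A)

print(kappa_A)   # 1000.0
print(kappa_MA)  # 1.0
```

Real matrices are not diagonal, of course, but the same principle applies: a good preconditioner pushes the condition number of M⁻¹A toward 1.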

Types of preconditioning

Left preconditioning

In left preconditioning, we multiply the system Ax = b from the left by M⁻¹, resulting in:

M⁻¹Ax = M⁻¹b

The aim of this approach is to make the coefficient matrix M⁻¹A better conditioned than A, so that an iterative method solves the transformed system efficiently.

Right preconditioning

In right preconditioning, we insert M⁻¹ to the right of A, resulting in:

A(M⁻¹y) = b

We solve this system for y, where y = Mx, and then recover x = M⁻¹y. This method transforms the solution vector rather than the right-hand side.
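A minimal numerical sketch of the right-preconditioned solve, using a Jacobi-style diagonal M (the matrix and vector here are illustrative, not from any particular application):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Jacobi-style preconditioner and its inverse.
M = np.diag(np.diag(A))
M_inv = np.diag(1.0 / np.diag(A))

# Right preconditioning: solve A M^{-1} y = b for y, then recover x.
y = np.linalg.solve(A @ M_inv, b)
x = M_inv @ y
```

Note that x satisfies the original system Ax = b, and y = Mx as stated above.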

Split preconditioning

Split preconditioning involves using two preconditioners, one on each side:

M₁⁻¹AM₂⁻¹z = M₁⁻¹b

Here, we solve the transformed system for z and recover x from z = M₂x. Splitting the preconditioner across both sides can, for example, preserve the symmetry of a symmetric A in the preconditioned operator.

Common preconditioners

1. Jacobi preconditioning

The Jacobi preconditioner uses the diagonal elements of A to create a preconditioner matrix M:

M = diag(A)

Here, diag(A) denotes the diagonal matrix formed from the diagonal elements of A. This approach is simple and easy to implement.
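Because M is diagonal, applying M⁻¹ is just an elementwise division. The sketch below wires a Jacobi preconditioner into a hand-rolled preconditioned conjugate gradient loop; the test matrix is a small illustrative symmetric positive definite system, and the function name pcg is ours, not from any library:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=200):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner.

    M_inv_diag holds 1/diag(A); applying the preconditioner is then
    just an elementwise product with the residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive definite test system (illustrative values).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
```

The only change from plain CG is the extra elementwise product, which is why Jacobi preconditioning costs almost nothing per iteration.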

2. Gauss-Seidel preconditioning

The Gauss–Seidel preconditioner is similar, but it also includes the lower triangular part of A:

M = (D + L)

Where D is the diagonal part and L is the lower triangular part of A.
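Since M = D + L is lower triangular, applying M⁻¹ to a residual is a cheap forward substitution rather than an explicit inverse. A small sketch with illustrative values:

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
r = np.array([1.0, 2.0, 3.0])   # an example residual vector

# M = D + L: the lower-triangular part of A, diagonal included.
M = np.tril(A)

# Applying M^{-1} is a forward substitution, O(nnz) work.
z = solve_triangular(M, r, lower=True)
```

This is why triangular preconditioners are attractive: the setup is free (M is read off A) and each application is a single triangular solve.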

3. Incomplete LU (ILU) preconditioning

ILU preconditioning uses an approximate LU factorization of the matrix A. Instead of computing the full LU factorization, ILU computes an approximation:

A ≈ LU

where L and U are lower and upper triangular matrices, respectively. ILU is particularly suitable for sparse matrices because it preserves sparsity.
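SciPy exposes an ILU factorization through scipy.sparse.linalg.spilu, which can be wrapped as a preconditioner for GMRES. The sketch below uses a 1-D Poisson (tridiagonal) matrix as an illustrative sparse test case; the drop_tol value is an assumption chosen for demonstration:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Sparse 1-D Poisson matrix (tridiagonal), a common sparse test case.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization; drop_tol controls how much fill is kept.
ilu = spilu(A, drop_tol=1e-4)

# Wrap the triangular solves as a linear operator acting like M^{-1}.
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

Tightening drop_tol makes the factorization closer to the exact LU (fewer iterations, more memory); loosening it preserves more sparsity at the cost of convergence speed.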

4. Incomplete Cholesky preconditioning

Similar to ILU, the incomplete Cholesky factorization is used for symmetric positive definite matrices:

A ≈ LLᵀ

Here M = LLᵀ, with L being a lower triangular matrix.

5. Additive Schwarz preconditioning

This method divides the original domain into overlapping subdomains, applies a preconditioner to each subdomain, and combines their effects additively. It is popular in parallel computing environments.
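A minimal one-dimensional sketch of the idea: two overlapping index blocks of an illustrative 1-D Poisson matrix, each solved locally, with the local corrections summed. The function name and subdomain choice are ours, for demonstration only:

```python
import numpy as np

# 1-D Poisson matrix and two overlapping index blocks.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
subdomains = [np.arange(0, 5), np.arange(3, 8)]   # overlap at indices 3, 4

def additive_schwarz(r):
    """Apply z = M^{-1} r, where M^{-1} = sum_i R_i^T A_i^{-1} R_i.

    Each subdomain restricts the residual, solves with the local
    block of A, and the corrections are added together."""
    z = np.zeros_like(r)
    for idx in subdomains:
        Ai = A[np.ix_(idx, idx)]          # local restriction of A
        z[idx] += np.linalg.solve(Ai, r[idx])
    return z

z = additive_schwarz(np.ones(n))
```

Because the subdomain solves are independent, they can run in parallel, which is the main appeal of this family of preconditioners. Note also that when A is symmetric positive definite, the resulting M⁻¹ is symmetric, so it can be used with conjugate gradient.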

Visual depictions

[Illustration: the blue box represents the matrix A, the green box the preconditioner M, and the red box the preconditioned matrix M⁻¹A, which is better conditioned and therefore faster for iterative methods to handle.]

Choosing a preconditioner

Choosing an effective preconditioner requires balancing the benefits of better convergence with the computational effort required to apply the preconditioner at each iteration. An ideal preconditioner should have the following properties:

  • Easy to compute and apply (it should not require significant additional computational resources).
  • Effective in reducing the condition number of the system.
  • Suited to the matrix's specific properties (e.g., sparsity, symmetry).

Practical considerations

When applying preconditioning techniques in practice, several factors must be considered. The effectiveness of a preconditioning method may depend on the size and structure of the matrix and on the specific problem domain. In some cases, custom or problem-specific preconditioning techniques can be developed to maximize efficiency.

Case studies and example applications

An example application might involve solving systems arising from finite element discretization in engineering simulations. Here, due to the nature of the discretization, the matrices can be large, sparse, and poorly conditioned. Applying an ILU preconditioner or a multigrid method can dramatically improve the performance of the solver.

Conclusion

Preconditioning is a crucial step in making iterative solvers feasible for large and complex systems of linear equations. It turns an otherwise difficult problem into a computationally tractable one. By using a suitable preconditioner such as Jacobi, Gauss–Seidel, ILU, or incomplete Cholesky, computational efficiency can be improved significantly. As research progresses, preconditioning techniques continue to evolve, providing more efficient solutions to real-world numerical problems.

