

Iterative Methods


Introduction

Iterative methods are a fundamental tool in numerical analysis that help find solutions to equations that may otherwise be difficult or impossible to solve analytically. These methods are particularly useful for solving nonlinear equations and systems of equations. The main idea behind iterative methods is to generate a sequence of approximations that converges to the true solution.

The need for iterative methods

Consider a simple equation in a single variable:

f(x) = 0

Solving this equation analytically can be challenging, especially if the function f(x) is complex or non-linear. In such cases, iterative methods provide a numerical approach to finding approximate solutions. Iterative methods are particularly efficient when:

  • This equation is complex and has no analytical solution.
  • We need to solve large systems of equations quickly.
  • We accept approximate solutions for practical applications.

The basic concept of iteration

The core of iterative methods is the process of iteration, which involves repeating a certain calculation to refine an approximate solution. The general idea can be represented as follows:

x_{n+1} = g(x_n)

Here, x_n is the current approximation, and x_{n+1} is the next approximation. The function g(x) is an iterative function designed to bring successive approximations closer to the true solution.

Example: fixed-point iteration

The simplest example of an iterative method is fixed-point iteration. Given the equation f(x) = 0, we reformulate it to the form:

x = g(x)

Then, we use the function g(x) to move toward the solution:

x_{n+1} = g(x_n)

Suppose you want to find the roots of f(x) = cos(x) - x. Rewrite it as follows:

x = cos(x)

Start with the initial guess x_0 = 0 and repeat:

x_{1} = cos(0) = 1
x_{2} = cos(1) ≈ 0.5403
x_{3} = cos(0.5403) ≈ 0.8576

Continue this process until the changes between successive approximations become negligible, that is, until:

|x_{n+1} - x_n| < ε

where ε is a chosen small tolerance level.
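This stopping rule and the iteration itself fit in a short sketch. The code below is a minimal illustration, assuming the example's g(x) = cos(x), the starting guess x_0 = 0, and a tolerance of 1e-8; the function and parameter names are my own:

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until |x_{n+1} - x_n| < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Solve cos(x) = x starting from x_0 = 0, as in the example above.
root = fixed_point(math.cos, 0.0)
# root ≈ 0.739085, the fixed point of cos(x)
```

The first few iterates reproduce the values computed by hand above (1, 0.5403, 0.8576, ...).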

Convergence of iterative methods

For an iterative method to be useful, it must converge to the exact solution. Convergence means that as you iterate, your approximations get closer and closer to the true solution. Factors that affect convergence include:

  • Choice of initial estimate.
  • The nature of the function g(x).
  • Properties of the equation being solved.

In general, fixed-point iteration converges when the magnitude of the derivative of g(x) at the fixed point is less than one:

|g'(x)| < 1

Example

To find the square root of 3, consider g(x) = 1/2 (x + 3/x). It can be shown that:

g'(x) = 1/2 (1 - 3/x^2)

If |g'(x)| = |1/2 (1 - 3/x^2)| < 1 near the fixed point x = √3, the method will converge.
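This condition can be checked numerically. The sketch below assumes a starting guess of x_0 = 2, which lies near the fixed point √3:

```python
def g(x):
    return 0.5 * (x + 3.0 / x)

def g_prime(x):
    return 0.5 * (1.0 - 3.0 / x**2)

x = 2.0  # initial guess near sqrt(3)
for _ in range(20):
    x = g(x)

# x has converged to sqrt(3) ≈ 1.7320508.
# g'(sqrt(3)) = 0, so |g'| < 1 certainly holds near the fixed point,
# and convergence is in fact very fast: this particular g is Newton's
# method for x^2 = 3.
```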

General iterative methods

1. Newton-Raphson method

The Newton-Raphson method is one of the most popular iterative techniques because it converges rapidly for well-behaved functions. The iterative formula is:

x_{n+1} = x_n - f(x_n) / f'(x_n)

A major advantage of this method is that it converges quadratically near the root, i.e. the number of correct digits approximately doubles at each step.

Example

To solve f(x) = x^2 - 2:

f(x) = x^2 - 2
f'(x) = 2x
x_{n+1} = x_n - (x_n^2 - 2) / (2x_n)

Start with x_0 = 1:

x_{1} = 1 - ((1)^2 - 2) / (2*1) = 1.5
x_{2} = 1.5 - ((1.5)^2 - 2) / (2*1.5) ≈ 1.4167
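These steps can be collected into a small routine. This is an illustrative sketch, not a library function; the names newton, f, and fprime are my own:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# f(x) = x^2 - 2: the positive root is sqrt(2) ≈ 1.41421356
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

The first two iterates are 1.5 and roughly 1.4167, matching the hand computation above; a few more steps reach machine precision, illustrating quadratic convergence.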

2. Secant method

The secant method is similar to the Newton-Raphson method, but does not require calculating the derivative f'(x), making it useful when derivatives are difficult to calculate:

x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))

Example

To solve f(x) = x^2 - 4, start with two initial values:

x_0 = 2, x_1 = 3
x_{2} = 3 - (3^2 - 4) * (3 - 2) / ((3^2 - 4) - (2^2 - 4)) = 3 - 5/5 = 2

Since f(2) = 0, x_2 = 2 is already the exact root here; in general, continue iterating until successive values change by less than the tolerance.
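The update rule can be sketched the same way. The implementation below is illustrative and uses the example's starting values x_0 = 2 and x_1 = 3:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's step with f'(x_n) replaced by a
    finite-difference slope through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # flat secant line: cannot divide
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# f(x) = x^2 - 4 with x_0 = 2, x_1 = 3 converges to the root x = 2
root = secant(lambda x: x**2 - 4, 2.0, 3.0)
```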

3. Gauss-Seidel method for equation systems

When dealing with a system of linear equations, iterative methods such as the Gauss-Seidel method help find solutions efficiently. Consider this system:

a11 x + a12 y = b1
a21 x + a22 y = b2

You can iterate over each variable, and update it using the most current values:

x^{k+1} = (b1 - a12 y^k) / a11
y^{k+1} = (b2 - a21 x^{k+1}) / a22

Start with an initial guess for x and y and repeat.
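A sketch of this update loop for a 2x2 system follows. The sample coefficients are a hypothetical system (4x + y = 9, x + 3y = 7) chosen to be diagonally dominant, a standard sufficient condition for Gauss-Seidel to converge:

```python
def gauss_seidel_2x2(a11, a12, a21, a22, b1, b2,
                     x=0.0, y=0.0, tol=1e-10, max_iter=200):
    """Update each variable in turn, using the most recent values."""
    for _ in range(max_iter):
        x_new = (b1 - a12 * y) / a11
        y_new = (b2 - a21 * x_new) / a22  # uses the freshly updated x
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    raise RuntimeError("no convergence within max_iter iterations")

# Hypothetical diagonally dominant system: 4x + y = 9, x + 3y = 7
# Exact solution: x = 20/11, y = 19/11
x, y = gauss_seidel_2x2(4.0, 1.0, 1.0, 3.0, 9.0, 7.0)
```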

Benefits and challenges of iterative methods

Iterative methods offer several advantages:

  • Efficient for large systems where direct methods are computationally expensive.
  • Suitable for real-time applications where quick solutions are required.
  • Lower memory requirements than direct methods.

However, these also come with challenges:

  • Convergence can be sensitive to the choice of initial guess.
  • They may fail to converge for certain functions or equations.
  • An appropriate method must be selected based on the context of the problem.

Closing thoughts

Iterative methods are an important component of numerical analysis, having wide applications in engineering, physics, and computer science. Mastering these methods allows solving complex equations efficiently and effectively. The choice of method depends on the specific problem and constraints such as availability of resources and required accuracy. As computational resources and techniques advance, iterative methods continue to evolve, offering new ways to tackle mathematical challenges.

