Error Analysis


In applied mathematics, numerical analysis is a powerful tool, but it is important to understand how accurate our numerical solutions are. Error analysis quantifies the difference between the exact solution and the approximation we compute. Errors can come from several sources, including truncation and rounding. In this article, we will explore what error analysis is and work through detailed examples to understand the concept thoroughly.

What is error analysis?

Error analysis is the study of the nature, magnitude, and behavior of errors in numerical calculations. Errors are inevitable in numerical computation, so analyzing how they affect the result is essential for improving accuracy and reliability.

Types of errors in numerical analysis

Several types of errors may arise in numerical analysis:

  • Truncation errors: These occur when an infinite process is approximated by a finite process. For example, using a finite number of terms in a Taylor series expansion.
  • Rounding errors: These arise due to rounding in finite precision arithmetic. For example, rounding a number like 3.333... to 3.33.
  • Absolute Error: The difference between the true value and the approximation. Mathematically, Absolute Error = |True Value - Approximated Value|.
  • Relative Error: The ratio of the absolute error to the true value. Usually expressed in percentage. It is represented as: Relative Error = (frac{|True Value - Approximated Value|}{|True Value|}).

Why is error analysis important?

Error analysis helps to understand the accuracy, convergence and stability of a numerical method. By analyzing the errors, we can identify whether the numerical method will converge to the correct solution and how quickly it will do so. Stability and accuracy in calculations ensure reliable results in applications such as simulation, engineering design, etc.

Visualization of errors in numerical analysis

Let's look at a simple numerical method, Euler's method, which is often used to solve ordinary differential equations (ODEs).

dy/dx = f(x, y),   y(x_0) = y_0
y_{i+1} = y_i + h · f(x_i, y_i)
y(x_n) ≈ y_0 + h · (f(x_0, y_0) + f(x_1, y_1) + ... + f(x_{n-1}, y_{n-1}))

Let's consider the differential equation: dy/dx = -x * y; y(0) = 1. Our goal is to approximate the solution at different points. Suppose we use the Euler method with a step size of 0.1.

// Euler's method update for dy/dx = -x * y, with step size h:
for (int i = 0; i < number_of_steps; i++) {
    y[i+1] = y[i] + h * (-x[i] * y[i]);
    x[i+1] = x[i] + h;
}

[Figure: exact solution vs. Euler approximation with h = 0.1]

The figure compares the exact solution with the approximation at this step size. Notice how the approximation gradually deviates from the exact curve.

Breaking down the error

Let's analyze the truncation error using the function log(1+x).

The Taylor series for log(1+x) is:

log(1+x) = x - x²/2 + x³/3 - x⁴/4 + ...

Using only the first two terms log(1+x) ≈ x - x²/2, the truncation error is:

Truncation Error = (x³/3) - (x⁴/4) + ...

Let us calculate this error for x = 0.1:

Truncation Error = 0.1³/3 - 0.1⁴/4 + ... ≈ 0.00033333 - 0.00002500 + ...

When presenting these calculated values with limited precision, rounding errors may also occur, further contributing to the total error.

Now, let's calculate both the absolute and relative errors for this approximation:

True Value: log(1.1) ≈ 0.09531
Approximated Value: 0.1 - 0.01/2 = 0.095
Absolute Error = |0.09531 - 0.095| = 0.00031
Relative Error = |0.00031/0.09531| ≈ 0.00325 or 0.325%

Understanding convergence and stability

Two important concepts when we look at error analysis in numerical solutions are convergence and stability:

  • Convergence: The numerical method is said to converge if it tends towards the exact solution as the number of approximation steps or the accuracy of calculation increases. By decreasing the step size (h) in Euler’s method, the approximation should get closer to the exact solution.
  • Stability: If small changes in the initial values or parameters lead to small changes in the result, then the numerical method is stable. In other words, the errors do not grow very quickly during the calculations.

Example

Consider the iterative method:

x_{n+1} = g(x_n)

For convergence, assume that g has a fixed point s such that g(s) = s and the derivative g' at s satisfies |g'(s)| < 1.

Convergence test: Let us take g(x) = cos(x) and the initial approximation x_0 = 0.

x_{next} = cos(x_{current})

Calculation:

x_1 = cos(0) = 1
x_2 = cos(1) ≈ 0.54
x_3 = cos(0.54) ≈ 0.86

The iterates oscillate above and below the fixed point and converge toward roughly 0.74; the actual fixed point is about 0.739085.

Summary

Error analysis plays a key role in numerical analysis, as it helps us refine our calculations and ensure that the solutions are as accurate as possible. This includes understanding the types of errors, breaking them down into their origins, convergence behavior, and stability implications.

The examples and analysis given here offer a basic foundation. However, deeper exploration is warranted when considering not only different mathematical problems and equations but also different methods, each with its own error behavior.


PHD → 9.1.3


U
username
0%
completed in PHD


Comments