Introduction and Error Analysis

Overview of numerical methods, mathematical modeling, programming concepts, and different types of errors in numerical computation.

Mathematical Modeling

What is a Mathematical Model?

A mathematical model is defined broadly as a formulation or equation that expresses the essential features of a physical system or process in mathematical terms. In a general sense, it can be represented as a functional relationship of the form:

Dependent variable = f(independent variables, parameters, forcing functions)

Historical Context

The roots of numerical methods trace back to ancient civilizations, where Babylonian and Egyptian mathematicians developed algorithms for computing square roots and solving linear equations. The formalization of numerical analysis, however, began with the development of calculus and the need to approximate solutions to complex differential equations. With the advent of modern computers, the field exploded, allowing for the rapid execution of iterative algorithms that define modern numerical modeling.

Mathematical models are the foundation of engineering problem solving. In many cases, these models cannot be solved analytically (exactly), which is where numerical methods come into play. Numerical methods are techniques by which mathematical problems are formulated so that they can be solved with arithmetic operations.

Big-O Notation and Convergence

In numerical analysis, the efficiency and accuracy of algorithms are often described using "Big-O" notation, O(h^n). It expresses the truncation error or the rate of convergence as a function of the step size h.

Convergence Rates

  • Linear Convergence: The error is roughly proportional to the step size (O(h)). Halving the step size halves the error.
  • Quadratic Convergence: The error is proportional to the square of the step size (O(h^2)). Halving the step size reduces the error by a factor of four, indicating much faster convergence.
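These rates can be observed numerically. The sketch below (Python, using sin as an illustrative test function) compares a first-order forward difference, which converges at O(h), against a second-order central difference, which converges at O(h^2):

```python
import math

def forward_diff(f, x, h):
    # First-order accurate: the error shrinks roughly like O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Second-order accurate: the error shrinks roughly like O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x, true = 1.0, math.cos(1.0)   # d/dx sin(x) = cos(x)
for h in (0.1, 0.05):
    err_fwd = abs(forward_diff(math.sin, x, h) - true)
    err_cen = abs(central_diff(math.sin, x, h) - true)
    print(f"h={h:5.2f}  forward error={err_fwd:.2e}  central error={err_cen:.2e}")
```

Halving h roughly halves the forward-difference error but cuts the central-difference error by a factor of about four, matching the O(h) and O(h^2) labels.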

Accuracy and Precision

Accuracy

Refers to how closely a computed or measured value agrees with the true value.

Precision

Refers to how closely individual computed or measured values agree with each other.

Note

Numerical methods should be sufficiently accurate or unbiased to meet the requirements of a particular engineering problem. They also should be precise enough for adequate engineering design.

Significant Digits

Significant digits (or significant figures) of a number are those that can be used with confidence. They correspond to the number of certain digits plus one estimated digit. For example, if a measurement is given as 1.234, it has four significant digits, implying confidence in 1, 2, and 3, while 4 is an estimate. Identifying significant digits is crucial in bounding the acceptable error in numerical computations.

Error Definitions

Numerical errors arise from the use of approximations to represent exact mathematical operations and quantities. We classify errors in two primary ways: by their relation to the "true" value, and by their relation to the magnitude of the value being evaluated.

True vs. Approximate Error

  • True Error (E_t): The difference between the exact (true) analytical value and the approximated value.
E_t = True Value − Approximation
  • Approximate Error (E_a): When the true value is not known, the error can be estimated by the difference between the present approximation and the previous approximation during an iterative process.
E_a = Current Approximation − Previous Approximation

Absolute vs. Relative Error

  • Absolute Error: The magnitude of the error itself, which does not account for the order of magnitude of the true value.
|E_t| = |True Value − Approximation|
  • True Fractional Relative Error (ε_t): The true error normalized to the true value. Usually expressed as a percentage:
ε_t = |(True Value − Approximation) / True Value| × 100%
  • Approximate Relative Error (ε_a): Used in iterative methods when the true value is unknown:
ε_a = |(Current Approx. − Previous Approx.) / Current Approx.| × 100%

In iterative numerical methods, computations continue until the approximate relative error ε_a falls below a prespecified stopping criterion ε_s. To ensure the result is correct to at least n significant digits, we set:
ε_s = (0.5 × 10^(2−n))%
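As an illustration, the sketch below (a hypothetical exp_series helper, not from the source) sums the Maclaurin series of e^x and stops once ε_a drops below ε_s for a requested number of significant digits:

```python
import math

def exp_series(x, n_sig=4):
    """Sum the Maclaurin series of e^x until the approximate relative
    error eps_a falls below eps_s = (0.5 * 10**(2 - n_sig)) percent."""
    eps_s = 0.5 * 10 ** (2 - n_sig)   # stopping criterion, in percent
    total, term, k = 1.0, 1.0, 0      # total starts at the k = 0 term
    eps_a = 100.0
    while eps_a > eps_s:
        k += 1
        term *= x / k                 # next term: x**k / k!
        previous, total = total, total + term
        eps_a = abs((total - previous) / total) * 100
    return total, k

approx, terms = exp_series(0.5)
print(f"e^0.5 ≈ {approx:.6f} after {terms} terms (true value {math.exp(0.5):.6f})")
```

Once ε_a < ε_s, the result can be trusted to at least the requested number of significant digits.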

Round-off Errors and Floating-Point Representation

Round-off errors originate from the fact that computers retain only a fixed number of significant figures during a calculation. Numbers such as π, e, or √7 cannot be expressed with a fixed number of significant figures.

Caution

Because computers have a finite capacity to represent quantities, there is always a discrepancy between the exact value and the value stored in the machine. This discrepancy is the round-off error.

Computers represent numbers using floating-point systems consisting of a sign, a mantissa (or significand), a base, and an exponent. For example, in base-10: m · 10^e. Because a computer stores a finite number of bits for the mantissa and exponent, numbers that require infinite digits (like π, e, or 1/3) must be truncated. This truncation happens in two primary ways:

Chopping vs. Rounding

  • Chopping: The digits beyond the fixed limit are simply discarded. This introduces a systematic bias because it always underestimates the magnitude of the true number.
  • Rounding: The number is approximated to the nearest representable value. If the discarded portion is greater than or equal to half the base, the last retained digit is increased by one. Rounding is generally preferred over chopping because it minimizes the maximum possible error and avoids systematic bias over many computations.
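A minimal sketch of both schemes in base-10, using hypothetical chop and round_sig helpers (note that Python's built-in round uses round-half-to-even, a slight refinement of the half-the-base rule described above):

```python
import math

def chop(x, digits):
    """Keep `digits` significant figures and discard the rest (chopping).
    Always truncates toward zero, so it systematically underestimates |x|."""
    if x == 0:
        return 0.0
    scale = 10 ** (digits - 1 - math.floor(math.log10(abs(x))))
    return math.trunc(x * scale) / scale

def round_sig(x, digits):
    """Round to `digits` significant figures (round to nearest)."""
    if x == 0:
        return 0.0
    scale = 10 ** (digits - 1 - math.floor(math.log10(abs(x))))
    return round(x * scale) / scale

pi = 3.14159265
print(chop(pi, 4))       # 3.141  (discarded digits, biased toward zero)
print(round_sig(pi, 4))  # 3.142  (nearest value, smaller maximum error)
```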

Machine Epsilon

The smallest positive number ε_mach such that 1 + ε_mach > 1 in the computer's floating-point arithmetic. It bounds the relative error in representing a number.
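Machine epsilon can be found empirically by halving a trial value until adding it to 1 no longer changes the result; the sketch below assumes IEEE 754 double precision:

```python
import sys

# Repeatedly halve eps: the loop exits at the smallest power of two
# that still changes 1.0 when added to it.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                            # 2.220446049250313e-16 on IEEE 754 doubles
print(eps == sys.float_info.epsilon)  # True
```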

Subtractive Cancellation (Loss of Significance)

A severe form of round-off error occurs when subtracting two nearly equal numbers. When this happens, the most significant digits cancel out, leaving only the less significant digits (which may just be noise or previous round-off errors) to represent the result.

Subtractive Cancellation

For example, subtracting x = 0.123456 and y = 0.123444 results in z = 0.000012. If these numbers were derived from measurements or prior calculations, the leading digits 0.1234 might be highly accurate, but the trailing digits might be uncertain. The result z relies entirely on those uncertain digits, meaning the relative error has catastrophically exploded. This phenomenon is known as subtractive cancellation or loss of significance.
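Another classic instance is the quadratic formula when b^2 >> 4ac: one root requires subtracting two nearly equal numbers. The sketch below contrasts the naive formula with an algebraically equivalent rationalized form that avoids the subtraction (function names are illustrative):

```python
import math

def roots_naive(a, b, c):
    """Textbook quadratic formula. When b*b >> 4*a*c, computing
    -b + d subtracts two nearly equal numbers and cancels digits."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    """Rationalized form: the small-magnitude root comes from a
    division instead, so no cancellation occurs."""
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return c / q, q / a

a, b, c = 1.0, 1e8, 1.0          # true small root is about -1e-8
print(roots_naive(a, b, c)[0])   # digits lost to cancellation
print(roots_stable(a, b, c)[0])  # accurate small root
```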

Truncation Errors

Truncation errors are those that result from using an approximation in place of an exact mathematical procedure. A classic example is approximating a derivative with a finite divided difference equation or truncating an infinite series after a finite number of terms.
True Value = Approximation + Truncation Error

Taylor Series

The Taylor series theorem states that any smooth function can be approximated as a polynomial. It provides a means to predict the value of a function at one point in terms of the function value and its derivatives at another point.
f(x_{i+1}) = f(x_i) + f'(x_i)h + (f''(x_i)/2!)h^2 + (f'''(x_i)/3!)h^3 + … + (f^(n)(x_i)/n!)h^n + R_n
Where h = x_{i+1} − x_i is the step size, and R_n is the remainder term to account for all terms from n+1 to infinity. The remainder provides an exact estimate of the truncation error and is given by the Lagrange form:
R_n = (f^(n+1)(ξ) / (n+1)!) h^(n+1)
Where ξ is a point that lies somewhere between x_i and x_{i+1}. Although ξ is generally unknown, this formula allows us to bound the maximum possible error by determining the maximum value of the (n+1)-th derivative on the interval [x_i, x_{i+1}].
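A short worked example: expanding e^x about x_i = 0 with n = 3 and h = 0.5, the true truncation error should fall below the Lagrange bound obtained with ξ = h (the worst case, since e^x is increasing):

```python
import math

# Third-order Taylor expansion of e^x about x_i = 0, evaluated at h = 0.5:
# f(h) ≈ 1 + h + h^2/2! + h^3/3!
h = 0.5
approx = sum(h ** k / math.factorial(k) for k in range(4))
true_error = math.exp(h) - approx

# Lagrange remainder: R_3 = f''''(ξ)/4! · h^4 for some ξ in [0, h].
# e^x is increasing, so ξ = h gives the worst-case bound.
bound = math.exp(h) / math.factorial(4) * h ** 4

print(f"approximation = {approx:.6f}")
print(f"true error    = {true_error:.6f}")
print(f"error bound   = {bound:.6f}")
```

The true error indeed lies below the Lagrange bound, as the theorem guarantees.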

Error Propagation

When values subject to uncertainty (e.g., measured variables with bounded errors) are used inside a mathematical model, the resulting errors propagate to the final output. Understanding this propagation is essential for estimating the overall uncertainty of the computed result.

First-Order Error Propagation

If a computed value y is a function of multiple independent variables x_1, x_2, …, x_n (i.e., y = f(x_1, x_2, …, x_n)), and each variable has an absolute error Δx_i, the absolute error in the result Δy can be approximated using a first-order Taylor series expansion:
Δy ≈ |∂f/∂x_1| Δx_1 + |∂f/∂x_2| Δx_2 + … + |∂f/∂x_n| Δx_n
This formula provides an upper bound on the propagated absolute error. It shows that the sensitivity of the function to a variable (its partial derivative) dictates how much an error in that variable will affect the final result.
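For a concrete (hypothetical) model y = x1 · x2^2 with illustrative measured values and errors, the propagated error follows directly from the two partial derivatives:

```python
# First-order error propagation for the hypothetical model y = x1 * x2**2.
# Partial derivatives: dy/dx1 = x2**2 and dy/dx2 = 2*x1*x2.
x1, x2 = 3.0, 2.0        # measured values (illustrative)
dx1, dx2 = 0.1, 0.05     # absolute errors in the measurements

dy = abs(x2 ** 2) * dx1 + abs(2 * x1 * x2) * dx2
print(f"y = {x1 * x2 ** 2:.1f} ± {dy:.2f}")   # y = 12.0 ± 1.00
```

Note that the larger contribution (0.6) comes from x2, whose partial derivative is larger, even though its measurement error is smaller.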

Conditioning and Stability

Conditioning refers to the sensitivity of a mathematical problem to small changes in input data. A problem is well-conditioned if small changes in the input lead to small changes in the exact solution, and ill-conditioned if small changes lead to large changes. Stability, on the other hand, is a property of the numerical algorithm itself. A numerically stable algorithm does not excessively amplify errors (like round-off or truncation errors) during computation.
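The distinction matters because a perfectly well-conditioned problem can still be ruined by an unstable algorithm. A classic illustration is evaluating e^(−x) for large x: summing the alternating Maclaurin series directly is unstable (huge intermediate terms cancel), while summing the all-positive series for e^(+x) and taking the reciprocal is stable. Function names below are illustrative:

```python
import math

def exp_neg_unstable(x, terms=120):
    """Sum the alternating Maclaurin series of e^(-x) directly.
    Mathematically exact, but for large x the huge intermediate terms
    cancel and round-off is amplified: an unstable algorithm."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= -x / k
        total += term
    return total

def exp_neg_stable(x, terms=120):
    """Same problem, stable algorithm: sum the all-positive series
    for e^(+x), then take the reciprocal. No cancellation occurs."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k
        total += term
    return 1.0 / total

x = 20.0
print(exp_neg_unstable(x))   # digits destroyed by cancellation
print(exp_neg_stable(x))     # agrees closely with math.exp(-20)
```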

Blunders, Formulation, and Data Errors

Besides truncation and round-off errors, numerical computations can be affected by other types of errors:

Other Sources of Error

  • Blunders: Gross errors typically caused by human mistakes, such as programming bugs or incorrect data entry.
  • Formulation Errors: Arise from an incomplete mathematical model (e.g., neglecting friction in a dynamics problem).
  • Data Uncertainty: Errors in the input parameters due to inaccurate measurements or estimations.

Forward and Backward Error Analysis

Understanding how errors propagate requires formal analysis.

Error Analysis Types

  • Forward Error Analysis: Attempts to bound the error in the final result based on the errors in the input and the steps of the algorithm. It answers: "How wrong is the answer?"
  • Backward Error Analysis: Instead of asking how wrong the answer is, it asks: "For what perturbed input data is our computed answer exactly correct?" If the required perturbation is small (on the order of machine epsilon), the algorithm is considered backward stable.
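A tiny backward-error check, using square root as the example: any computed value y is the exact square root of y^2, so the relative backward error is |y^2 − x| / |x|:

```python
import math

x = 2.0
y = math.sqrt(x)    # the computed answer

# For what perturbed input would y be exactly correct? For x_hat = y*y.
# The relative backward error is then:
backward = abs(y * y - x) / abs(x)
print(backward)     # on the order of machine epsilon -> backward stable
```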

Error Propagation and Total Numerical Error

When numerical operations are performed, errors from individual terms can propagate through the calculation. The total numerical error is the sum of truncation errors and round-off errors.

Note

There is an inherent trade-off in numerical computing: decreasing the step size hh reduces the truncation error but increases the number of computations, thereby increasing the total round-off error. The optimal step size minimizes the total numerical error.
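This trade-off can be seen directly with a forward-difference derivative estimate; the h values below are illustrative:

```python
import math

# Forward-difference estimate of d/dx e^x at x = 1.  Shrinking h first
# reduces the O(h) truncation error, but once h nears machine precision
# the round-off in f(x + h) - f(x) dominates and the total error grows again.
f, x, exact = math.exp, 1.0, math.exp(1.0)

def diff_error(h):
    return abs((f(x + h) - f(x)) / h - exact)

for h in (1e-1, 1e-4, 1e-8, 1e-15):
    print(f"h = {h:.0e}   total error = {diff_error(h):.2e}")
```

The error falls as h shrinks from 1e-1 toward 1e-8, then rises again as round-off takes over near machine precision.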

Key Takeaways
  • Mathematical models express physical phenomena in mathematical terms.
  • Numerical methods convert complex math into simple arithmetic operations.
  • Accuracy is closeness to the true value, while precision is the closeness of repeated measurements.
  • Significant digits dictate the confidence level in a computed value and determine the stopping criterion ε_s.
  • True error compares against the exact value, while approximate error compares consecutive iterative estimates.
  • Relative error normalizes the error against the magnitude of the value.
  • Round-off errors arise from the finite precision of floating-point representation and are bounded by the machine epsilon. They occur due to chopping or rounding infinite digits.
  • Subtractive cancellation occurs when subtracting nearly equal numbers, causing a catastrophic loss of significant digits.
  • Error Propagation estimates how uncertainty in input variables affects the final result using partial derivatives.
  • Truncation errors occur when mathematical procedures are approximated.
  • The Taylor series is fundamental for evaluating truncation errors and formulating numerical approximations. The Lagrange form of the remainder bounds this error.
  • Total numerical error involves a trade-off: minimizing truncation error often requires more steps, which increases round-off error.
  • Big-O notation characterizes the truncation error and the rate of convergence of numerical methods.
  • Forward Error Analysis bounds the error in the final result, while Backward Error Analysis determines the exact input for which the computed result is exactly correct.
  • Conditioning reflects a problem's sensitivity to input changes, while stability is an algorithm's ability to limit error growth.
  • Other errors like blunders, formulation errors, and data uncertainty must also be minimized for reliable results.