Numerical stability


In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms.

The precise definition of stability depends on the context:

* Numerical linear algebra
* Numerical algorithms for solving ordinary and partial differential equations by discrete approximation

Description

Numerical linear algebra

In numerical linear algebra the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly colliding eigenvalues.
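
The effect can be illustrated with a matrix that is nearly singular. The following sketch (an example added here for illustration; it assumes Python with the NumPy library, neither of which is mentioned in the article) solves a linear system built from the Hilbert matrix, whose condition number grows rapidly with its size, and compares the computed solution with the known exact solution.

 import numpy as np

 def hilbert(n):
     # Hilbert matrix H[i][j] = 1 / (i + j + 1); famously close to singular.
     i, j = np.indices((n, n))
     return 1.0 / (i + j + 1)

 for n in (4, 8, 12):
     H = hilbert(n)
     x_true = np.ones(n)        # choose the exact solution in advance
     b = H @ x_true             # build a consistent right-hand side
     x = np.linalg.solve(H, b)  # solve in double precision
     # The error in x grows roughly in proportion to the condition number of H.
     print(n, np.linalg.cond(H), np.max(np.abs(x - x_true)))

For n = 12 the condition number is of the order of 10^16, so almost all of the significant digits of a double-precision solution can be lost.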

Numerical algorithms

In numerical algorithms for differential equations, the principal concern is a large deviation of the computed answer from the exact solution.
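
As a minimal sketch of such a deviation (an example added here, not part of the original text; it assumes Python and the standard forward Euler method), consider the test equation y' = -15y with y(0) = 1, whose exact solution decays to zero. With a step size outside the method's stability region, the computed values grow instead of decaying.

 import math

 def euler(lam, h, t_end, y0=1.0):
     # Forward Euler for y' = lam * y: each step multiplies y by (1 + h * lam).
     y = y0
     for _ in range(int(round(t_end / h))):
         y += h * lam * y
     return y

 lam, t_end = -15.0, 2.0
 exact = math.exp(lam * t_end)   # exact solution at t = 2
 for h in (0.05, 0.2):
     # Stability of this method on this problem requires |1 + h * lam| <= 1,
     # i.e. h <= 2/15; the step h = 0.2 violates that bound.
     print(h, euler(lam, h, t_end), exact)

With h = 0.2 each step multiplies the numerical solution by -2, so after ten steps it has grown by a factor of about 1000, while the exact solution is essentially zero.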

Conditions which might cause large deviation include:

* Round-off errors
* Small yet critical fluctuations in initial data

These effects are related and cumulative.

Some numerical algorithms may damp out the small fluctuations (errors) in the input data.

Others might magnify such errors.

Calculations that can be proven not to magnify approximation errors are called numerically stable: such calculations possess numerical stability.
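
For example (an illustration added here, assuming ordinary double-precision arithmetic), the two functions below are algebraically identical, but the first magnifies rounding errors by subtracting two nearly equal numbers, while the second avoids the cancellation and is numerically stable.

 import math

 def f_unstable(x):
     # Subtracting two nearly equal square roots cancels most significant digits.
     return math.sqrt(x + 1) - math.sqrt(x)

 def f_stable(x):
     # Algebraically equivalent rewriting that avoids the subtraction.
     return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

 x = 1.0e14
 print(f_unstable(x))   # only a couple of significant digits are correct
 print(f_stable(x))     # accurate to nearly full double precision

Losses of this kind are known as catastrophic cancellation; rewriting the formula, as in f_stable above, is the standard remedy.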

Robustness

One of the common tasks of numerical analysis is to try to select algorithms that do not produce a wildly different result for a very small change in the input data.

Such algorithms are robust; they have the quality of robustness.

Instability

The opposite phenomenon is instability. Typically, an algorithm involves an approximate method, and in some cases one could prove that the algorithm would approach the right solution in some limit (for instance, when using exact real-number arithmetic rather than floating point). Even in this case, there is no guarantee that it would converge to the correct solution, because floating-point round-off or truncation errors can be magnified instead of damped, causing the deviation from the exact solution to grow exponentially.
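
A standard illustration of this exponential growth (a sketch added here, not taken from the original article) is the sequence of integrals I_n of x^n * e^(x - 1) over [0, 1], which satisfies the recurrence I_n = 1 - n * I_(n-1). Running the recurrence forward multiplies the initial rounding error by n factorial, while running it backward from a crude guess damps the error at every step.

 import math

 # I_n = integral of x**n * exp(x - 1) over [0, 1]; satisfies I_n = 1 - n * I_{n-1}.
 # Every true I_n lies strictly between 0 and 1 and decreases toward 0.

 def forward(n):
     # Unstable: the rounding error in I_0 is multiplied by k at every step,
     # i.e. by n! overall, so the result soon becomes meaningless.
     i = 1.0 - math.exp(-1.0)        # I_0 = 1 - 1/e
     for k in range(1, n + 1):
         i = 1.0 - k * i
     return i

 def backward(n, start=30):
     # Stable: start from the crude guess I_start = 0 and recurse downward;
     # the error of the guess is divided by k at every step.
     i = 0.0
     for k in range(start, n, -1):
         i = (1.0 - i) / k
     return i

 for n in (10, 15, 20):
     print(n, forward(n), backward(n))

By n = 20 the forward recurrence no longer produces a usable value, while the backward recurrence still agrees with the true I_n to almost full precision.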

See also

* Mathematics
* Numerical analysis
* Round-off error
* Stability theory

External links