Numer. Math. 28, 211-219 (1977)
Numerische Mathematik
© by Springer-Verlag 1977
Towards a Formal Definition of Numerical Stability
Lieuwe Sytse de Jong
Computing Centre, University of Technology, P.O. Box 513, Eindhoven 4501, The Netherlands
Summary. A formal definition of numerical stability is proposed. Techniques for proving numerical instability are demonstrated.
1. Introduction
Since the development of digital computers numerous algorithms have been proposed for solving all kinds of problems. It is well known that, if the computations are performed with finite precision, some algorithms are satisfactory for all inputs provided only that the machine precision is high enough, but that other algorithms are unsatisfactory for some inputs even if the machine precision is drastically increased. This phenomenon is described as numerical instability. It has been (and remains) one of the goals of a numerical analyst to propose algorithms that are numerically stable. Forward, backward and mixed error analysis are recognized techniques for proving numerical stability. More and more, however, numerical analysts study algorithms that originated in other parts of mathematics, and it may be necessary to give a convincing proof of the instability of an algorithm before a numerically stable algorithm can be proposed [1]. The proof that an algorithm is numerically unstable is often more complicated than the proof that an algorithm is numerically stable. This is the main reason for the investigations reported in this article.
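As an illustration of this phenomenon (the example is not from the paper), consider the classical forward recurrence for the integrals I_n = ∫₀¹ xⁿ e^(x-1) dx. Mathematically 0 < I_n < 1/(n+1) for all n, yet the recurrence I_n = 1 - n·I_(n-1) amplifies the initial rounding error of I_0 by a factor of n!, so raising the machine precision by any fixed amount cannot save it for large n; this is exactly the "unsatisfactory even if the machine precision is drastically increased" behaviour. A minimal sketch in Python (double precision):

```python
import math

def forward_recurrence(n: int) -> float:
    """Evaluate I_n = 1 - n * I_(n-1), I_0 = 1 - 1/e, in floating point.

    The recurrence is exact for the integrals I_n = int_0^1 x^n e^(x-1) dx,
    but the rounding error committed at step k is multiplied by
    (k+1)(k+2)...n in the following steps, i.e. amplified factorially.
    """
    i = 1.0 - 1.0 / math.e  # I_0, already rounded to machine precision
    for k in range(1, n + 1):
        i = 1.0 - k * i
    return i

approx = forward_recurrence(20)
# The true I_20 lies in (0, 1/21); the computed value lies far outside
# that interval, because the initial error has been multiplied by ~20!.
print(approx)
```

With about 16 decimal digits of precision the error in I_0 is of order 1e-16, and 20! ≈ 2.4e18, so the computed I_20 is wrong even in its sign and order of magnitude.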
Our formal definition of numerical stability is based upon the intuitive concept of numerical stability as found in J. Stoer's "Einführung in die Numerische Mathematik I": an algorithm is numerically stable on some input set if the effect of round-off errors in the computations is comparable to the effect of round-off errors in the input data and the output data, independent of the particular input element.
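To make this intuitive criterion concrete, here is a small illustrative sketch (the function and the input value are chosen for demonstration and are not from the paper). For f(x) = √(x+1) - √x at large x, the naive evaluation suffers cancellation and loses far more accuracy than rounding of the input data could explain, so by the above criterion it is unstable there; the algebraically equivalent form 1/(√(x+1) + √x) keeps the error comparable to the input/output rounding level:

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50  # high-precision reference arithmetic

x = 1e12  # exactly representable double-precision input

naive = math.sqrt(x + 1) - math.sqrt(x)           # cancellation of ~10 digits
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))  # algebraically identical

# Reference value of sqrt(x+1) - sqrt(x) to 50 digits.
d = Decimal(int(x))
reference = (d + 1).sqrt() - d.sqrt()

rel_naive = abs((Decimal(naive) - reference) / reference)
rel_stable = abs((Decimal(stable) - reference) / reference)

# Rounding the input and output alone perturbs the result only at the
# level of the machine precision (~1e-16); the naive evaluation is off
# by many orders of magnitude more, the rewritten form is not.
print(rel_naive, rel_stable)
```

In Stoer's terms: the round-off effect of the rewritten algorithm is comparable to the unavoidable effect of rounding the data, while that of the naive algorithm is not.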
2. Definition of Numerical Stability
We consider digital computers with floating point arithmetic and relative machine precision η > 0. Given any machine we suppose that floating point operations are