In 1952 NBS researchers published an algorithm for solving systems of linear equations that would come to prominence only two decades later, as a standard method for solving very large sparse systems on supercomputers. Its impact has been so great that in 1999 one magazine listed it as one of the Top 10 Algorithms of the Century.
The advent of electronic computers in the middle of the 20th century stimulated a flurry of activity in developing numerical algorithms that could be applied to computational problems much more difficult than those solved in the past, especially algorithms for the solution of linear systems and matrix eigenvalue problems. Such problems arise in nearly all application areas, including modeling of fluid flows, reservoir engineering, mechanical engineering, semiconductor device analysis, nuclear reaction modeling, and electric circuit simulation. The matrices involved can be huge, with millions of unknown quantities to be determined.
At this time, there were two commonly used types of algorithms for solving linear systems. The first, exemplified by Gaussian elimination, modified a tableau of matrix entries in a systematic way in order to compute the solution. These methods terminated in a finite number of steps, but required a rather large amount of computational effort, with work growing as the cube of the number of unknowns. The second type used "relaxation techniques" to develop a sequence of approximations converging to the solution. Although convergence was often slow, these algorithms could be terminated, often with a reasonably accurate solution estimate, whenever the human "computers" ran out of time. The ideal algorithm would be one that had finite termination but, if stopped early, would give a useful approximate solution.
Magnus Hestenes of the NBS Institute for Numerical Analysis and Eduard Stiefel, a visitor from ETH Zurich, succeeded in developing an algorithm with exactly these characteristics: the method of conjugate gradients [2]. The algorithm garnered considerable early attention but went into eclipse in the 1960s as naive implementations proved unsuccessful on some of the ever-larger problems being attempted. In the early 1970s attention returned to the algorithm, and since then it has been a topic of intense research. Today it is the standard algorithm for solving linear systems involving large, sparse, symmetric (or Hermitian) matrices.
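The textbook form of the method is remarkably compact. The following sketch (a modern Python/NumPy rendering, not code from the original paper) illustrates both properties described above: in exact arithmetic it terminates after at most n steps for an n-unknown system, yet the loop can be stopped early with a useful approximate solution in hand.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive definite matrix A.

    A textbook conjugate gradient sketch: each iteration needs only one
    matrix-vector product, which is what makes the method attractive for
    large sparse systems.
    """
    n = len(b)
    if max_iter is None:
        max_iter = n            # exact arithmetic terminates within n steps
    x = np.zeros(n)
    r = b - A @ x               # residual of the current approximation
    p = r.copy()                # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # early stop: x is already a useful estimate
            break
        p = r + (rs_new / rs_old) * p  # next direction, conjugate to the previous ones
        rs_old = rs_new
    return x

# Small symmetric positive definite example
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For this 2-by-2 system the loop converges in at most two iterations, consistent with the finite-termination property that distinguished the method from pure relaxation schemes.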
Today the conjugate gradient method and its many variants are classified under the term Krylov Subspace Iteration, and the method described here was the first of this class developed for solving linear systems. In 1999, Computing in Science and Engineering, a publication of the IEEE Computer Society and the American Institute of Physics, named Krylov Subspace Iteration as one of the Top 10 Algorithms of the Century [3], citing in particular the pioneering work of Hestenes and Stiefel, along with Cornelius Lanczos, another NBS/INA researcher.