Matrices, Moments and Quadrature with Applications

Gene H. Golub
Gérard Meurant
Copyright Date: 2010
Pages: 376
https://www.jstor.org/stable/j.ctt7tbvs
    Book Description:

    This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part.

    Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization.

    This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.

    eISBN: 978-1-4008-3388-7
    Subjects: Mathematics, Technology

Table of Contents

  1. Front Matter
    (pp. i-vi)
  2. Table of Contents
    (pp. vii-x)
  3. Preface
    (pp. xi-xii)
    Gérard Meurant
  4. PART 1. THEORY

    • Chapter One Introduction
      (pp. 3-7)

      The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and conjugate gradient algorithms. Even though we recall the mathematical basis of the algorithms, this book is computationally oriented. The main goal is to obtain efficient numerical methods to estimate or in some cases to bound quantities like $I[f] = u^T f(A)\,v$ where $u$ and $v$ are given vectors, $A$ is a symmetric nonsingular matrix and $f$ is a smooth function. The main idea developed in this book is to write $I[f]$ as a Riemann...
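
      As a sketch of the idea (for the case $u = v$, assuming the spectral decomposition $A = Q\Lambda Q^T$ with eigenvalues $\lambda_i$ and orthonormal eigenvectors $q_i$), the quadratic form becomes a Riemann–Stieltjes integral

      $u^T f(A)\,u\; = \;\sum_{i=1}^n f(\lambda_i)\,(q_i^T u)^2\; = \;\int_a^b f(\lambda)\,d\alpha(\lambda)$,

      where $\alpha$ is the piecewise constant measure whose jump at $\lambda_i$ is $(q_i^T u)^2$; quadrature rules for this integral then yield the desired estimates and bounds.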

    • Chapter Two Orthogonal Polynomials
      (pp. 8-23)

      In this chapter, we briefly recall the properties of orthogonal polynomials which will be needed in the next chapters. We are mainly interested in polynomials of a real variable defined in an interval of the real line. For more details, see the book by Szegö [323] or the book by Chihara [64], and also the paper [65] for theoretical results on classical orthogonal polynomials and the nice book by Gautschi [131] for the computational aspects.

      We will define orthogonal polynomials on either a finite or an infinite interval $[a,b]$ of the real line. We first have to define orthogonality. For...
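
      Such families satisfy a three-term recurrence (the property chapter 3 ties to Jacobi matrices). As a minimal sketch, the following evaluates monic orthogonal polynomials from given recurrence coefficients, using the monic Chebyshev coefficients purely as an illustration (names and normalization are ours, not the book's):

        def three_term_eval(alphas, betas, x):
            # Evaluate monic orthogonal polynomials p_0, ..., p_k at x via
            #   p_{j+1}(x) = (x - alphas[j]) * p_j(x) - betas[j] * p_{j-1}(x),
            # with p_{-1} = 0 and p_0 = 1; betas[0] is never used since it
            # multiplies p_{-1} = 0.
            p_prev, p = 0.0, 1.0
            values = [p]
            for a, b in zip(alphas, betas):
                p_prev, p = p, (x - a) * p - b * p_prev
                values.append(p)
            return values

        # Illustration: monic Chebyshev polynomials of the first kind on [-1, 1],
        # where alpha_j = 0, beta_1 = 1/2 and beta_j = 1/4 for j >= 2.
        k = 4
        print(three_term_eval([0.0] * k, [0.0, 0.5, 0.25, 0.25], x=0.3))
        # ~ [1.0, 0.3, -0.41, -0.198, 0.0431]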

    • Chapter Three Properties of Tridiagonal Matrices
      (pp. 24-38)

      We have seen that the tridiagonal Jacobi matrices are closely linked to orthogonal polynomials since they describe the three-term recurrence satisfied by these polynomials. We will see in chapter 4 that they are also key ingredients in the Lanczos and conjugate gradient algorithms. In this chapter we summarize some properties of tridiagonal matrices that will be useful in the next chapters.

      Let us consider a nonsymmetric tridiagonal matrix of order $k$, with real coefficients

      $T_k\; = \;\begin{pmatrix} \alpha_1 & \omega_1 & & & \\ \beta_1 & \alpha_2 & \omega_2 & & \\ & \ddots & \ddots & \ddots & \\ & & \beta_{k-2} & \alpha_{k-1} & \omega_{k-1} \\ & & & \beta_{k-1} & \alpha_k \end{pmatrix}$,

      and $\beta_i \ne \omega_i$, $i = 1, \ldots, k-1$. Then we have the following result.

      PROPOSITION 3.1 Assume that the coefficients $\omega_j$, $j = 1, \ldots, k-1$, are different from zero and that the products $\beta_j \omega_j$ are positive. Then,...
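
      Although the preview is cut off, the classical conclusion under these hypotheses is that $T_k$ is diagonally similar to a symmetric tridiagonal matrix whose off-diagonal entries are $\sqrt{\beta_j \omega_j}$, so its eigenvalues are real. A minimal sketch of that similarity (illustrative code, assuming this standard result):

        import numpy as np

        def symmetrize_tridiag(alpha, beta, omega):
            # Given T_k with diagonal alpha, subdiagonal beta, superdiagonal
            # omega (beta_j * omega_j > 0), return the symmetric tridiagonal
            # matrix D T_k D^{-1}, whose off-diagonal entries are
            # sqrt(beta_j * omega_j). A sketch of the diagonal-similarity fact.
            eta = np.sqrt(np.asarray(beta) * np.asarray(omega))
            return np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1)

        alpha = [2.0, 3.0, 4.0]
        beta, omega = [1.0, 2.0], [4.0, 0.5]
        S = symmetrize_tridiag(alpha, beta, omega)
        T = np.diag(alpha) + np.diag(beta, -1) + np.diag(omega, 1)
        # Similarity preserves the spectrum, so the eigenvalues agree:
        print(np.sort(np.linalg.eigvals(T).real), np.sort(np.linalg.eigvalsh(S)))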

    • Chapter Four The Lanczos and Conjugate Gradient Algorithms
      (pp. 39-54)

      In this chapter we introduce the Lanczos algorithm for symmetric matrices as well as its block version and also the nonsymmetric Lanczos algorithm. These algorithms, which were devised to compute eigenvalues or to solve linear systems, are closely related to the moment problem and they will be used to compute quadrature formulas and to estimate bilinear forms. The Lanczos algorithm provides examples of orthonormal polynomials related to a discrete (usually unknown) measure. Moreover, we describe the Golub–Kahan bidiagonalization algorithms which are special versions of the Lanczos algorithm for the matrices $AA^T$ or $A^TA$. This is useful when solving least...
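
      As a minimal sketch of the basic symmetric Lanczos iteration (in exact arithmetic, without the reorthogonalization needed in practice; variable names are illustrative):

        import numpy as np

        def lanczos(A, v, k):
            # k steps of the symmetric Lanczos algorithm: returns the k x k
            # tridiagonal Jacobi matrix J_k and the Lanczos vectors. In exact
            # arithmetic V is orthonormal and V^T A V = J_k; this sketch does
            # no reorthogonalization, so finite-precision behavior differs.
            n = len(v)
            V = np.zeros((n, k))
            alpha, eta = np.zeros(k), np.zeros(k - 1)
            V[:, 0] = v / np.linalg.norm(v)
            for j in range(k):
                w = A @ V[:, j]
                if j > 0:
                    w -= eta[j - 1] * V[:, j - 1]
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j < k - 1:
                    eta[j] = np.linalg.norm(w)
                    V[:, j + 1] = w / eta[j]
            return np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1), V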

    • Chapter Five Computation of the Jacobi Matrices
      (pp. 55-83)

      We have seen in chapter 2 that we know the coefficients of the three-term recurrence for the classical orthogonal polynomials. In other cases, we have to compute these coefficients from some other source of information. There are many circumstances in which one wants to determine the coefficients of the three-term recurrence (that is, the Jacobi matrix $J_k$) of a family of orthogonal polynomials given either the measure $\alpha$, the moments $\mu_k$ defined in equation (2.2) or the nodes and weights of a quadrature formula. We will see some examples of applications later in this book.

      A way to compute the coefficients of...
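
      One classical route, sketched below for a discrete measure with nodes $t_j$ and weights $w_j$, is the Stieltjes procedure: alternate inner products against the measure with applications of the recurrence itself (an illustrative sketch, not the book's codes):

        import numpy as np

        def stieltjes(t, w, k):
            # First k diagonal coefficients (alpha_j) and k-1 off-diagonal
            # coefficients (eta_j) of the polynomials orthonormal with respect
            # to the discrete measure with nodes t and weights w (requires
            # k <= len(t)). A sketch of the Stieltjes procedure.
            t, w = np.asarray(t, float), np.asarray(w, float)
            alpha, eta = np.zeros(k), np.zeros(k - 1)
            p_prev = np.zeros_like(t)
            p = np.ones_like(t) / np.sqrt(w.sum())      # orthonormal p_0
            for j in range(k):
                alpha[j] = np.sum(w * t * p * p)
                if j == k - 1:
                    break
                q = (t - alpha[j]) * p - (eta[j - 1] if j > 0 else 0.0) * p_prev
                eta[j] = np.sqrt(np.sum(w * q * q))
                p_prev, p = p, q / eta[j]
            return alpha, eta

        # Example: a 20-point discrete approximation of the uniform measure.
        alpha, eta = stieltjes(np.linspace(-1, 1, 20), np.full(20, 0.1), 5)
        print(alpha)   # ~ 0 by the symmetry of the measure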

    • Chapter Six Gauss Quadrature
      (pp. 84-111)

      Given a measure $\alpha$ on the interval $[a,b]$ and a function $f$ (such that its Riemann–Stieltjes integral and all the moments exist), a quadrature rule is a relation

      $\int_a^b f(\lambda)\;d\alpha\; = \;\sum_{j=1}^N w_j f(t_j)\; + \;R[f]$. (6.1)

      The sum on the right-hand side is the approximation of the integral on the left-hand side and $R[f]$ is the remainder, which is usually not known exactly. The real numbers $t_j$ are the nodes and $w_j$ the weights of the quadrature rule. The rule is said to be of exact degree $d$ if $R[p] = 0$ for all polynomials $p$ of degree $d$ and there are some polynomials $q$ of degree...
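
      Though the preview breaks off here, the central special case deserves a one-line record (a standard fact, stated as background): the Gauss rule with $N$ nodes is exact for polynomials of degree up to $2N - 1$, its nodes $t_j$ are the eigenvalues of the Jacobi matrix $J_N$ of the measure $\alpha$, and its weights are

      $w_j\; = \;\mu_0\,\big(z_j^{(1)}\big)^2,\qquad \mu_0\; = \;\int_a^b d\alpha$,

      where $z_j^{(1)}$ is the first component of the normalized eigenvector of $J_N$ associated with $t_j$ (the Golub–Welsch characterization).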

    • Chapter Seven Bounds for Bilinear Forms ${u^T}f(A)\,v$
      (pp. 112-116)

      As we said in chapter 1, we are interested in computing bounds or approximations for bilinear forms

      $u^T f(A)\,v$, (7.1)

      where $A$ is a symmetric square matrix of order $n$, $u$ and $v$ are given vectors and $f$ is a smooth (possibly $C^\infty$) function on a given interval of the real line. For the relation of this problem to matrix moments, see Golub [140], [141]. There are many different areas of scientific computing where such estimates are required, for instance, solid state physics, physics problems leading to ill-posed linear systems, computing error bounds for iterative methods and so on.

      We will...
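
      One standard reduction is worth recording here (a sketch; it is the usual way to handle $u \ne v$): by the polarization identity,

      $u^T f(A)\,v\; = \;\tfrac{1}{4}\,\big[\,(u + v)^T f(A)\,(u + v)\; - \;(u - v)^T f(A)\,(u - v)\,\big]$,

      so it suffices to be able to estimate quadratic forms $u^T f(A)\,u$.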

    • Chapter Eight Extensions to Nonsymmetric Matrices
      (pp. 117-121)

      When the matrix $A$ is nonsymmetric (and not diagonalizable), there are several equivalent ways to define $f(A)$; see Higham [189] and also Frommer and Simoncini [117]. Some particular functions can be defined from their power series. A general way of defining $f(A)$ is through the Jordan canonical form of $A$, even though this is not a practical means of computing the matrix function. Another definition uses a Hermite interpolation polynomial with interpolation conditions on the derivatives of $f$ at the eigenvalues of $A$. Finally, when the function $f$ is analytic on and inside a closed contour $\Gamma$ that encloses the...
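
      The truncated sentence is heading toward the contour-integral definition; for reference (a standard formula, assuming $f$ analytic on and inside $\Gamma$ with the spectrum of $A$ enclosed by $\Gamma$),

      $f(A)\; = \;\frac{1}{2\pi i}\,\int_\Gamma f(z)\,(zI - A)^{-1}\,dz$.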

    • Chapter Nine Solving Secular Equations
      (pp. 122-136)

      What are secular equations? The term “secular” comes from the Latin “saecularis”, which is related to “saeculum”, meaning “century”. So secular refers to something that is done or happens every century. It is also used to refer to something that is several centuries old. It appeared in mathematics to denote equations related to the motion of planets and celestial mechanics. For instance, it appears in the title of an 1829 paper of A. L. Cauchy (1789–1857), “Sur l’équation à l’aide de laquelle on détermine les inégalités séculaires des mouvements des planètes” (Oeuvres Complètes (IIème Série), v 9...

  5. PART 2. APPLICATIONS

    • Chapter Ten Examples of Gauss Quadrature Rules
      (pp. 139-161)

      Until the 1960s, quadrature rules were computed by hand or using desk calculators and published in the form of tables (see, for instance, the list given in [320]) giving the nodes and weights for a given degree of approximation. It was a great improvement when software appeared allowing the computation of nodes and weights. This can be done by brute force, solving systems of nonlinear equations; see, for instance, the Fortran codes and the tables published in the book of Stroud and Secrest [320]. However, the method of choice today is to compute the nodes and weights using the...
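
      Presumably the route meant here is the eigenvalue approach recorded after chapter 6 above; a hedged sketch of it for Gauss–Legendre, checked against numpy's leggauss (function and variable names are ours):

        import numpy as np

        def gauss_from_jacobi(alpha, eta, mu0):
            # Gauss nodes/weights from the N x N Jacobi matrix with diagonal
            # alpha and off-diagonal eta; mu0 is the total mass of the measure.
            # A sketch of the Golub-Welsch idea: eigenvalues give the nodes,
            # squared first eigenvector components (times mu0) the weights.
            J = np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1)
            nodes, Z = np.linalg.eigh(J)
            return nodes, mu0 * Z[0, :] ** 2

        # Gauss-Legendre on [-1, 1]: alpha_j = 0, eta_j = j / sqrt(4 j^2 - 1),
        # and mu0 = 2.
        N = 5
        j = np.arange(1, N)
        nodes, weights = gauss_from_jacobi(np.zeros(N),
                                           j / np.sqrt(4.0 * j ** 2 - 1.0), 2.0)
        ref_nodes, ref_weights = np.polynomial.legendre.leggauss(N)
        print(np.allclose(nodes, ref_nodes), np.allclose(weights, ref_weights))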

    • Chapter Eleven Bounds and Estimates for Elements of Functions of Matrices
      (pp. 162-199)

      In this chapter we consider the computation of elements of a function $f(A)$ of a symmetric matrix $A$. If $A = Q\Lambda Q^T$ is the spectral decomposition of $A$ with $Q$ orthogonal and $\Lambda$ diagonal, then $f(A) = Q\,f(\Lambda)\,Q^T$. Diagonal elements can be estimated by considering

      $[f(A)]_{i,i}\; = \;(e^i)^T f(A)\,e^i$,

      where $e^i$ is the $i$th column of the identity matrix. Following chapter 7, we can apply the Lanczos algorithm to the matrix $A$ with the starting vector $e^i$. This generates tridiagonal Jacobi matrices $J_k$. The estimate of $[f(A)]_{i,i}$ given by the Gauss quadrature rule at iteration $k$ is $(e^1)^T f(J_k)\,e^1$, that is, the $(1,1)$ entry of the matrix $f(J_k)$....
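
      A self-contained sketch of this recipe (Lanczos started from $e^i$, then the $(1,1)$ entry of $f(J_k)$ obtained from the eigendecomposition of $J_k$), shown for $f(\lambda) = 1/\lambda$ on a random symmetric positive definite test matrix; all names are illustrative:

        import numpy as np

        def fA_diag_estimate(A, i, k, f):
            # Estimate [f(A)]_{ii} by k Lanczos steps started from e^i; the
            # Gauss-rule value is (e^1)^T f(J_k) e^1, computed here through the
            # eigendecomposition of J_k. A sketch, without reorthogonalization.
            n = A.shape[0]
            V = np.zeros((n, k))
            alpha, eta = np.zeros(k), np.zeros(k - 1)
            V[i, 0] = 1.0                          # starting vector e^i
            for j in range(k):
                w = A @ V[:, j] - (eta[j - 1] * V[:, j - 1] if j > 0 else 0.0)
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j < k - 1:
                    eta[j] = np.linalg.norm(w)
                    V[:, j + 1] = w / eta[j]
            theta, Z = np.linalg.eigh(np.diag(alpha) + np.diag(eta, 1)
                                      + np.diag(eta, -1))
            return Z[0, :] @ (f(theta) * Z[0, :])  # sum_j f(theta_j) z_j(1)^2

        rng = np.random.default_rng(0)
        B = rng.standard_normal((100, 100))
        A = B @ B.T + 100.0 * np.eye(100)          # random SPD test matrix
        print(fA_diag_estimate(A, 0, 8, lambda lam: 1.0 / lam))
        print(np.linalg.inv(A)[0, 0])              # exact value for comparison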

    • Chapter Twelve Estimates of Norms of Errors in the Conjugate Gradient Algorithm
      (pp. 200-226)

      In this chapter we study how the techniques for computing bounds of quadratic forms can be applied to the computation of bounds for norms of the error in iterative methods for solving linear systems. We are particularly interested in the conjugate gradient algorithm since we will see that it is closely related to Gauss quadrature.

      Let $A$ be a symmetric positive definite matrix of order $n$ and suppose that an approximate solution $\tilde x$ of the linear system

      $Ax\; = \;c$,

      where $c$ is a given vector, has been computed by either a direct or an iterative method. The residual $r$...
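
      The identity connecting this to chapter 7 is a one-line consequence of $r = c - A\tilde x = A(x - \tilde x)$: the energy norm of the error is itself a quadratic form,

      $\left\| x - \tilde x \right\|_A^2\; = \;(x - \tilde x)^T A\,(x - \tilde x)\; = \;r^T A^{-1} r$,

      so the machinery for bounding $r^T f(A)\,r$ with $f(\lambda) = 1/\lambda$ yields computable bounds for the error norm.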

    • Chapter Thirteen Least Squares Problems
      (pp. 227-255)

      In this chapter we are concerned with the application of the techniques we developed to estimate bilinear forms related to the solution of least squares problems. First we give a brief introduction to least squares problems. For more details see the book by Björck [30].

      Assume we have a data matrix $A$ of dimension $m \times n$ with $m \ge n$ and a vector $c$ of observations, and we want to solve the linear system $Ax \approx c$ in a certain sense. If $c$ is not in the range of $A$, a common way to compute a solution is to solve a least squares (LS) problem,...
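
      A minimal sketch of the basic LS computation $\min_x \left\| c - Ax \right\|_2$ via a thin QR factorization, checked against numpy's lstsq (illustrative, not the book's codes):

        import numpy as np

        rng = np.random.default_rng(1)
        m, n = 20, 5
        A = rng.standard_normal((m, n))
        c = rng.standard_normal(m)

        # Solve min_x ||c - A x||_2 through the thin QR factorization A = QR:
        # x solves R x = Q^T c. This is equivalent to the normal equations
        # A^T A x = A^T c but numerically preferable.
        Q, R = np.linalg.qr(A)
        x = np.linalg.solve(R, Q.T @ c)
        print(np.allclose(x, np.linalg.lstsq(A, c, rcond=None)[0]))   # True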

    • Chapter Fourteen Total Least Squares
      (pp. 256-279)

      In least squares (LS) we have only a perturbation of the right-hand side as in equation (13.3), whereas total least squares (TLS) considers perturbations of the vector of observations $c$ and of the $m \times n$ data matrix $A$. Given two nonsingular diagonal weighting matrices $W_L$ of order $m$ and $W_R$ of order $n + 1$, we consider the problem

      $\min_{E,\,r}\; \left\| W_L\,(E\;\;r)\,W_R \right\|_F$,

      subject to the constraint $(A + E)x = c + r$, which means finding the smallest perturbations $E$ and $r$ such that $c + r$ is in the range of $A + E$. The norm $\left\| \cdot \right\|_F$ is the Frobenius norm, which is the square root of the sum of the squares of all...
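
      In the unweighted case $W_L = I$, $W_R = I$, the classical solution comes from the SVD of the augmented matrix $(A\;\;c)$; a sketch for the generic solvable case (smallest singular value simple and the last component of its right singular vector nonzero):

        import numpy as np

        def tls(A, c):
            # Classical (unweighted) TLS via the SVD of the augmented matrix
            # (A  c): x comes from the right singular vector belonging to the
            # smallest singular value. Assumes the generic case v[n] != 0.
            n = A.shape[1]
            _, _, Vt = np.linalg.svd(np.column_stack([A, c]))
            v = Vt[-1, :]
            return -v[:n] / v[n]

        rng = np.random.default_rng(2)
        A = rng.standard_normal((30, 4))
        x_true = np.arange(1.0, 5.0)
        c = A @ x_true + 0.01 * rng.standard_normal(30)
        print(tls(A, c))   # close to x_true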

    • Chapter Fifteen Discrete Ill-Posed Problems
      (pp. 280-334)

      Problems are said to be ill-posed (in the sense of J. Hadamard) when the solution is not unique or does not depend continuously on the data. More practically, problems are said to be ill-posed when a small change in the data may cause a large change in the solution. Strictly speaking, problems cannot be ill-posed in Hadamard’s sense in finite dimension, but problems whose solution is sensitive to perturbations of the data are called discrete ill-posed problems (DIP); see Hansen [178]. This is typically what may happen in the solution of least squares problems. Looking back at the solution of $Ax \approx c$...
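
      The book description names Tikhonov regularization as the remedy studied here; a minimal sketch of that idea, replacing the LS problem by the penalized problem $\min_x \left\| Ax - c \right\|_2^2 + \mu \left\| x \right\|_2^2$ with parameter $\mu > 0$ (the stacked formulation below is one standard way to solve it; choosing $\mu$ is the delicate part):

        import numpy as np

        def tikhonov(A, c, mu):
            # Solve min_x ||A x - c||_2^2 + mu ||x||_2^2 by stacking: this is
            # the ordinary LS problem with matrix (A ; sqrt(mu) I) and
            # right-hand side (c ; 0). A sketch; the choice of mu is the
            # delicate issue for discrete ill-posed problems.
            n = A.shape[1]
            A_aug = np.vstack([A, np.sqrt(mu) * np.eye(n)])
            c_aug = np.concatenate([c, np.zeros(n)])
            return np.linalg.lstsq(A_aug, c_aug, rcond=None)[0]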

  6. Bibliography
    (pp. 335-360)
  7. Index
    (pp. 361-363)