Matrix Mathematics

Matrix Mathematics: Theory, Facts, and Formulas (Second Edition)

Dennis S. Bernstein
Copyright Date: 2009
Pages: 1184
https://www.jstor.org/stable/j.ctt7t833
    Book Description:

    When first published in 2005, Matrix Mathematics quickly became the essential reference book for users of matrices in all branches of engineering, science, and applied mathematics. In this fully updated and expanded edition, the author brings together the latest results on matrix theory to make this the most complete, current, and easy-to-use book on matrices.

    Each chapter describes relevant background theory followed by specialized results. Hundreds of identities, inequalities, and matrix facts are stated clearly and rigorously with cross references, citations to the literature, and illuminating remarks. Beginning with preliminaries on sets, functions, and relations, Matrix Mathematics covers all of the major topics in matrix theory, including matrix transformations; polynomial matrices; matrix decompositions; generalized inverses; Kronecker and Schur algebra; positive-semidefinite matrices; vector and matrix norms; the matrix exponential and stability theory; and linear systems and control theory. Also included are a detailed list of symbols, a summary of notation and conventions, an extensive bibliography and author index with page references, and an exhaustive subject index. This significantly expanded edition of Matrix Mathematics features a wealth of new material on graphs, scalar identities and inequalities, alternative partial orderings, matrix pencils, finite groups, zeros of multivariable transfer functions, roots of polynomials, convex functions, and matrix norms.

    • Covers hundreds of important and useful results on matrix theory, many never before available in any book
    • Provides a list of symbols and a summary of conventions for easy use
    • Includes an extensive collection of scalar identities and inequalities
    • Features a detailed bibliography and author index with page references
    • Includes an exhaustive subject index with cross-referencing

    eISBN: 978-1-4008-3334-4
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-viii)
  2. Table of Contents
    (pp. ix-xiv)
  3. Preface to the Second Edition
    (pp. xv-xvi)
    Dennis S. Bernstein
  4. Preface to the First Edition
    (pp. xvii-xx)
    Dennis S. Bernstein
  5. Special Symbols
    (pp. xxi-xxxii)
  6. Conventions, Notation, and Terminology
    (pp. xxxiii-xlii)
  7. Chapter One Preliminaries
    (pp. 1-84)

    In this chapter we review some basic terminology and results concerning logic, sets, functions, and related concepts. This material is used throughout the book.

    Every statement is either true or false, but not both. Let A and B be statements. The negation of A is the statement (not A), the both of A and B is the statement (A and B), and the either of A and B is the statement (A or B). The statement (A or B) does not contradict (A and B); that is, the word "or" is inclusive. Exclusive "or" is indicated by the phrase "but...
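    The book's inclusive "or" is exactly the disjunction built into most programming languages. As a minimal illustration (my example, not the book's), the following Python snippet tabulates inclusive "or" next to exclusive "or" for all truth assignments:

```python
# Truth table for inclusive vs. exclusive "or".
# Python's `or` is inclusive, matching the book's convention;
# on booleans, `!=` behaves as exclusive "or".
for A in (False, True):
    for B in (False, True):
        print(f"A={A!s:5} B={B!s:5}  (A or B)={A or B!s:5}  (A xor B)={A != B!s:5}")
```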

  8. Chapter Two Basic Matrix Properties
    (pp. 85-178)

    In this chapter we provide a detailed treatment of the basic properties of matrices such as range, null space, rank, and invertibility. We also consider properties of convex sets, cones, and subspaces.

    The set $\mathbb{F}^n$ consists of vectors $x$ of the form

    $x = \begin{bmatrix} x_{(1)} \\ \vdots \\ x_{(n)} \end{bmatrix}$, (2.1.1)

    where $x_{(1)}, \ldots, x_{(n)} \in \mathbb{F}$ are the components of $x$, and $\mathbb{F}$ represents either $\mathbb{R}$ or $\mathbb{C}$. Hence, the elements of $\mathbb{F}^n$ are column vectors. Since $\mathbb{F}^1 = \mathbb{F}$, it follows that every scalar is also a vector. If $x \in \mathbb{R}^n$ and every component of $x$ is nonnegative, then $x$ is nonnegative, while, if every component of $x$ is positive, then $x$...
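    The componentwise sign conventions are easy to mirror numerically. A small sketch using NumPy (my example, not the book's):

```python
import numpy as np

# A vector in R^3 stored as a 3x1 column, matching (2.1.1).
x = np.array([[1.0], [0.0], [2.5]])

# x is nonnegative if every component is >= 0,
# and positive if every component is > 0.
print(np.all(x >= 0))  # True:  x is nonnegative
print(np.all(x > 0))   # False: x is not positive, since x_(2) = 0
```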

  9. Chapter Three Matrix Classes and Transformations
    (pp. 179-252)

    This chapter presents definitions of various types of matrices as well as transformations for analyzing matrices.

    In this section we categorize various types of matrices based on their algebraic and structural properties.

    The following definition introduces various types of square matrices.

    Definition 3.1.1. For $A \in \mathbb{F}^{n \times n}$ define the following types of matrices (a numerical check of several of these classes appears after the list):

    i) $A$ is group invertible if $R(A) = R(A^2)$.

    ii) $A$ is involutory if $A^2 = I$.

    iii) $A$ is skew involutory if $A^2 = -I$.

    iv) $A$ is idempotent if $A^2 = A$.

    v) $A$ is skew idempotent if $A^2 = -A$.

    vi) $A$ is tripotent if $A^3 = A$.

    vii) $A$ is nilpotent if there exists $k \in \mathbb{P}$ such that...
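    The defining identities above can be verified directly for concrete matrices. A small NumPy sketch (illustrative examples of my own choosing):

```python
import numpy as np

I = np.eye(2)

# Idempotent: A^2 = A (an oblique projector).
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(A @ A, A))                 # True

# Involutory: B^2 = I (a reflection).
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.allclose(B @ B, I))                 # True

# Nilpotent: N^k = 0 for some positive integer k (here k = 2).
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(N @ N, np.zeros((2, 2))))  # True

# Tripotent: C^3 = C.
C = np.diag([1.0, -1.0])
print(np.allclose(C @ C @ C, C))             # True
```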

  10. Chapter Four Polynomial Matrices and Rational Transfer Functions
    (pp. 253-308)

    In this chapter we consider matrices whose entries are polynomials or rational functions. The decomposition of polynomial matrices in terms of the Smith form provides the foundation for developing canonical forms in Chapter 5. In this chapter we also present some basic properties of eigenvalues and eigenvectors as well as the minimal and characteristic polynomials of a square matrix. Finally, we consider the extension of the Smith form to the Smith-McMillan form for rational transfer functions.

    A function $p: \mathbb{C} \mapsto \mathbb{C}$ of the form

    $p(s) = \beta_k s^k + \beta_{k-1} s^{k-1} + \cdots + \beta_1 s + \beta_0$, (4.1.1)

    where $k \in \mathbb{N}$ and $\beta_0, \ldots, \beta_k \in \mathbb{F}$, is a polynomial. The set of polynomials is denoted by $\mathbb{F}[s]$. If...
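    Polynomials in the coefficient form (4.1.1) map directly onto numpy.polynomial, which stores coefficients in ascending order $\beta_0, \ldots, \beta_k$. A brief sketch (my example):

```python
import numpy as np

# p(s) = 2s^3 - s + 5, stored by ascending coefficients beta_0, ..., beta_k.
p = np.polynomial.Polynomial([5.0, -1.0, 0.0, 2.0])

print(p(2.0))     # 19.0 = 2*8 - 2 + 5
print(p.roots())  # the three roots of p, possibly complex
```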

  11. Chapter Five Matrix Decompositions
    (pp. 309-396)

    In this chapter we present several matrix decompositions, namely, the Smith, multicompanion, elementary multicompanion, hypercompanion, Jordan, Schur, and singular value decompositions.

    The first decomposition involves rectangular matrices subject to a biequivalence transformation. This result is the specialization of the Smith decomposition given by Theorem 4.3.2 to constant matrices.

    Theorem 5.1.1. Let $A \in \mathbb{F}^{n \times m}$ and $r \triangleq \operatorname{rank} A$. Then, there exist nonsingular matrices $S_1 \in \mathbb{F}^{n \times n}$ and $S_2 \in \mathbb{F}^{m \times m}$ such that

    $A = S_1 \begin{bmatrix} I_r & 0_{r \times (m-r)} \\ 0_{(n-r) \times r} & 0_{(n-r) \times (m-r)} \end{bmatrix} S_2$. (5.1.1)

    Corollary 5.1.2. Let $A, B \in \mathbb{F}^{n \times m}$. Then, $A$ and $B$ are biequivalent if and only if $A$ and $B$ have the same Smith form.

    Proposition 5.1.3. Let $A, B \in \mathbb{F}^{n \times m}$. Then, the following statements hold:

    i) $A$ and...
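    One concrete way to realize the factorization (5.1.1) is through the singular value decomposition: fold the positive singular values into $S_1$ and pad with ones to keep it nonsingular. A sketch under that assumption (the book's proof may proceed differently):

```python
import numpy as np

def biequivalence_factors(A, tol=1e-12):
    """Nonsingular S1, S2 with A = S1 @ [[I_r, 0], [0, 0]] @ S2, built from the SVD."""
    n, m = A.shape
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    d = np.ones(n)
    d[:r] = s[:r]
    S1 = U * d          # equals U @ diag(d), nonsingular
    return S1, Vh, r

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1
S1, S2, r = biequivalence_factors(A)
core = np.zeros(A.shape)
core[:r, :r] = np.eye(r)               # the block matrix in (5.1.1)
print(np.allclose(S1 @ core @ S2, A))  # True
```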

  12. Chapter Six Generalized Inverses
    (pp. 397-438)

    Generalized inverses provide a useful extension of the matrix inverse to singular matrices and to rectangular matrices that are neither left nor right invertible.

    Let $A \in \mathbb{F}^{n \times m}$. If $A$ is nonzero, then, by the singular value decomposition (Theorem 5.6.3), there exist unitary matrices $S_1 \in \mathbb{F}^{n \times n}$ and $S_2 \in \mathbb{F}^{m \times m}$ such that

    $A = S_1 \begin{bmatrix} B & 0_{r \times (m-r)} \\ 0_{(n-r) \times r} & 0_{(n-r) \times (m-r)} \end{bmatrix} S_2$, (6.1.1)

    where $B \triangleq \operatorname{diag}[\sigma_1(A), \ldots, \sigma_r(A)]$, $r \triangleq \operatorname{rank} A$, and $\sigma_1(A) \geq \sigma_2(A) \geq \cdots \geq \sigma_r(A) > 0$ are the positive singular values of $A$. In (6.1.1), some of the bordering zero matrices may be empty. Then, the (Moore-Penrose) generalized inverse $A^+$ of $A$ is the $m \times n$ matrix

    $A^+ \triangleq S_2^* \begin{bmatrix} B^{-1} & 0_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{bmatrix} S_1^*$. (6.1.2)

    If $A = 0_{n \times m}$, then $A^+ \triangleq 0_{m \times n}$, while, if $m = n$ and $\det A \neq 0$, then $A^+ = A^{-1}$. In general,...
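    Assembling $A^+$ from an SVD as in (6.1.1) and (6.1.2) takes only a few lines, and the result can be cross-checked against NumPy's built-in pinv. A sketch (my construction, following the displayed formulas):

```python
import numpy as np

def pinv_from_svd(A, tol=1e-12):
    """Moore-Penrose generalized inverse assembled as in (6.1.2)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=True)  # A = U [B 0; 0 0] Vh
    n, m = A.shape
    r = int(np.sum(s > tol))
    core = np.zeros((m, n))
    core[:r, :r] = np.diag(1.0 / s[:r])              # the B^{-1} block
    return Vh.conj().T @ core @ U.conj().T           # S2* [.] S1*

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
Ap = pinv_from_svd(A)
print(np.allclose(Ap, np.linalg.pinv(A)))  # True
print(np.allclose(A @ Ap @ A, A))          # a defining Penrose identity
```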

  13. Chapter Seven Kronecker and Schur Algebra
    (pp. 439-458)

    In this chapter we introduce Kronecker matrix algebra, which is useful for solving linear matrix equations.

    For $A \in \mathbb{F}^{n \times m}$ define the vec operator as

    $\operatorname{vec} A \triangleq \begin{bmatrix} \operatorname{col}_1(A) \\ \vdots \\ \operatorname{col}_m(A) \end{bmatrix} \in \mathbb{F}^{nm}$, (7.1.1)

    which is the column vector of size $nm \times 1$ obtained by stacking the columns of $A$. We recover $A$ from $\operatorname{vec} A$ by writing

    $A = \operatorname{vec}^{-1}(\operatorname{vec} A)$. (7.1.2)

    Proposition 7.1.1. Let $A \in \mathbb{F}^{n \times m}$ and $B \in \mathbb{F}^{m \times n}$. Then,

    $\operatorname{tr} AB = (\operatorname{vec} A^{\mathrm{T}})^{\mathrm{T}} \operatorname{vec} B = (\operatorname{vec} B^{\mathrm{T}})^{\mathrm{T}} \operatorname{vec} A$. (7.1.3)

    Proof. Note that

    $\operatorname{tr} AB = \sum_{i=1}^n \operatorname{row}_i(A) \operatorname{col}_i(B)$.
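    Identity (7.1.3) is easy to confirm numerically; note that vec corresponds to column-major ("Fortran") flattening. A quick check (my example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# vec stacks the columns of a matrix, i.e. column-major order.
vec = lambda M: M.flatten(order="F")

lhs = np.trace(A @ B)
rhs = vec(A.T) @ vec(B)      # (vec A^T)^T vec B, as in (7.1.3)
print(np.isclose(lhs, rhs))  # True
```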

    Next, we introduce the Kronecker product.

    Definition 7.1.2. Let $A \in \mathbb{F}^{n \times m}$ and $B \in \mathbb{F}^{l \times k}$. Then, the Kronecker product $A \otimes B \in \mathbb{F}^{nl \times mk}$ of $A$ and $B$ is the partitioned matrix

    $A \otimes B \triangleq \begin{bmatrix} A_{(1,1)}B & A_{(1,2)}B & \cdots & A_{(1,m)}B \\ \vdots & \vdots & \ddots & \vdots \\ A_{(n,1)}B & A_{(n,2)}B & \cdots & A_{(n,m)}B \end{bmatrix}$. (7.1.4)

    Unlike matrix multiplication, the Kronecker product $A \otimes B$ does not entail...
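    NumPy's kron implements exactly the partitioned form (7.1.4). A short sketch (my example) also shows that the product is generally not commutative:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(np.kron(A, B))  # the 4x4 partitioned matrix of (7.1.4)

# A (x) B and B (x) A generally differ (they agree only up to
# row and column permutations).
print(np.array_equal(np.kron(A, B), np.kron(B, A)))  # False
```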

  14. Chapter Eight Positive-Semidefinite Matrices
    (pp. 459-596)

    In this chapter we focus on positive-semidefinite and positive-definite matrices. These matrices arise in a variety of applications, such as covariance analysis in signal processing and controllability analysis in linear system theory, and they have many special properties.

    Let $A \in \mathbb{F}^{n \times n}$ be a Hermitian matrix. As shown in Corollary 5.4.5, $A$ is unitarily similar to a real diagonal matrix whose diagonal entries are the eigenvalues of $A$. We denote these eigenvalues by $\lambda_1, \ldots, \lambda_n$ or, for clarity, by $\lambda_1(A), \ldots, \lambda_n(A)$. As in Chapter 4, we employ the convention

    $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, (8.1.1)

    and, for convenience, we define

    $\lambda_{\max}(A) \triangleq \lambda_1, \quad \lambda_{\min}(A) \triangleq \lambda_n$. (8.1.2)

    Then, $A$ is positive semidefinite if...
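    For a Hermitian matrix the eigenvalues are real and can be computed with a symmetric eigensolver; note that NumPy returns them in ascending order, the reverse of convention (8.1.1). A sketch (my example), including the standard semidefiniteness test $\lambda_{\min}(A) \geq 0$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # real symmetric, hence Hermitian

# eigvalsh returns ascending eigenvalues; reverse to match (8.1.1).
lam = np.linalg.eigvalsh(A)[::-1]
lam_max, lam_min = lam[0], lam[-1]

print(lam)           # [3. 1.]
print(lam_min >= 0)  # True: A is positive semidefinite
```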

  15. Chapter Nine Norms
    (pp. 597-680)

    Norms are used to quantify vectors and matrices, and they play a basic role in convergence analysis. This chapter introduces vector and matrix norms and their properties.

    For many applications it is useful to have a scalar measure of the magnitude of a vector $x$ or a matrix $A$. Norms provide such measures.

    Definition 9.1.1. A norm $\|\cdot\|$ on $\mathbb{F}^n$ is a function $\|\cdot\|: \mathbb{F}^n \mapsto [0, \infty)$ that satisfies the following conditions:

    i) $\|x\| \geq 0$ for all $x \in \mathbb{F}^n$.

    ii) $\|x\| = 0$ if and only if $x = 0$.

    iii) $\|\alpha x\| = |\alpha| \, \|x\|$ for all $\alpha \in \mathbb{F}$ and $x \in \mathbb{F}^n$.

    iv) $\|x + y\| \leq \|x\| + \|y\|$ for all $x, y \in \mathbb{F}^n$.

    Condition iv) is the triangle inequality.

    A...
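    The four conditions of Definition 9.1.1 can be spot-checked for the Euclidean norm on sample vectors. A minimal sketch (my example; a check on samples, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
alpha = -2.5

norm = np.linalg.norm  # Euclidean norm on R^5

print(norm(x) >= 0)                                       # i)   nonnegativity
print(norm(np.zeros(5)) == 0)                             # ii)  zero vector
print(np.isclose(norm(alpha * x), abs(alpha) * norm(x)))  # iii) homogeneity
print(norm(x + y) <= norm(x) + norm(y))                   # iv)  triangle inequality
```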

  16. Chapter Ten Functions of Matrices and Their Derivatives
    (pp. 681-706)

    The norms discussed in Chapter 9 provide the foundation for the development in this chapter of some basic results in topology and analysis.

    Let $\|\cdot\|$ be a norm on $\mathbb{F}^n$, let $x \in \mathbb{F}^n$, and let $\varepsilon > 0$. Then, define the open ball of radius $\varepsilon$ centered at $x$ by

    $\mathbb{B}_\varepsilon(x) \triangleq \{ y \in \mathbb{F}^n : \|x - y\| < \varepsilon \}$ (10.1.1)

    and the sphere of radius $\varepsilon$ centered at $x$ by

    $\mathbb{S}_\varepsilon(x) \triangleq \{ y \in \mathbb{F}^n : \|x - y\| = \varepsilon \}$. (10.1.2)

    Definition 10.1.1. Let $S \subseteq \mathbb{F}^n$. The vector $x \in S$ is an interior point of $S$ if there exists $\varepsilon > 0$ such that $\mathbb{B}_\varepsilon(x) \subseteq S$. The interior of $S$ is the set

    $\operatorname{int} S \triangleq \{ x \in S : x \text{ is an interior point of } S \}$. (10.1.3)

    Finally, $S$ is open if every element of $S$ is...
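    Membership in the open ball (10.1.1) is a strict-inequality test, which excludes the sphere (10.1.2). A tiny sketch (my example, Euclidean norm):

```python
import numpy as np

def in_open_ball(y, x, eps):
    """Test whether y lies in B_eps(x) under the Euclidean norm, as in (10.1.1)."""
    return np.linalg.norm(x - y) < eps

x = np.zeros(2)
print(in_open_ball(np.array([0.3, 0.4]), x, 1.0))  # True:  ||x - y|| = 0.5 < 1
print(in_open_ball(np.array([0.6, 0.8]), x, 1.0))  # False: ||x - y|| = 1, on the sphere
```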

  17. Chapter Eleven The Matrix Exponential and Stability Theory
    (pp. 707-794)

    The matrix exponential function is fundamental to the study of linear ordinary differential equations. This chapter focuses on the properties of the matrix exponential as well as on stability theory.

    The scalar initial value problem

    $\dot x(t) = ax(t)$, (11.1.1)

    $x(0) = x_0$, (11.1.2)

    where $t \in [0, \infty)$ and $a, x(t) \in \mathbb{R}$, has the solution

    $x(t) = e^{at} x_0$, (11.1.3)

    where $t \in [0, \infty)$. We are interested in systems of linear differential equations of the form

    $\dot x(t) = Ax(t)$, (11.1.4)

    $x(0) = x_0$, (11.1.5)

    where $t \in [0, \infty)$, $x(t) \in \mathbb{R}^n$, and $A \in \mathbb{R}^{n \times n}$. Here $\dot x(t)$ denotes $\tfrac{\mathrm{d}x(t)}{\mathrm{d}t}$, where the derivative is one sided for $t = 0$ and two sided for $t > 0$. The solution of (11.1.4), (11.1.5) is given by

    $x(t) = e^{tA} x_0$, (11.1.6)

    where $t \in [0, \infty)$ and...
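    The matrix exponential solution (11.1.6) can be evaluated numerically with SciPy's expm. A short sketch (my example, the harmonic oscillator):

```python
import numpy as np
from scipy.linalg import expm

# Harmonic oscillator: x1' = x2, x2' = -x1.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])

t = 1.0
x_t = expm(t * A) @ x0  # x(t) = e^{tA} x0, as in (11.1.6)

print(x_t)                                        # [cos(1), -sin(1)]
print(np.allclose(x_t, [np.cos(t), -np.sin(t)]))  # True
```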

  18. Chapter Twelve Linear Systems and Control Theory
    (pp. 795-880)

    This chapter considers linear state space systems with inputs and outputs. These systems are considered in both the time domain and frequency (Laplace) domain. Some basic results in control theory are also presented.

    Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$, and, for $t \geq t_0$, consider the state equation

    $\dot x(t) = Ax(t) + Bu(t)$, (12.1.1)

    with the initial condition

    $x(t_0) = x_0$. (12.1.2)

    In (12.1.1), $x: [0, \infty) \mapsto \mathbb{R}^n$ is the state, and $u: [0, \infty) \mapsto \mathbb{R}^m$ is the input. The function $x$ is called the solution of (12.1.1).

    The following result gives the solution of (12.1.1), known as the variation of constants formula.

    Proposition 12.1.1. For $t \geq t_0$ the state $x(t)$ of the dynamical equation (12.1.1) with...
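    In its standard form, the variation of constants formula reads $x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - s)} B u(s)\, \mathrm{d}s$. A numerical sketch approximating the integral by the trapezoidal rule (my example, a double integrator with constant input):

```python
import numpy as np
from scipy.linalg import expm

# Double integrator driven by the constant input u(t) = 1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([0.0, 0.0])
t0, t = 0.0, 2.0
u = lambda s: np.array([1.0])

# x(t) = e^{A(t-t0)} x0 + integral_{t0}^{t} e^{A(t-s)} B u(s) ds,
# with the integral approximated on a uniform grid (trapezoidal rule).
s_grid = np.linspace(t0, t, 2001)
ds = s_grid[1] - s_grid[0]
vals = np.array([expm(A * (t - s)) @ B @ u(s) for s in s_grid])
integral = ds * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])

x_t = expm(A * (t - t0)) @ x0 + integral
print(x_t)  # approximately [t^2/2, t] = [2, 2]
```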

  19. Bibliography
    (pp. 881-966)
  20. Author Index
    (pp. 967-978)
  21. Index
    (pp. 979-1139)