Vectors and Matrices

CYRUS COLTON MACDUFFEE
Volume: 7
Copyright Date: 1943
Edition: 1
Pages: 216
https://www.jstor.org/stable/10.4169/j.ctt5hh912
    Book Description:

    The MAA is pleased to re-issue the early Carus Mathematical Monographs in ebook format. Readers with an interest in the history of the undergraduate curriculum or the history of a particular field will be rewarded by study of these very clear and approachable little volumes. In 1943 a course in linear algebra did not yet exist as a standard part of the undergraduate curriculum. It would be another twenty years before that would become common. It is, however, easy to identify the defining features of that course in this volume. Start with the idea of solving linear systems; change the point of view to that of transformations on vector spaces; recognize similarity as an essential classifying principle; and catalogue the canonical forms (Jordan normal form) of the transformations. All of this is here but with a decided, old-fashioned, algebraic accent—there is only one figure in the entire text.

    eISBN: 978-1-61444-007-9
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-iv)
  2. INTRODUCTION
    (pp. v-viii)
    Cyrus Colton MacDuffee

    The theory of matrices had its origin in the theory of determinants, and the latter had its origin in the theory of systems of equations. From Vandermonde and Laplace to Cayley, determinants were cultivated in a purely formal manner. The early algebraists never successfully explained what a determinant was, and indeed they were not interested in exact definitions.

    It was Cayley who seems first to have noticed that “the idea of matrix precedes that of determinant.” More explicitly, we can say that the relation of determinant to matrix is that of the absolute value of a complex number to the...

  3. Table of Contents
    (pp. ix-2)
  4. CHAPTER I SYSTEMS OF LINEAR EQUATIONS
    (pp. 3-14)

    A solution of the equation

    $2x + 3y - 6 = 0$

    is a pair of numbers $\left( x_1, y_1 \right)$ such that $2x_1 + 3y_1 - 6 = 0$.

    There are infinitely many such solutions. A solution of the system of equations

    (1) $2x + 3y - 6 = 0$, $4x - 3y - 6 = 0$

    is a pair of numbers $\left( x_1, y_1 \right)$ which is a solution of both equations. There exists just one such solution, namely $(2, 2/3)$.

    If we picture (x, y) as a point on the Cartesian plane, the infinitely many solutions of the equation

    $2x + 3y - 6 = 0$

    are the points of a straight line $l_1$, known as the graph of the equation. The second equation

    $4x - 3y - 6 = 0$

    also has a graph $l_2$ which is a straight line. The point of intersection of the two...
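
    As a quick check of the arithmetic (a sketch of mine, not part of the book), the system can be solved numerically in Python with NumPy, writing it in matrix form as $Au = b$:

        # Solving 2x + 3y = 6, 4x - 3y = 6 with NumPy.
        import numpy as np

        A = np.array([[2.0,  3.0],
                      [4.0, -3.0]])   # coefficient matrix
        b = np.array([6.0, 6.0])      # right-hand sides

        u = np.linalg.solve(A, b)     # unique solution, since det(A) = -18 != 0
        print(u)                      # [2.         0.66666667], i.e. (2, 2/3)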

  5. CHAPTER II VECTOR SPACES
    (pp. 15-46)

    6. Vectors in ordinary space. We shall assume that the reader is familiar with the use of vectors in ordinary Euclidean space to represent physical quantities, such as forces, velocities, or accelerations, which have both magnitude and direction. Let there be a set of three coordinate axes not all in the same plane which we shall call the x-, y-, and z-axes. Let $\epsilon_1, \epsilon_2, \epsilon_3$ be three line segments (basic vectors), each of length $\ne 0$, emanating from the origin, $\epsilon_1$ on the x-axis, $\epsilon_2$ on the y-axis, and $\epsilon_3$ on the z-axis. Addition and scalar multiplication of line segments which lie...
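
    As a minimal sketch (my illustration, not the book's text), the basic vectors may be taken as the standard basis of $R^3$, with addition and scalar multiplication performed componentwise:

        # The basic vectors as the standard basis of R^3 (an assumption made
        # for illustration; the book only requires three non-coplanar axes).
        import numpy as np

        e1 = np.array([1.0, 0.0, 0.0])  # basic vector on the x-axis
        e2 = np.array([0.0, 1.0, 0.0])  # basic vector on the y-axis
        e3 = np.array([0.0, 0.0, 1.0])  # basic vector on the z-axis

        # Every vector (a, b, c) is the combination a*e1 + b*e2 + c*e3.
        v = 2.0 * e1 - 1.0 * e2 + 3.0 * e3
        print(v)  # [ 2. -1.  3.]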

  6. CHAPTER III DETERMINANTS
    (pp. 47-66)

    Consider the set of all matrices of the form

    $A = \left( \begin{array}{cc} x & y \\ -y & x \end{array} \right), \quad B = \left( \begin{array}{cc} u & v \\ -v & u \end{array} \right), \ldots$

    where $x, y, u, v, \ldots$ are real numbers. The sum and product,

    $A + B = \left( \begin{array}{cc} x + u & y + v \\ -(y + v) & x + u \end{array} \right)$,

    $AB = BA = \left( \begin{array}{cc} xu - yv & xv + yu \\ -(xv + yu) & xu - yv \end{array} \right)$,

    are both of the same form as A and B—that is, they have their diagonal elements equal to each other and the other two elements negatives of each other. In particular the zero matrix and the identity matrix,

    $O = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right), \quad I = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)$,

    are of this form. Thus the set of all matrices of this form is closed under addition and multiplication, and constitutes a matric algebra with unit element.

    This algebra will be a field* if every matrix A except O has an inverse. The product...
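
    The closure and the inverse can be checked numerically; in the following sketch the helper C is my own notation, not the book's, and exhibits the correspondence with the complex numbers $x + yi$:

        # Matrices of the form [[x, y], [-y, x]] behave like complex numbers.
        import numpy as np

        def C(x, y):
            """The matrix corresponding to the complex number x + yi."""
            return np.array([[x, y], [-y, x]])

        A, B = C(1.0, 2.0), C(3.0, -1.0)
        print(A @ B)             # C(5, 5), the same form: (1+2i)(3-i) = 5+5i
        print(np.linalg.inv(A))  # C(0.2, -0.4) = C(x, -y)/(x^2 + y^2)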

  7. CHAPTER IV MATRIC POLYNOMIALS
    (pp. 67-85)

    24. Ring with unit element. In modern abstract algebra it is customary to define a ring as a mathematical system consisting of elements and two operations called addition and multiplication, relative to which the system is closed, subject to the following postulates.

    1. The system is a commutative group relative to addition, the identity element being denoted by 0, and the inverse of a by $-a$.

    2. Multiplication is associative, i.e.,

    $(a \times b) \times c = a \times (b \times c)$.

    3. Multiplication is distributive with respect to addition, i.e.,

    $a \times (b + c) = a \times b + a \times c$...
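
    A minimal sketch (mine, not the book's) spot-checks these postulates for 2 x 2 integer matrices, which form a ring with unit element:

        # Spot-checking the ring postulates on sample 2 x 2 matrices.
        import numpy as np

        a = np.array([[1, 2], [3, 4]])
        b = np.array([[0, -1], [1, 0]])
        c = np.array([[2, 0], [0, 5]])

        assert ((a + b) + c == a + (b + c)).all()    # addition associative
        assert (a + b == b + a).all()                # addition commutative
        assert (a + (-a) == 0).all()                 # -a is the inverse of a
        assert ((a @ b) @ c == a @ (b @ c)).all()    # multiplication associative
        assert (a @ (b + c) == a @ b + a @ c).all()  # distributive law
        print("ring postulates hold on these samples")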

  8. CHAPTER V UNION AND INTERSECTION
    (pp. 86-111)

    Two vectors

    $\alpha = \left( a_1, a_2, \cdots, a_n \right)$, $\phi = \left( v_1, v_2, \cdots, v_n \right)$

    of the total vector space S of dimension n are said to be orthogonal if their inner product vanishes—that is, if $\alpha \cdot \phi = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0$.

    If $\phi$ is orthogonal to each of the vectors $\alpha$ and $\beta$, then

    $(k_1 \alpha + k_2 \beta) \cdot \phi = k_1 \alpha \cdot \phi + k_2 \beta \cdot \phi = 0$,

    so that $\phi$ is orthogonal to every vector of the form $k_1 \alpha + k_2 \beta$. In general, if $\phi$ is orthogonal to any number of vectors $\alpha, \beta, \cdots$, it is orthogonal to every vector of the linear system (§7) $S_1$ which they determine, and we write

    $S_1 \cdot \phi = 0$.

    Now consider the set $S'_1$ of all vectors $\phi, \chi, \cdots$ which are orthogonal to the linear system $S_1$. Then

    $S_1 \cdot (k_1 \phi + k_2 \chi + \cdots) = k_1 S_1 \cdot \phi + k_2 S_1 \cdot \chi + \cdots = 0$,

    so...
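
    Numerically, the set $S'_1$ can be computed as the null space of the matrix whose rows span $S_1$. This sketch (the sample vectors are hypothetical, chosen by me) uses the singular value decomposition:

        # S'_1 = all vectors orthogonal to the rows alpha, beta of S1.
        import numpy as np

        S1 = np.array([[1.0, 2.0, 0.0,  1.0],    # alpha
                       [0.0, 1.0, 1.0, -1.0]])   # beta

        # Right singular vectors beyond the rank span the null space.
        _, s, Vt = np.linalg.svd(S1)
        rank = int((s > 1e-10).sum())
        null_basis = Vt[rank:]                    # here 4 - 2 = 2 basis vectors

        print(np.allclose(S1 @ null_basis.T, 0))  # True: S_1 . phi = 0 for each phi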

  9. CHAPTER VI THE RATIONAL CANONICAL FORM
    (pp. 112-136)

    41. Similar matrices. Two square matrices A and A' are called similar if there exists a non-singular matrix P such that

    $A' = P^{-1}AP$.

    We may use the notation $A' \sim A$ to denote similarity.

    Since $A' = P^{-1}AP$ implies $A = PA'P^{-1}$, and therefore $A = \left( P^{-1} \right)^{-1} A' \left( P^{-1} \right)$, it is clear that $A' \sim A$ implies $A \sim A'$. Thus similarity is symmetric. Since $A = I^{-1}AI$, similarity is reflexive. Moreover, let

    $A' = P^{-1}AP$, $A'' = Q^{-1}A'Q$,

    where both P and Q are, of course, non-singular. Then

    $A'' = Q^{-1}(P^{-1}AP)Q = (PQ)^{-1}A(PQ)$.

    Hence $A' \sim A$ and $A'' \sim A'$ imply $A'' \sim A$, so that similarity is transitive. Thus similarity of matrices is a type of equals relation, or equivalence relation if you prefer. It is quite to...
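
    A short numerical sketch (with A and P invented for illustration) shows a similarity transform in action; similar matrices share their characteristic polynomial, so the eigenvalues agree:

        # A' = P^{-1} A P is similar to A and has the same eigenvalues.
        import numpy as np

        A = np.array([[2.0, 1.0],
                      [0.0, 3.0]])
        P = np.array([[1.0, 1.0],
                      [1.0, 2.0]])            # non-singular: det(P) = 1

        A_prime = np.linalg.inv(P) @ A @ P

        print(np.sort(np.linalg.eigvals(A)))        # [2. 3.]
        print(np.sort(np.linalg.eigvals(A_prime)))  # [2. 3.]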

  10. CHAPTER VII ELEMENTARY DIVISORS
    (pp. 137-159)

    49. Equivalence of matrices. Consider the set of all polynomials in x with coefficients in a field F. This set is called the polynomial domain of F, and is denoted by $F\left[ x \right]$.

    The domain $F\left[ x \right]$ has many of the properties of the set of rational integers. It is closed under addition, subtraction and multiplication, but not under division. Those polynomials which are divisors of 1 are called units, and it is clear that the units of $F\left[ x \right]$ are the polynomials of degree zero—that is, the numbers $\ne 0$ of F.

    Those polynomials which are neither 0 nor units fall into...
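
    That $F\left[ x \right]$ is not closed under division can be seen with polynomial division, which leaves a remainder. A sketch with SymPy (my choice of tool, taking F to be the field of rational numbers):

        # Division in Q[x] yields a quotient and a remainder, f = q*g + r.
        from sympy import symbols, div, Poly

        x = symbols('x')
        f = Poly(x**3 + 2*x + 1, x, domain='QQ')
        g = Poly(x**2 - 1, x, domain='QQ')

        q, r = div(f, g)   # q = x, r = 3*x + 1, with deg(r) < deg(g)
        print(q, r)        # so g does not divide f in Q[x]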

  11. CHAPTER VIII ORTHOGONAL TRANSFORMATIONS
    (pp. 160-174)

    Consider the ordinary three-dimensional euclidean space with two sets x, y, z and $x', y', z'$ of orthogonal axes having the same origin. If a point has the coordinates (a, b, c) relative to the first set of axes, and the coordinates ($a', b', c'$) relative to the second set, it is true that

    $a' = a\cos \left( {x'x} \right) + b\cos \left( {x'y} \right) + c\cos \left( {x'z} \right)$,

    $b' = a\cos \left( {y'x} \right) + b\cos \left( {y'y} \right) + c\cos \left( {y'z} \right)$,

    $c' = a\cos \left( {z'x} \right) + b\cos \left( {z'y} \right) + c\cos \left( {z'z} \right)$

    where $\cos \left( {x'x} \right)$ is the cosine of the angle between the x-axis and the x'-axis, etc.

    The matrix

    $\left( {\begin{array}{*{20}{c}} {\cos \left( {x'x} \right)} & {\cos \left( {x'y} \right)} & {\cos \left( {x'z} \right)} \\ {\cos \left( {y'x} \right)} & {\cos \left( {y'y} \right)} & {\cos \left( {y'z} \right)} \\ {\cos \left( {z'x} \right)} & {\cos \left( {z'y} \right)} & {\cos \left( {z'z} \right)} \\ \end{array} } \right)$

    has some distinctive properties. The elements of each row are the direction cosines of one of the new coordinate axes relative to the old axes, so that the sum of the squares of the elements of each row is 1. Since the axes...
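
    These properties can be verified numerically. The sketch below (an example of mine, not from the book) builds the direction-cosine matrix for a rotation of the axes through an angle t about the z-axis and checks that it is orthogonal:

        # Each row holds the cosines of one new axis against the old axes.
        import numpy as np

        t = 0.3  # an arbitrary angle, for illustration
        Q = np.array([[ np.cos(t), np.sin(t), 0.0],   # x' against x, y, z
                      [-np.sin(t), np.cos(t), 0.0],   # y' against x, y, z
                      [ 0.0,       0.0,       1.0]])  # z' coincides with z

        print(np.allclose(Q @ Q.T, np.eye(3)))  # True: Q is orthogonal
        print((Q**2).sum(axis=1))               # [1. 1. 1.]: rows are unit vectors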

  12. CHAPTER IX ENDOMORPHISMS
    (pp. 175-188)

    62. Groups with operators. In this concluding chapter we shall treat vectors and matrices from a more abstract point of view and attempt to give the reader an insight into what is at the moment the popular mode of approach to matric theory.

    The simplest important mathematical system is the group. A group consists of elements and a well-defined binary operation with respect to which the system is closed. That is, every ordered pair $\alpha, \beta$ of equal or distinct elements of the system determines a unique element $\gamma$ of the system. We shall choose to denote the symbol of the...
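
    As a minimal concrete instance (mine, not the book's), the integers mod 5 under addition form such a system; closure and associativity can be checked exhaustively:

        # Z/5Z under addition mod 5: every ordered pair (a, b) determines
        # a unique element of the system.
        elements = range(5)
        op = lambda a, b: (a + b) % 5

        assert all(op(a, b) in elements for a in elements for b in elements)
        assert all(op(op(a, b), c) == op(a, op(b, c))
                   for a in elements for b in elements for c in elements)
        print("closed and associative")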

  13. BIBLIOGRAPHY
    (pp. 189-190)
  14. INDEX OF TERMS
    (pp. 191-192)
  15. PROBLEMS
    (pp. 193-203)