Linear Algebra Problem Book


Paul R. Halmos
Volume: 16
Copyright Date: 1995
Edition: 1
Pages: 348
    Book Description:

    Linear Algebra Problem Book can be either the main course or the dessert for someone who needs linear algebra—and nowadays that means every user of mathematics. It can be used as the basis of either an official course or a program of private study. If used as a course, the book can stand by itself, or if so desired, it can be stirred in with a standard linear algebra course as the seasoning that provides the interest, the challenge, and the motivation that is needed by experienced scholars as much as by beginning students. The best way to learn is to do, and the purpose of this book is to get the reader to DO linear algebra. The approach is Socratic: first ask a question, then give a hint (if necessary), then, finally, for security and completeness, provide the detailed answer.

    eISBN: 978-1-61444-212-7
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-vi)
  2. Preface
    (pp. vii-viii)
  3. Table of Contents
    (pp. ix-xiv)
    (pp. 1-16)

    Is it obvious that

    63 + 48 = 27 + 84?

    It is a true and thoroughly uninteresting mathematical statement that can be verified in a few seconds—but is it obvious? If calling it obvious means that the reason for its truth is clearly understood, without even a single second’s verification, then most people would probably say no.

    (27 + 36) + 48 = 27 + (36 + 48)

    —is that obvious? Yes it is, for most people; the instinctive (and correct) reaction is that the way the terms of a sum are bunched together cannot affect the...
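    The connection between the two displayed statements can be made mechanical; a minimal sketch (the decomposition into 27, 36, and 48 is the one suggested by the numbers themselves):

```python
# The identity 63 + 48 = 27 + 84 is a disguised instance of associativity:
# since 63 = 27 + 36 and 84 = 36 + 48, both sides are regroupings of the
# same three-term sum (27 + 36) + 48 = 27 + (36 + 48).
lhs = (27 + 36) + 48   # = 63 + 48
rhs = 27 + (36 + 48)   # = 27 + 84
print(lhs, rhs)        # both are 111
```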

    (pp. 17-38)

    Real numbers can be added, and so can pairs of real numbers. If R² is the set of all ordered pairs (α, β) of real numbers, then it is natural to define the sum of two elements of R² by writing

    $(\alpha ,\beta ) + (\gamma ,\delta ) = (\alpha + \gamma ,\beta + \delta )$

    and the result is that R² becomes an abelian group. There is also a kind of partial multiplication that makes sense and is useful, namely the process of multiplying an element of R² by a real number and thus getting another element of R²:

    $\alpha (\beta ,\gamma ) = (\alpha \beta ,\alpha \gamma )$

    The end result of these comments is a structure consisting of three parts: an...
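    The two operations just defined can be written down directly; a minimal sketch, with pairs in R² represented as Python tuples:

```python
# Component-wise addition and scalar multiplication in R^2,
# exactly as in the two displayed formulas above.
def add(u, v):
    """Sum of two pairs: (a, b) + (c, d) = (a + c, b + d)."""
    return (u[0] + v[0], u[1] + v[1])

def scale(a, u):
    """Scalar multiple: a(b, c) = (ab, ac)."""
    return (a * u[0], a * u[1])

print(add((1, 2), (3, 4)))   # (4, 6)
print(scale(3, (2, 5)))      # (6, 15)
```

    Commutativity and associativity of `add` are inherited from the real numbers coordinate by coordinate, which is what makes R² an abelian group.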

    (pp. 39-50)

    The most useful questions about total sets, and, in particular, about bases, are not so much how to make them, but how to change them. Which vectors can be used to replace some element of a prescribed total set and have it remain total? Which sets of vectors can be used to replace some subset of a prescribed total set and have it remain total? What restriction is imposed by the relation between the prescribed set and the prescribed total set?

    Problem 33. Under what conditions on a total set T of a vector space V and a finite subset...
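    The replacement questions above can be explored concretely. A hedged sketch (the rank test and the example sets are mine, not the book's): a finite set is total in R² exactly when it spans, which a row reduction detects.

```python
# Test totality (spanning) of a finite set in R^n by computing its rank
# with Gaussian elimination over floats.
def rank(rows):
    """Rank of a list of vectors, each a tuple of equal length."""
    rows = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > 1e-9), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > 1e-9:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

total    = [(1, 0), (0, 1), (1, 1)]   # total in R^2 (rank 2)
replaced = [(1, 0), (0, 1), (2, 0)]   # replace (1,1) by (2,0): still total
bad      = [(1, 0), (2, 0), (3, 0)]   # a careless replacement loses totality
print(rank(total), rank(replaced), rank(bad))   # 2 2 1
```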

    (pp. 51-84)

    Here is where the action starts. Till now the vectors in a vector space just sat there; the action begins when they move, when they change into other vectors. A typical example of a change can be seen in the vector space P₅ (all polynomials of degree less than or equal to 5): replace each polynomial p by its derivative Dp.

    What is visible here? If ${p_1}(x) = 3x$ and ${p_2}(x) = 5{x^2}$, then

    $D{p_1}(x) = 3$ and $D{p_2}(x) = 10x$;

    if moreover s is the sum ${p_1} + {p_2}$, then

    $Ds(x) = 3 + 10x$.

    This simple property of differentiation is from the present point of view its most important one: the derivative...
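    The additivity of D that the excerpt points at can be checked directly; a minimal sketch, representing a member of P₅ by its coefficient list:

```python
# Represent p in P5 by its coefficients [c0, c1, ..., c5];
# differentiation sends c_k x^k to k c_k x^(k-1).
def D(p):
    """Coefficient list of the derivative of p."""
    return [k * p[k] for k in range(1, len(p))]

p1 = [0, 3, 0, 0, 0, 0]       # p1(x) = 3x
p2 = [0, 0, 5, 0, 0, 0]       # p2(x) = 5x^2
s  = [a + b for a, b in zip(p1, p2)]
print(D(p1))                  # [3, 0, 0, 0, 0]   i.e. Dp1(x) = 3
print(D(p2))                  # [0, 10, 0, 0, 0]  i.e. Dp2(x) = 10x
print(D(s) == [a + b for a, b in zip(D(p1), D(p2))])   # True: D(p1+p2) = Dp1 + Dp2
```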

    (pp. 85-96)

    The most useful functions on vector spaces are the linear functionals. Recall their definition (Problem 55): a linear functional is a scalar-valued function ξ such that

    $\xi (x + y) = \xi (x) + \xi (y)$


    $\xi (\alpha x) = \alpha \xi (x)$

    whenever x and y are vectors in V and α is a scalar.

    Example on Rⁿ: $\xi ({x_1},{x_2},...,{x_n}) = 3{x_1}$.

    Example on R³: $\xi (x,y,z) = x + 2y + 3z$.

    Examples on the space P of polynomials:

    $\xi (p) = \int\limits_{ - 1}^{ + 1} {p(t)dt} $, or $\int\limits_{ - 2}^{ + 1} {{t^2}p(t)dt} $,

    or $\int\limits_0^9 {{t^2}p({t^2})dt} $, or ${\left. {\frac{{{d^2}p}}{{d{t^2}}}} \right|_{t = 1}}$.
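    The first of the integral examples can be evaluated in closed form; a minimal sketch, using the fact that $\int_{-1}^{1} t^k\,dt$ is $2/(k+1)$ for even k and 0 for odd k:

```python
# xi(p) = integral of p(t) dt from -1 to 1, with p given by its
# coefficient list [c0, c1, ...]; only the even-degree terms contribute.
def xi(p):
    return sum(2 * c / (k + 1) for k, c in enumerate(p) if k % 2 == 0)

p = [1, 7, 3]                 # p(t) = 1 + 7t + 3t^2
print(xi(p))                  # 2*1 + 0 + 2*3/3 = 4.0
q = [0, 0, 0, 5]              # q(t) = 5t^3, an odd function, so xi(q) = 0
print(xi(q))                  # 0.0
```

    Linearity of xi is then visible term by term: each coefficient enters the sum through multiplication by a fixed constant.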

    The most trivial linear functional is also the most important one, namely 0. That is: if ξ is defined by

    $\xi (x) = 0$

    for all x, then ξ is a linear functional. Except for this uninteresting case, every...

    (pp. 97-106)

    If V is an n-dimensional vector space, with a prescribed ordered basis $\{ {x_1},...,{x_n}\} $, then each vector x determines an ordered n-tuple of scalars. This is an elementary fact by now: if the expansion of x in terms of the ${x_j}$’s is

    $x = {\alpha _1}{x_1} + \cdot \cdot \cdot + {\alpha _n}{x_n}$,

    then the ordered n-tuple determined by x is just the n-tuple

    $({\alpha _1},...,{\alpha _n})$

    of coefficients. The game can, of course, be played in the other direction: once a basis is fixed, each orderedn-tuple of scalars determines a vector, namely the vector whose sequence of coefficients it is.

    Now change the rules again, or, rather, play the already changed...
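    Recovering the coefficient n-tuple from a vector amounts to solving a linear system; a hedged sketch for n = 2 (the basis and the vector are my own examples, solved by Cramer's rule):

```python
# Coordinates (a1, a2) of x in the ordered basis {b1, b2} of R^2,
# i.e. the unique solution of a1*b1 + a2*b2 = x.
def coords(b1, b2, x):
    det = b1[0] * b2[1] - b2[0] * b1[1]          # nonzero since {b1, b2} is a basis
    a1 = (x[0] * b2[1] - b2[0] * x[1]) / det
    a2 = (b1[0] * x[1] - x[0] * b1[1]) / det
    return (a1, a2)

b1, b2 = (1, 1), (1, -1)
print(coords(b1, b2, (5, 1)))    # (3.0, 2.0), since 3*(1,1) + 2*(1,-1) = (5,1)
```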

    (pp. 107-128)

    A large vector space (one of large dimension, that is) is a complicated object, and linear transformations on it are even more complicated. In the study of a linear transformation on a large space it often helps to concentrate attention on the way the transformation acts on small subspaces. The phrase “a linear transformation acting on a subspace” is usually interpreted to mean that the subspace is invariant under the transformation (in the language of Problem 70), and a “small” subspace is one of dimension 1 (surely the smallest that a non-trivial subspace can be). In view of these comments,...
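    A subspace of dimension 1 that is invariant under a transformation is exactly the line spanned by an eigenvector; a minimal sketch for 2 × 2 matrices (the matrix and vectors are my own examples):

```python
# The line spanned by v is invariant under A exactly when Av is a
# scalar multiple of v; in R^2, "parallel" means the determinant of
# the pair (Av, v) vanishes.
def line_is_invariant(A, v):
    w = (A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1])        # w = Av
    return abs(w[0] * v[1] - w[1] * v[0]) < 1e-12

A = [[2, 0], [0, 3]]
print(line_is_invariant(A, (1, 0)))   # True:  A(1,0) = 2*(1,0)
print(line_is_invariant(A, (1, 1)))   # False: A(1,1) = (2,3), not parallel to (1,1)
```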

    (pp. 129-148)

    Which of these vectors in R³ is larger: (2, 3, 5) or (3, 4, 4)? Does the question make sense? The only sizes, the only numbers, that have been considered so far are dimensions. Since (2, 3, 5) belongs to R³ and (1, 1, 1, 1) belongs to R⁴, the latter is in some sense larger, but it is a weak sense and not a useful one. The time has come to look at the classical and useful way of “measuring” vectors.

    The central concept is that of an inner product in a real or complex vector space. That is,...
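    The opening question can be settled once a norm is available; a minimal sketch using the standard Euclidean inner product on R³ (the "classical way of measuring" the excerpt refers to):

```python
import math

def inner(u, v):
    """Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """The norm induced by the inner product."""
    return math.sqrt(inner(u, u))

print(inner((2, 3, 5), (2, 3, 5)))   # 38
print(inner((3, 4, 4), (3, 4, 4)))   # 41
# so (3, 4, 4) is the larger vector in this sense: sqrt(41) > sqrt(38)
```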

    (pp. 149-168)

    The three most obvious pleasant relations that a linear transformation on an inner product space can have to its adjoint are that they are equal (Hermitian), or that one is the negative of the other (skew), or that one is the inverse of the other (not yet discussed). The word that describes the last of these possibilities is unitary: that’s what a linear transformation U is called in case it is invertible and ${U^{ - 1}} = {U^*}$. The definition can be expressed in a “less prejudiced” way as ${U^*}U = 1$—less prejudiced in the sense that it assumes less—but it is not clear...
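    The condition U*U = 1 can be checked numerically; a hedged sketch for the real case, where the adjoint is the transpose, using a rotation matrix (my own example of a unitary transformation):

```python
import math

# A plane rotation is unitary: its transpose is its inverse,
# so U^T U should be the 2x2 identity matrix.
theta = math.pi / 3
c, s = math.cos(theta), math.sin(theta)
U = [[c, -s], [s, c]]

UtU = [[sum(U[k][i] * U[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]                        # (U^T U)[i][j]
print(all(abs(UtU[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))  # True
```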

  13. HINTS
    (pp. 169-184)
  14. SOLUTIONS
    (pp. 185-333)