Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.

Front Matter (pp. i-vi)
Table of Contents (pp. vii-ix)
List of Algorithms (pp. xi-xii)
Foreword (pp. xiii-xiv) Paul Van Dooren. Constrained optimization is quite well established as an area of research, and there exist several powerful techniques that address general problems in that area. In this book a special class of constraints is considered, called geometric constraints, which express that the solution of the optimization problem lies on a manifold. This is a recent area of research that provides powerful alternatives to the more general constrained optimization methods. Classical constrained optimization techniques work in an embedded space that can be of a much larger dimension than that of the manifold. Optimization algorithms that work on the manifold therefore have a...

Notation Conventions (pp. xv-xvi)
Chapter One Introduction (pp. 1-4) This book is about the design of numerical algorithms for computational problems posed on smooth search spaces. The work is motivated by matrix optimization problems characterized by symmetry or invariance properties in the cost function or constraints. Such problems abound in algorithmic questions pertaining to linear algebra, signal processing, data mining, and statistical analysis. The approach taken here is to exploit the special structure of these problems to develop efficient numerical procedures.
An illustrative example is the eigenvalue problem. Because of their scale invariance, eigenvectors are not isolated in vector spaces. Instead, each eigendirection defines a linear subspace of eigenvectors....

Chapter Two Motivation and Applications (pp. 5-16) The problem of optimizing a real-valued function on a matrix manifold appears in a wide variety of computational problems in science and engineering. In this chapter we discuss several examples that provide motivation for the material presented in later chapters. In the first part of the chapter, we focus on the eigenvalue problem. This application receives special treatment because it serves as a running example throughout the book. It is a problem of unquestionable importance that has been, and still is, extensively researched. It falls naturally into the geometric framework proposed in this book as an optimization problem whose natural...

Chapter Three Matrix Manifolds: First-Order Geometry (pp. 17-53) The constraint sets associated with the examples discussed in Chapter 2 have a particularly rich geometric structure that provides the motivation for this book. The constraint sets are matrix manifolds in the sense that they are manifolds in the meaning of classical differential geometry, for which there is a natural representation of elements in the form of matrix arrays.
The matrix representation of the elements is a key property that allows one to provide a natural development of differential geometry in a matrix algebra formulation. The goal of this chapter is to introduce the fundamental concepts in this direction: manifold...

Chapter Four Line-Search Algorithms on Manifolds (pp. 54-90) Line-search methods in ℝ^{n} are based on the update formula
\[x_{k+1} = x_k + t_k \eta_k, \tag{4.1}\] where η_{k} ∈ ℝ^{n} is the search direction and t_{k} ∈ ℝ is the step size. The goal of this chapter is to develop an analogous theory for optimization problems posed on nonlinear manifolds.
The proposed generalization of (4.1) to a manifold 𝓜 consists of selecting η_{k} as a tangent vector to 𝓜 at x_{k} and performing a search along a curve in 𝓜 whose tangent vector at t = 0 is η_{k}. The selection of the curve relies on the concept of retraction, introduced in Section 4.1. The...
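The retraction-based update described above can be sketched numerically. The following is a minimal illustration, not taken from the book: it minimizes the illustrative cost f(x) = xᵀAx over the unit sphere, using Riemannian steepest descent as the search direction and the normalization retraction R_x(v) = (x + v)/‖x + v‖; the matrix A and the fixed step size are arbitrary choices for demonstration.

```python
import numpy as np

# Sketch of the generalized update x_{k+1} = R_{x_k}(t_k * eta_k):
# minimize f(x) = x^T A x over the unit sphere S^2 in R^3.
# The minimizer is an eigenvector for the smallest eigenvalue of A.

A = np.diag([3.0, 2.0, 1.0])      # illustrative symmetric matrix

def f(x):
    return x @ A @ x

def rgrad(x):
    # Riemannian gradient: project the Euclidean gradient 2Ax
    # onto the tangent space at x (orthogonal complement of x)
    g = 2.0 * A @ x
    return g - (x @ g) * x

def retract(x, v):
    # Sphere retraction: step in the tangent direction, renormalize
    y = x + v
    return y / np.linalg.norm(y)

x = np.ones(3) / np.sqrt(3.0)      # starting point on the sphere
for _ in range(200):
    eta = -rgrad(x)                # tangent search direction at x
    x = retract(x, 0.1 * eta)      # fixed step size, for simplicity

# x converges to +/- e_3, the eigendirection with eigenvalue 1
```

In a practical method the fixed step size would be replaced by a line-search rule along the retracted curve, which is precisely the subject of this chapter.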

Chapter Five Matrix Manifolds: Second-Order Geometry (pp. 91-110) Many optimization algorithms make use of second-order information about the cost function. The archetypal second-order optimization algorithm is Newton’s method. This method is an iterative method that seeks a critical point of the cost function f (i.e., a zero of grad f) by selecting the update vector at x_{k} as the vector along which the directional derivative of grad f is equal to −grad f(x_{k}). The second-order information on the cost function is incorporated through the directional derivative of the gradient.
For a quadratic cost function in ℝ^{n}, Newton’s method identifies a zero of the gradient in one step. For...
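The one-step property on a quadratic cost can be checked directly. The following is a small illustrative sketch (the matrix A, vector b, and starting point are arbitrary choices, not from the book):

```python
import numpy as np

# For the quadratic cost f(x) = 1/2 x^T A x - b^T x on R^n,
# grad f(x) = A x - b is affine, so a single Newton step
# x_1 = x_0 - Hess^{-1} grad f(x_0) lands exactly on the
# unique critical point A^{-1} b.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # symmetric positive-definite Hessian
b = np.array([1.0, 2.0])

def grad(x):
    return A @ x - b

x0 = np.array([10.0, -7.0])       # arbitrary starting point
eta = np.linalg.solve(A, -grad(x0))   # Newton equation: Hess * eta = -grad
x1 = x0 + eta

# grad f(x1) vanishes up to round-off: converged in one step
```

For non-quadratic costs the Hessian varies with x and the same step must be iterated, which is where the local convergence analysis of this chapter enters.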

Chapter Six Newton’s Method (pp. 111-135) This chapter provides a detailed development of the archetypal second-order optimization method, Newton’s method, as an iteration on manifolds. We propose a formulation of Newton’s method for computing the zeros of a vector field on a manifold equipped with an affine connection and a retraction. In particular, when the manifold is Riemannian, this geometric Newton method can be used to compute critical points of a cost function by seeking the zeros of its gradient vector field. In the case where the underlying space is Euclidean, the proposed algorithm reduces to the classical Newton method. Although the algorithm formulation is provided...

Chapter Seven Trust-Region Methods (pp. 136-167) The plain Newton method discussed in Chapter 6 was shown to be locally convergent to any critical point of the cost function. The method does not distinguish among local minima, saddle points, and local maxima: all (nondegenerate) critical points are asymptotically stable fixed points of the Newton iteration. Moreover, it is possible to construct cost functions and initial conditions for which the Newton sequence does not converge. There even exist examples where the set of nonconverging initial conditions contains an open subset of the search space.
To exploit the desirable superlinear local convergence properties of the Newton algorithm in the context...

Chapter Eight A Constellation of Superlinear Algorithms (pp. 168-188) The Newton method (Algorithm 5 in Chapter 6) applied to the gradient of a real-valued cost is the archetypal superlinear optimization method. The Newton method, however, suffers from a lack of global convergence and the prohibitive numerical cost of solving the Newton equation (6.2) at each iteration. The trust-region approach, presented in Chapter 7, provides a sound framework for addressing these shortcomings and is a good choice for a generic optimization algorithm. Trust-region methods, however, are algorithmically complex and may not perform ideally on all problems. A host of other algorithms have been developed that provide lower-cost numerical iterations...

Appendix A Elements of Linear Algebra, Topology, and Calculus (pp. 189-200)
Bibliography (pp. 201-220)
Index (pp. 221-224)