
Control Theoretic Splines: Optimal Control, Statistics, and Path Planning

Magnus Egerstedt
Clyde Martin
Copyright Date: 2010
Pages: 228
    Book Description:

    Splines, both interpolatory and smoothing, have a long and rich history that has largely been application driven. This book unifies these constructions in a comprehensive and accessible way, drawing from the latest methods and applications to show how they arise naturally in the theory of linear control systems. Magnus Egerstedt and Clyde Martin are leading innovators in the use of control theoretic splines to bring together many diverse applications within a common framework. In this book, they begin with a series of problems ranging from path planning to statistics to approximation. Using the tools of optimization over vector spaces, Egerstedt and Martin demonstrate how all of these problems are part of the same general mathematical framework, and how they are all, to a certain degree, a consequence of the optimization problem of finding the shortest distance from a point to an affine subspace in a Hilbert space. They cover periodic splines, monotone splines, and splines with inequality constraints, and explain how any finite number of linear constraints can be added. This book reveals how the many natural connections between control theory, numerical analysis, and statistics can be used to generate powerful mathematical and analytical tools.

    This book is an excellent resource for students and professionals in control theory, robotics, engineering, computer graphics, econometrics, and any area that requires the construction of curves based on sets of raw data.

    eISBN: 978-1-4008-3387-0
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-vi)
  2. Table of Contents
    (pp. vii-viii)
  3. (pp. ix-x)
  4. Chapter One INTRODUCTION
    (pp. 1-10)

    The basic problem that the classical spline was constructed to solve was as follows: Given a finite set of data points, find a smooth curve that interpolates through these points. Of course, there are infinitely many such curves, and the real task is to devise an algorithm that selects a unique curve (hopefully one exhibiting certain desirable properties). In fact, classical splines solve this problem by requiring that the curve be piecewise polynomial, that is, that it be polynomial between the data points and that the pieces be connected as smoothly as possible. Often additional conditions must be applied as well...
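As a concrete illustration of the classical construction, a natural cubic spline through a handful of points can be computed with SciPy; the data here are invented for the example, not taken from the book.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative data points (not from the book).
t = np.array([0.0, 1.0, 2.0, 3.0])
alpha = np.array([0.0, 1.0, 0.0, 2.0])

# Piecewise cubic polynomial, C^2 at the knots, with "natural"
# (zero second derivative) boundary conditions.
spline = CubicSpline(t, alpha, bc_type="natural")

# The curve interpolates: spline(t_i) equals alpha_i at every data point.
assert np.allclose(spline(t), alpha)
```

Between consecutive knots the curve is a single cubic, and `spline.derivative()` returns the piecewise-quadratic derivative, reflecting the "as smoothly as possible" gluing condition.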

  5. Chapter Two
    (pp. 11-24)

    Given the state of a linear system (e.g., the position and velocity of a car, the currents in an electric network, or the distribution of susceptible, infected, and immune populations in epidemiology dynamics), denoted by $x(t) \in \mathbb{R}^n$, we will study how this state evolves over time intervals [0, T]. Moreover, we will be interested in certain measurable aspects of the system (e.g., distance traveled in the car, voltage across a particular component in the network, or the rate at which healthy individuals become infected), and we denote this measurable output by $y(t) \in \mathbb{R}^p$. The final component needed for understanding the various signals...
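A minimal sketch of such a system is the double integrator, where the state is the car's position and velocity and the output is the position; the matrices and the constant input below are illustrative assumptions, not from the text.

```python
import numpy as np

# Assumed example: a double integrator, x = (position, velocity) of a car.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([[0.0],
              [1.0]])
c = np.array([[1.0, 0.0]])     # measured output y = position

x = np.array([[1.0],
              [0.0]])           # x(0): position 1, velocity 0
T, dt = 1.0, 1e-3
for _ in range(int(T / dt)):   # forward-Euler integration of x' = Ax + bu
    u = 1.0                    # constant unit input, for illustration
    x = x + dt * (A @ x + b * u)

y = (c @ x).item()             # y(T); continuous-time answer is 1 + T^2/2 = 1.5
```

The Euler value lands close to the exact 1.5; a smaller `dt` tightens the match.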

  6. Chapter Three
    (pp. 25-52)

    In Problem 1, we show that the theory of interpolating splines is naturally cast as the minimization of a quadratic cost functional subject to a set of linear constraints, and in Problem 3, we show that the theory of smoothing splines is closely related to the theory of interpolating splines, the difference being that the linear constraints are included in the cost functional as a penalty term. For both Problems 1 and 3, the optimization problem is a straightforward minimization of a quadratic cost functional over the space of square integrable functions.
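A finite-dimensional analogue makes the relationship concrete. This discrete sketch is my own illustration, not the book's construction: the interpolating problem is a minimum-norm solution of an underdetermined linear system, and the smoothing problem moves the same constraints into the cost as a penalty.

```python
import numpy as np

# Interpolating: minimize ||u||^2 subject to L u = alpha.
# Smoothing:     minimize ||u||^2 + lam * ||L u - alpha||^2.
# L and alpha are invented for the illustration.
rng = np.random.default_rng(0)
L = rng.standard_normal((3, 10))      # 3 linear constraints on u in R^10
alpha = np.array([1.0, -1.0, 0.5])

# Minimum-norm solution of the underdetermined constraint system.
u_interp = L.T @ np.linalg.solve(L @ L.T, alpha)

# Penalty form: the normal equations (I + lam L^T L) u = lam L^T alpha.
lam = 10.0
u_smooth = np.linalg.solve(np.eye(10) + lam * L.T @ L, lam * L.T @ alpha)

# u_interp meets the constraints exactly; u_smooth trades constraint
# violation for a smaller norm, and approaches u_interp as lam grows.
assert np.allclose(L @ u_interp, alpha)
```

Since `u_interp` is feasible for the penalized cost, the smoothed minimizer can never have a larger norm than the interpolant, mirroring the relationship between the two spline problems.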

    In Problems 2...

  7. Chapter Four
    (pp. 53-72)

    The theory of smoothing splines is based on the premise that a datum, α, is the sum of a deterministic part, β, and a random part, ε. It is assumed that ε is the value of a random variable drawn from some probability distribution. Smoothing splines are designed to approximate the deterministic part by minimizing the variance of the random part. Often the random variable comes from measurement error. We start this chapter with two examples in which the random error comes either from the measurements or from estimations based on incomplete data.
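As a small numerical illustration of this premise (the signal, noise level, and smoothing parameter below are invented for the example), SciPy's smoothing spline approximates the deterministic part of noisy samples; its parameter `s` bounds the residual sum of squares.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Each datum is a deterministic part plus noise: alpha_i = beta(t_i) + eps_i.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
beta = np.sin(2 * np.pi * t)                   # deterministic part
alpha = beta + 0.1 * rng.standard_normal(50)   # noisy measurements

# The smoothing parameter s is set near N * sigma^2, so the fitted curve
# absorbs roughly the noise variance rather than the underlying signal.
spline = UnivariateSpline(t, alpha, s=50 * 0.1**2)

residual = np.sum((alpha - spline(t)) ** 2)    # close to s by construction
```

Shrinking `s` toward zero recovers an interpolating fit, while a large `s` flattens the curve: the same fidelity-versus-smoothness trade-off the chapter formalizes.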

    A seemingly straightforward problem is to determine...

  8. Chapter Five
    (pp. 73-86)

    We start by stating the basic assumptions needed to arrive at the main convergence result of this chapter:

    Assumption 5.1 $c^T b = \cdots = c^T A^{n-2} b = 0$.

    Assumption 5.2 The matrix $A$ has only real eigenvalues.

    Assumption 5.3 Let the underlying, true curve, $f(t)$, be a $C^\infty$ function on an interval that contains $[0, T]$.

    Assumption 5.4 $x(0) = 0$.

    We now suppose that there are infinitely many data points available, obtained by repeated sampling of $f(t)$ on the interval $[0, T]$. Let

    $D_N = \{ (t_{iN}, \alpha_{iN}) : i = 1, \ldots, N \}$

    be the $N$th data set, and let the union of the sets of sample times be dense in the interval $[0, T]$.


  9. Chapter Six
    (pp. 87-112)

    To establish the technique, we will solve the linear quadratic regulator problem and then use the construction for the smoothing problems. This construction is basically the same as is given in [28]. We are given a cost function

    $J(u) = \int_0^T \left[ x^T(t) Q x(t) + u^2(t) \right] dt$

    and a controllable linear system

    $\dot x = Ax + bu, \quad x(0) = x_0,$

    and we assume that $x_0$ is given. We also assume that the matrix $Q$ is positive definite. We define a Hilbert space

    ${\cal H} = L_2[0, T] \times L_2^n[0, T],$

    with norm

    $\| (u; x) \|_{\cal H}^2 = \int_0^T \left[ x^T(t) Q x(t) + u^2(t) \right] dt$.

    Let the constraint variety be defined as

    $V_{x_0} = \left\{ (u; x) \;\middle|\; x(t) = e^{At} x_0 + \int_0^t e^{A(t-s)} b u(s) \, ds \right\}$.

    Note again that $V_{x_0}$ (and hence also $V_0$) is closed by the closed graph theorem. As for the discrete case, we...
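The chapter develops the regulator through Hilbert-space projection; for comparison, the standard Riccati route computes the same optimal feedback. The sketch below uses SciPy on an assumed double-integrator example with $Q = I$ and the $u^2(t)$ term corresponding to $R = 1$; it is not the book's derivation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example system (double integrator); not taken from the book.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # positive definite, as the text assumes
R = np.array([[1.0]])    # the u^2(t) term in J(u) corresponds to R = 1

# Stationary Riccati solution; the optimal feedback is u = -R^{-1} b^T P x.
P = solve_continuous_are(A, b, Q, R)
K = np.linalg.solve(R, b.T @ P)

# The closed-loop matrix A - bK is Hurwitz: the regulator stabilizes.
assert np.all(np.linalg.eigvals(A - b @ K).real < 0)
```

Here the infinite-horizon algebraic Riccati equation stands in for the finite-horizon problem on $[0, T]$, a reasonable approximation when $T$ is large.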

  10. Chapter Seven
    (pp. 113-132)

    In many cases, the type of construction we have seen so far in this book is not enough, since one sometimes wants the generated curve to exhibit certain structural properties, such as monotonicity or convexity (see [17], [21], [41], [47], [49], [66], [76], [85]). These properties correspond to nonnegativity constraints on the first and second derivatives of the curve, respectively, and hence the nonnegative derivative constraint will be the main focus of this chapter.

    Consider the problem of constructing a curve that passes close to given data points while, at the same time, exhibiting certain monotonicity properties. In other words, if...
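A discrete cousin of this problem is isotonic regression: least-squares fitting under a nondecreasing constraint. The pool-adjacent-violators sketch below illustrates the constrained-fitting idea; it is my own example, not the book's spline construction.

```python
def pava(y):
    """Pool-adjacent-violators: the least-squares nondecreasing fit to y.

    Each entry of `out` is a (mean, weight) block; adjacent blocks that
    violate monotonicity are merged into their weighted average.
    """
    out = []
    for v in y:
        out.append((float(v), 1.0))
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append(((m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2))
    # Expand the blocks back to one fitted value per data point.
    fitted = []
    for m, w in out:
        fitted.extend([m] * int(w))
    return fitted

# The violating pair (3, 2) is pooled to its average 2.5.
print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

Where the data already respect the constraint the fit passes through them exactly; only violating stretches are flattened, much as an inequality-constrained spline is only "active" where the constraint binds.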

  11. Chapter Eight
    (pp. 133-154)

    A basic problem in statistics is the following. Consider a data set

    $D = \{ (t_i, \alpha_i) : i = 1, \ldots, N \},$

    which, for convenience, we will assume consists of one-dimensional data, although this is not technically necessary. We are given some class of functions, which may be presented parametrically or nonparametrically. For example, we could be given all lines of the form $y = ax + b$ (parametrically), or the space of all polynomials or smooth functions (nonparametrically).

    Whether the problem is parametric or nonparametric, the basic idea is to define, for each point, a residue. For example,

    $r_i(a, b) = (\alpha_i - a t_i - b)^2$

    provides a simple “distance” from the line to the...
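For the parametric line class, minimizing the sum of these residues is ordinary least squares; a minimal sketch with invented data follows.

```python
import numpy as np

# Invented data lying exactly on the line y = 2x + 1.
t = np.array([0.0, 1.0, 2.0, 3.0])
alpha = np.array([1.0, 3.0, 5.0, 7.0])

# Minimize sum_i r_i(a, b) = sum_i (alpha_i - a t_i - b)^2 over (a, b).
M = np.column_stack([t, np.ones_like(t)])   # design matrix for y = a x + b
(a, b), *_ = np.linalg.lstsq(M, alpha, rcond=None)

assert np.isclose(a, 2.0) and np.isclose(b, 1.0)
```

Replacing the columns of `M` with other basis functions gives the nonparametric variants in the same least-squares form.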

  12. Chapter Nine
    (pp. 155-168)

    Even though the main goal of this chapter is to derive an algorithm for transferring the state of the system between affine varieties, we will start by recalling the solution to the point-to-point transfer problem, as previously defined in Chapter 2.

    The point-to-point transfer problem involves driving a linear system of differential equations between given boundary states,

    $\left\{ \begin{array}{l} \dot x = Ax + Bu, \\ x(T_0) = x_0, \quad x(T_1) = x_1, \end{array} \right.$

    where $u \in \mathbb{R}^m$ is the control signal, $x \in \mathbb{R}^n$ the state vector, $A \in \mathbb{R}^{n \times n}$, and $B \in \mathbb{R}^{n \times m}$.

    The point-to-point transfer should be done in such a way that a cost functional is minimized with respect to the control signal. The cost functional that we choose to...
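The excerpt's cost functional is cut off above; assuming the common minimum-energy choice $J(u) = \int_{T_0}^{T_1} u^T(t) u(t)\,dt$ (the book's own choice may differ), the point-to-point transfer has a closed-form control built from the controllability Gramian, sketched here for an assumed double-integrator example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Assumed example system (double integrator); not taken from the book.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
T0, T1 = 0.0, 1.0
x0 = np.array([0.0, 0.0])
x1 = np.array([1.0, 0.0])

# Controllability Gramian W = int_{T0}^{T1} e^{As} B B^T e^{A^T s} ds.
W, _ = quad_vec(lambda s: expm(A * s) @ B @ B.T @ expm(A.T * s), T0, T1)

# Minimum-energy control u(t) = B^T e^{A^T (T1 - t)} W^{-1} (x1 - e^{A(T1-T0)} x0).
eta = np.linalg.solve(W, x1 - expm(A * (T1 - T0)) @ x0)

def u(t):
    return (B.T @ expm(A.T * (T1 - t)) @ eta).item()

# Forward-Euler check that the control actually steers x0 to x1.
x, tt, dt = x0.copy(), T0, 1e-4
for _ in range(int((T1 - T0) / dt)):
    x = x + dt * (A @ x + (B * u(tt)).ravel())
    tt += dt
assert np.allclose(x, x1, atol=1e-2)
```

Substituting this `u` into the variation-of-constants formula makes the forced response equal $W\eta$, which is exactly the boundary mismatch $x_1 - e^{A(T_1 - T_0)}x_0$, so the endpoint condition is met by construction.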

  13. Chapter Ten
    (pp. 169-192)

    Radio transmitters are regularly used to track wildlife. A small transmitter is attached to the subject animal and the animal is released. Over the next hours, days, or weeks, depending on the animals and on the particular research problem, signals are recorded from multiple locations at generally nonsynchronized times. These signals are uplinked to a NOAA (National Oceanic and Atmospheric Administration) weather satellite orbiting above Earth, where the signals are preprocessed and stored. Later, as the NOAA satellite passes over a ground station, the information is downlinked.

    In order to correctly specify the latitude and longitude of the tracked animal,...

  14. Chapter Eleven NODE SELECTION
    (pp. 193-204)

    So far in this book, we have seen how to generate a large collection of different curves based on given sets of data points. If these points are to be selected rather than just given somehow, one can of course ask how they should be selected. This turns out to be a question that cannot be tackled as a minimum norm problem in a Hilbert space. Nonetheless, it is such an important problem that we include it in this book for the sake of completeness.

    In particular, we consider the problem of selecting the data points in an optimal fashion...

  15. Bibliography
    (pp. 205-214)
  16. Index
    (pp. 215-217)