Prediction and Regulation by Linear Least-Square Methods

Peter Whittle
FOREWORD BY THOMAS J. SARGENT
Copyright Date: 1983
Edition: NED - New edition
Pages: 210
https://www.jstor.org/stable/10.5749/j.ctttsphx
    Book Description:

    Prediction and Regulation by Linear Least-Square Methods was first published in 1963; this revised second edition was issued in 1983. Minnesota Archive Editions uses digital technology to make long-unavailable books once again accessible; these books are published unaltered from the original University of Minnesota Press editions. During the past two decades, statistical theories of prediction and control have assumed an increasing importance in all fields of scientific research. To understand a phenomenon is to be able to predict it and to influence it in predictable ways. Long out of print, Prediction and Regulation by Linear Least-Square Methods offers important tools for constructing models of dynamic phenomena. This elegantly written book has been a basic reference for researchers in many applied sciences who seek practical information about the representation and manipulation of stationary stochastic processes. Peter Whittle’s text has a devoted group of readers and users, especially among economists. This edition contains the unchanged text of the original and adds two new chapters by the author and a foreword by economist Thomas J. Sargent.

    eISBN: 978-1-4529-3855-4
    Subjects: Statistics

Table of Contents

  1. Front Matter
    (pp. i-xii)
  2. Table of Contents
    (pp. xiii-xvi)
  3. CHAPTER 1 INTRODUCTION
    (pp. 1-11)

    A title such as “prediction theory” is apt to raise unjustified expectations, because any claim to foretell the future, in however trivial and limited a sense, can never be regarded as something completely matter of fact. For this reason, we hasten to define our position.

    Ideally, prediction is a by-product of the quantitative understanding of a situation, of a physical model. Thus, knowledge of Newton’s laws of motion enables one to predict the paths of the planets with extreme accuracy; the laws of dynamics and elasticity enable one to predict the motion of a structure such as a bridge or...

  4. CHAPTER 2 STATIONARY AND RELATED PROCESSES
    (pp. 12-30)

    Conventional prediction theory is closely bound up with the ideas of stationary and of linear processes. This chapter is a summary (not an exposition!) of such of these ideas as are necessary to the understanding of the rest of the book. At the same time we fix our notation.

    Abbreviations which we shall use in the text are: l.s., “least square”; l.l.s., “linear least square”; l.l.s.e., “linear least square estimate”; m.s., “mean square”; m.s.e., “mean square error”; a.r., “autoregression”; m.a., “moving average”; s.d.f., “spectral density function”; and s.d.m., “spectral density matrix”.

    Very often, although not invariably, the same letter will...

  5. CHAPTER 3 A FIRST SOLUTION OF THE PREDICTION PROBLEM
    (pp. 31-45)

    In this chapter we consider a purely non-deterministic process $\left\{ x_{t} \right\}$, and construct the l.l.s.e. of a variate $y$ which is based upon the sample of past values $\left( x_{s};\ s \leq t \right)$. An important case is that in which $y$ is a term $y_{t}$ in a process $\left\{ y_{t} \right\}$ jointly stationary with $\left\{ x_{t} \right\}$. If $y_{t} = x_{t+v}$, then the problem is one of pure prediction, and most of our examples concern this special case. More general examples will be given in Chapter 6, where we describe Wiener’s method. The method used in this chapter is essentially a specialised version of one due to...
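
    As a point of reference (an invented illustration, not the book’s treatment), the pure-prediction case is easily worked out for a first-order autoregression: if $x_{t} = a x_{t-1} + \epsilon_{t}$ with white noise $\epsilon_{t}$ of variance $\sigma^{2}$ and $\left| a \right| < 1$, the l.l.s.e. of $x_{t+v}$ based on $\left( x_{s};\ s \leq t \right)$ is $a^{v} x_{t}$, with m.s.e. $\sigma^{2} \left( 1 + a^{2} + \dots + a^{2(v-1)} \right)$. The short sketch below (parameter values my own) checks this numerically.

        import numpy as np

        # Invented illustration: pure prediction for an AR(1) process
        # x_t = a*x_{t-1} + e_t with noise variance sig2.  The l.l.s.e. of
        # x_{t+v} given the past is a**v * x_t.
        rng = np.random.default_rng(0)
        a, sig2, n, v = 0.7, 1.0, 50000, 2

        e = rng.normal(scale=np.sqrt(sig2), size=n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = a * x[t - 1] + e[t]

        pred = a ** v * x[:-v]                     # predictor of x_{t+v} from x_t
        mse = np.mean((x[v:] - pred) ** 2)         # empirical mean-square error
        print(mse, sig2 * sum(a ** (2 * k) for k in range(v)))   # the two should agree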

  6. CHAPTER 4 LEAST-SQUARE APPROXIMATION
    (pp. 46-55)

    This chapter is rather loosely related to the rest of the book: it can be read independently of preceding chapters, and the only unavoidable references to it are those in connection with regression sequences (Ch. 8, section 3) and certainty equivalence (Ch. 10, section 7). Nevertheless, it is the only part of the book in which an attempt is made to treat the least-square technique in any generality: for this reason it is important.

    We shall assume for the moment that all random variables have zero mean; it is understood throughout that they have finite variance. The cross-covariance matrix of...
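
    To fix ideas, the basic linear least-square formula can be quoted here (in notation of my own choosing, not necessarily the book’s): if $x$ and $y$ are zero-mean random vectors with $V_{xx} = E(xx')$ and $V_{yx} = E(yx')$, then the l.l.s.e. of $y$ from $x$ is $\hat{y} = V_{yx} V_{xx}^{-1} x$. A small numerical sketch, with invented covariances:

        import numpy as np

        # Invented illustration of y_hat = V_yx @ inv(V_xx) @ x for zero-mean
        # variates, with the covariances estimated from a simulated sample.
        rng = np.random.default_rng(1)
        cov = np.array([[2.0, 0.8, 0.5],
                        [0.8, 1.0, 0.3],
                        [0.5, 0.3, 1.5]])
        sample = rng.multivariate_normal(np.zeros(3), cov, size=100000)
        x, y = sample[:, :2], sample[:, 2]          # last coordinate plays the role of y

        xc, yc = x - x.mean(axis=0), y - y.mean()
        V_xx = xc.T @ xc / len(x)                   # estimate of E[x x']
        V_yx = yc @ xc / len(x)                     # estimate of E[y x']
        coef = V_yx @ np.linalg.inv(V_xx)
        print(coef, np.mean((y - x @ coef) ** 2))   # coefficients and residual m.s.e.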

  7. CHAPTER 5 PROJECTION ON THE INFINITE SAMPLE
    (pp. 56-65)

    We shall now consider the l.l.s. estimation of a variate $y$ on the basis of a complete realisation of a stationary process $\left\{ x_{t} \right\}$. Thus, the estimate $\hat{y}$ may be based upon the “future” as well as the “past” of $x$: this corresponds to the construction of what is sometimes known as an infinite lag filter. The calculation is a useful preliminary to the one in which $y$ is to be based upon the “past” of $x_{t}$ alone, touched on already in Ch. 3, and to be considered further in Ch. 6. However, the case is of interest in its...
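
    For orientation, the standard answer to this unrestricted projection problem can be quoted (in notation of my own choosing, which may not match the book’s): when $y = y_{t}$ for a process $\left\{ y_{t} \right\}$ jointly stationary with $\left\{ x_{t} \right\}$, the infinite lag filter $\hat{y}_{t} = \sum_{s=-\infty}^{\infty} c_{s} x_{t-s}$ has generating function $C(z) = \sum_{s} c_{s} z^{s} = g_{yx}(z) / g_{xx}(z)$, where $g_{xx}(z) = \sum_{k} \mathrm{cov}(x_{t}, x_{t-k})\, z^{k}$ and $g_{yx}(z) = \sum_{k} \mathrm{cov}(y_{t}, x_{t-k})\, z^{k}$; equivalently, in terms of spectral densities, $C(e^{i\omega}) = f_{yx}(\omega) / f_{xx}(\omega)$.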

  8. CHAPTER 6 PROJECTION ON THE SEMI-INFINITE SAMPLE
    (pp. 66-71)

    Suppose that we have a “semi-infinite sample” $\left( x_{s},\ s \leq t \right)$ from a stationary process $\left\{ x_{t} \right\}$, and that this is to be used to construct the l.l.s.e. $\hat{y}$ of a variate $y$. In particular, $y$ may be the value of $y_{t}$, where $\left\{ y_{t} \right\}$ is a process jointly stationary with $\left\{ x_{t} \right\}$.

    We have already solved the problem of the construction of $\hat{y}$ in section (3.7), and have studied in detail the case of pure prediction, where $y_{t} = x_{t+v}$. In this chapter we shall consider a number of more general examples, and shall also re-derive the solution of section (3.7) by a...
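
    For reference, the classical pure-prediction formula can be stated in notation of my own choosing (it may differ in detail from the book’s). If the s.d.f. admits the canonical factorisation $g(z) = \sigma^{2} B(z) B(z^{-1})$, where $B(z) = \sum_{j \geq 0} b_{j} z^{j}$, $b_{0} = 1$ and $B(z) \neq 0$ for $\left| z \right| \leq 1$, then the l.l.s. predictor of $x_{t+v}$ from $\left( x_{s},\ s \leq t \right)$ is obtained by applying to the observations the filter with generating function $C(z) = \left[ z^{-v} B(z) \right]_{+} / B(z)$, where $\left[ \cdot \right]_{+}$ retains only the non-negative powers of $z$; the corresponding prediction m.s.e. is $\sigma^{2} \sum_{j=0}^{v-1} b_{j}^{2}$.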

  9. CHAPTER 7 PROJECTION ON THE FINITE SAMPLE
    (pp. 72-82)

    Quite often a record of observations stretching into the distant past is not available, and prediction or estimation must be based on a finite sample; on a series of observations taken over a finite interval of time. This is the case in anti-aircraft gunnery, for example, where the position of the plane must be fixed from observations extending over only a few seconds (although it may well be that observations from earlier instants of time would not materially improve the prediction). In the discrete time case we have then the problem of predicting from a sample $\left( x_{0}, x_{1}, \ldots, x_{n-1} \right)$, say, and...
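
    In the simplest one-step discrete-time case this amounts to solving a set of normal equations in the known autocovariances. The sketch below is a generic illustration (the function name, parameter values and the AR(1) example are my own, not the book’s): it predicts $x_{n}$ from $\left( x_{0}, x_{1}, \ldots, x_{n-1} \right)$ when the autocovariance function $\gamma(k)$ is given.

        import numpy as np

        # Generic sketch: one-step l.l.s. prediction of x_n from the finite sample
        # (x_0, ..., x_{n-1}) of a stationary process with known autocovariances
        # gamma(k).  The coefficients a in x_hat_n = sum_j a[j] * x_{n-1-j} solve the
        # normal equations  Gamma a = g, with Gamma[i, j] = gamma(|i - j|) and
        # g[j] = gamma(j + 1).

        def finite_sample_predictor(gamma, n):
            Gamma = np.array([[gamma(abs(i - j)) for j in range(n)] for i in range(n)])
            g = np.array([gamma(j + 1) for j in range(n)])
            return np.linalg.solve(Gamma, g)

        # Example: AR(1) autocovariances gamma(k) = a**|k| / (1 - a**2) with a = 0.7.
        a = 0.7
        print(finite_sample_predictor(lambda k: a ** abs(k) / (1 - a ** 2), n=5))
        # Only the most recent observation matters: the result is (0.7, 0, 0, 0, 0).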

  10. CHAPTER 8 DEVIATIONS FROM STATIONARITY: TRENDS, DETERMINISTIC COMPONENTS AND ACCUMULATED PROCESSES
    (pp. 83-97)

    One often has to deal with non-stationary series; indeed, except in physical data, it is rare to find a series at all which is truly stationary. For example, population and economic series are evolutive, almost by their very nature. In tracking an approaching enemy aircraft one has a non-stationary situation, and even if one tries to avoid this by postulating a stream of aircraft, there is still the consideration that uniformly good prediction is unnecessary; the only error that matters is the one made at the instant of firing.

    In dropping the assumption of stationarity, one is left with scarcely...

  11. CHAPTER 9 MULTIVARIATE PROCESSES
    (pp. 98-105)

    It is probably not unfair to say that 90% of the literature is devoted to univariate rather than to multivariate processes, while in genuine applications the frequencies are reversed. One almost always works with systems whose states must be described by several variables, and ultimately one must be able to cope with the multivariate situation, although it is reasonable that ideas and methods should be developed first for the univariate case.

    In fact, with the appropriate formalism, almost all our previous work generalises immediately. There is one exception, however: the canonical factorisation of the s.d.f. $g(z)$, or $f(\omega)$,...
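
    For comparison, the scalar factorisation which the multivariate theory must generalise can be written (in notation of my own choosing) as $g(z) = \sigma^{2} B(z) B(z^{-1})$, with $B(z) = \sum_{j \geq 0} b_{j} z^{j}$, $b_{0} = 1$ and $B(z) \neq 0$ for $\left| z \right| \leq 1$. For example, the m.a. process $x_{t} = \epsilon_{t} + \beta \epsilon_{t-1}$ with $\left| \beta \right| < 1$ has $g(z) = \sigma^{2} (1 + \beta z)(1 + \beta z^{-1})$, the factor $B(z) = 1 + \beta z$ being canonical because its zero lies outside the unit circle; it is the matrix analogue of this factorisation which does not come so easily.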

  12. CHAPTER 10 REGULATION
    (pp. 106-141)

    As we emphasised in the first chapter, prediction is rarely an end in itself. In most cases the predicted value, once obtained, is used to initiate or modify a course of action. For example, the predicted course of a target plane is immediately used to help aim anti-aircraft equipment; sales forecasts are used to help plan production.

    In this larger context the problem of prediction appears only as incidental, and the central problem is that of regulation, i.e. of using past values to determine present action in such a way that the future course of the process is as near...

  13. CHAPTER 11 LLS PREDICTION, 1982: A PERSONAL VIEW
    (pp. 142-158)

    In these two chapters we use notation which has become widely accepted as standard in recent years rather than persist with the notation of Chapters 1 through 10. This seems preferable if consequent inconsistencies can be remedied by an immediate clarification of conventions.

    The inconsistencies can be summarised: the quantities denoted $y$, $x$, $u$ and $U$ in Chapter 10 are now denoted $x$, $u$, $\overline{x}$ and $T$, respectively.

    Explicitly, $x$ denotes the process variable, the variable in terms of which the dynamics of the underlying process are expressed. This will be the state variable if the problem has state-structure. In...

  14. CHAPTER 12 LQG REGULATION AND CONTROL, 1982: A PERSONAL VIEW
    (pp. 159-179)

    In the first edition, the term “regulation” was used as synonymous with “control”. However, it has come to be used to describe the class of control problems in which one seeks to hold the process variable (or relevant components of it) at a prescribed fixed value, and our interests are more general than that.

    In this chapter we consider the recursive methods of control optimisation yielded by appeal to the dynamic programming principle, first for a general stochastic formulation and then for the two quite distinct specialisations of LQG structure and of state structure. However, in the case of a...
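
    As a rough indication of how the LQG specialisation works out (a generic sketch under assumed model matrices, not a transcription of the book’s treatment): with plant $x_{t+1} = A x_{t} + B u_{t} + \epsilon_{t}$ and cost $\sum_{t} \left( x_{t}' Q x_{t} + u_{t}' R u_{t} \right)$, the dynamic programming principle gives a quadratic value function whose matrix satisfies a backward Riccati recursion, the noise contributing only an additive constant.

        import numpy as np

        # Generic sketch (model matrices invented for illustration): finite-horizon
        # LQ regulation by backward dynamic programming.  Plant: x_{t+1} = A x_t + B u_t
        # + noise; cost: sum of x'Qx + u'Ru.  The value function is x'P_t x plus a
        # constant, and the noise enters only that constant (certainty equivalence),
        # so the gains K_t below serve the LQG case as well.

        def lq_gains(A, B, Q, R, horizon):
            """Return feedback gains K_t (u_t = -K_t x_t) and the initial P matrix."""
            P = Q.copy()                                # terminal value-function matrix
            gains = []
            for _ in range(horizon):
                S = R + B.T @ P @ B
                K = np.linalg.solve(S, B.T @ P @ A)     # optimal gain at this stage
                P = Q + A.T @ P @ A - A.T @ P @ B @ K   # Riccati recursion
                gains.append(K)
            return gains[::-1], P                       # gains in forward time order

        # Hypothetical second-order example.
        A = np.array([[1.0, 1.0], [0.0, 1.0]])
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])
        K, P = lq_gains(A, B, Q, R, horizon=20)
        print(K[0])                                     # gain applied at the first instant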

  15. REFERENCES
    (pp. 180-181)
  16. SUPPLEMENTARY REFERENCES FOR THE SECOND EDITION
    (pp. 182-184)
  17. NAME AND SUBJECT INDEX
    (pp. 185-188)
  18. Back Matter
    (pp. 189-189)