Empirical Model Discovery and Theory Evaluation: Automatic Selection Methods in Econometrics

David F. Hendry
Jurgen A. Doornik
Copyright Date: 2014
Published by: MIT Press
Pages: 392
https://www.jstor.org/stable/j.ctt9qf9km
    Book Description:

    Economic models of empirical phenomena are developed for a variety of reasons, the most obvious of which is the numerical characterization of available evidence, in a suitably parsimonious form. Another is to test a theory, or evaluate it against the evidence; still another is to forecast future outcomes. Building such models involves a multitude of decisions, and the large number of features that need to be taken into account can overwhelm the researcher. Automatic model selection, which draws on recent advances in computation and search algorithms, can create, and then empirically investigate, a vastly wider range of possibilities than even the greatest expert. In this book, leading econometricians David Hendry and Jurgen Doornik report on their several decades of innovative research on automatic model selection. After introducing the principles of empirical model discovery and the role of model selection, Hendry and Doornik outline the stages of developing a viable model of a complicated evolving process. They discuss the discovery stages in detail, considering both the theory of model selection and the performance of several algorithms. They describe extensions to tackling outliers and multiple breaks, leading to the general case of more candidate variables than observations. Finally, they briefly consider selecting models specifically for forecasting.

    eISBN: 978-0-262-32441-0
    Subjects: Economics

Table of Contents

  1. Front Matter
    (pp. i-iv)
  2. Table of Contents
    (pp. v-xii)
  3. About the Arne Ryde Foundation
    (pp. xiii-xiv)
  4. Preface
    (pp. xv-xx)
  5. Acknowledgments
    (pp. xxi-xxiv)
  6. Glossary
    (pp. xxv-xxvi)
  7. Data and Software
    (pp. xxvii-xxviii)
  8. I Principles of Model Selection
    • 1 Introduction
      (pp. 3-16)

      This chapter provides an overview of the book. Models of empirical phenomena are needed for four main reasons: understanding the evolution of data processes, testing subject-matter theories, forecasting future outcomes, and conducting policy analyses. All four intrinsically involve discovery, since many features of all economic models lie outside the purview of prior reasoning, theoretical analyses or existing evidence. Economies are so high dimensional, evolutionary from many sources of innovation, and non-constant from intermittent, often unanticipated, shifts that discovering their properties is the key objective of empirical modeling. Automatic selection methods can outperform experts in formulating models when there are many...

    • 2 Discovery
      (pp. 17-30)

      This chapter begins by discussing some of the many ways in which discoveries have been made in physical and biological sciences. There seem to be seven common aspects of such discoveries and their subsequent evaluations. Despite important differences, discovery and evaluation in economics are similar to those of other sciences, and the same seven common aspects can be discerned. The complexity of macroeconomic data intrinsically involves empirical discovery with theory evaluation, as well as requiring rigorous evaluation of selected models to ascertain their viability. Directly fitting a theory-specified model limits the potential for discovery. Retrospectively, covert discovery has been common...

    • 3 Background to Automatic Model Selection
      (pp. 31-56)

      This chapter explains the background in more detail, to provide a framework for the analysis in the rest of the book. We first note the many criticisms of model selection that have been made. While many of these are valid for some approaches, we will show that almost all are rebutted for general-to-specific (Gets) model selection. Many criticisms are based on an assumed level of knowledge where discovery is unnecessary, so fail to address that key issue. The main aim of this chapter is to walk through the six stages leading from simple selection to model discovery in a context...

    • 4 Empirical Modeling Illustrated
      (pp. 57-70)

      This chapter illustrates the discussion in Chapter 3 using Autometrics on an artificial data generation process first created almost 30 years ago for an early version of PcGive. In a setting where the DGP is known to the investigator, but obviously not to the software, what is found can be checked against what should have been found. The major points of Chapter 3 that are implemented are estimating the system of four simultaneous equations; diagnostic checking for the selection of a single equation being well-specified; parsimonious-encompassing tests of it against its GUM; testing for non-linearities; handling more candidate variables than...

    • 5 Evaluating Model Selection
      (pp. 71-84)

      Empirical models could be chosen according to many criteria, as this chapter discusses. Selection criteria can conflict, for example when achieving empirical congruence thwarts theory consistency or vice versa, so we consider nine possible ways to judge the success of selection. Of these, four seem infeasible and two are widely used but seem suspect, so we focus on the remaining three practical criteria, namely a selection algorithm’s ability to recover the local data-generating process (LDGP) starting from the general unrestricted model as often as when starting from the LDGP itself; whether the operating characteristics of the algorithm match their desired properties;...
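
      The operating characteristics referred to here are measured in later chapters by the gauge (the retention rate of irrelevant variables) and the potency (the retention rate of relevant variables). The sketch below shows one way to compute them from Monte Carlo selection outcomes; the function name, array layout, and example numbers are illustrative assumptions, not code from the book.

```python
# Sketch: gauge and potency computed from Monte Carlo selection outcomes.
# `kept` is a (replications x candidate variables) boolean array of retention
# decisions produced by any selection algorithm; `relevant` marks which
# candidates belong to the LDGP.  Names and shapes are illustrative.
import numpy as np

def gauge_and_potency(kept: np.ndarray, relevant: np.ndarray):
    """Gauge: average retention rate of irrelevant variables.
    Potency: average retention rate of relevant variables."""
    kept = np.asarray(kept, dtype=float)
    gauge = kept[:, ~relevant].mean() if (~relevant).any() else 0.0
    potency = kept[:, relevant].mean() if relevant.any() else 0.0
    return gauge, potency

# Example: 3 replications, 5 candidate variables, the first 2 relevant.
kept = np.array([[1, 1, 0, 0, 1],
                 [1, 0, 0, 0, 0],
                 [1, 1, 1, 0, 0]], dtype=bool)
relevant = np.array([True, True, False, False, False])
print(gauge_and_potency(kept, relevant))   # approximately (0.22, 0.83)
```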

    • 6 The Theory of Reduction
      (pp. 85-96)

      A well-defined sequence of reduction operations leads from the data-generating process (DGP) of the economy under analysis to the local DGP (LDGP), which is the generating process in the space of the variables under analysis. The resulting LDGP may be complex, non-linear and non-constant from aggregation, marginalization, and sequential factorization, depending on the choice of the set of variables under analysis. A good choice—one where there are no, or only small, losses of information from the reductions—is crucial if the DGP is to be viably captured by any empirical modeling exercise. In practice, the LDGP is also approximated...

    • 7 General-to-specific Modeling
      (pp. 97-114)

      After noting a variety of extant approaches to automatic model selection, we consider the six main stages in formulating and implementing aGetsapproach to model discovery. First, a careful formulation of the general unrestricted model (GUM) for the problem under analysis is essential. Second, the measure of congruence must be decided by choosing the mis-specification tests to be used, their forms, and significance levels. Third, the desired null rejection frequencies for selection tests must be set, together with an information criterion to select between mutually encompassing, undominated, congruent models. Fourth, the GUM needs to be appropriately estimated, depending on...

  9. II Model Selection Theory and Performance
    • 8 Selecting a Model in One Decision
      (pp. 117-126)

      We now consider the special case in which a congruent, constant regression model in mutually orthogonal, valid conditioning variables can be successfully selected in one decision using the criteria discussed in chapter 5. This establishes a baseline, which demonstrates that the false null retention rate can be controlled, and that repeated testing is not an intrinsic aspect of model selection, even if there are 10³⁰⁰ possible models, as occurs here when N = 1000. Goodness-of-fit estimates, mean squared errors, and the consistency of the selection are all discussed. However, the estimates from the selected model do not have the same...
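
      As a rough illustration of such a one-decision ("1-cut") rule, the sketch below fits the general model once and retains every regressor whose |t|-ratio exceeds the critical value, so no search over the many possible models is needed. The DGP, sample size, and significance level are illustrative assumptions, not the book's settings.

```python
# Sketch of 1-cut selection: with mutually orthogonal, valid conditioning
# variables, a single decision suffices: fit the general model once and keep
# every regressor whose |t|-ratio exceeds the critical value c_alpha.
# The DGP, sample size and significance level below are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, N = 500, 50                                  # observations, candidate variables
X = rng.standard_normal((T, N))                 # (near-)orthogonal regressors
beta = np.r_[np.full(10, 0.25), np.zeros(N - 10)]
y = X @ beta + rng.standard_normal(T)

alpha = 0.01
c_alpha = stats.t.ppf(1 - alpha / 2, df=T - N)  # two-sided critical value

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_hat
sigma2 = resid @ resid / (T - N)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
keep = np.abs(b_hat / se) > c_alpha             # the single "1-cut" decision

print(f"retained {keep.sum()} of {N} variables; "
      f"expected false retentions about {alpha * (N - 10):.1f}")
```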

    • 9 The 2-variable DGP
      (pp. 127-132)

      One of the few settings where analytical distributions are available is 1-cut selection in a 2-variable, constant parameter, linear regression model that coincides with the DGP. Leeb and Pötscher (2003, 2005) derive the distributions of post model-selection estimators, their associated confidence intervals, and estimator biases, and establish that asymptotic derivations do not hold uniformly for local alternatives. Consequently, finite-sample behavior could differ markedly from the asymptotic distribution. We review their main results, consider some implications of their analyses, and discuss what they might entail in the realistic setting when the GUM is more general than the LDGP, as the latter...

    • 10 Bias Correcting Selection Effects
      (pp. 133-140)

      We develop approximate bias corrections for the conditional distributions of the estimated parameters of retained variables after model selection, such that approximately unbiased estimates of their coefficients are delivered. Such corrections also drive estimated coefficients of irrelevant variables towards the origin, substantially reducing their mean squared errors (MSEs). We illustrate the theory by simulating selection from N = 1000 variables, to examine the impacts of our approach on estimated coefficient MSEs for both relevant and irrelevant variables in their conditional and unconditional distributions.

      The estimates from the selected model do not have the same properties as if the LDGP...
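
      The flavour of such a correction can be conveyed with a small sketch: conditional on retention at |t| > c, the t-ratio behaves approximately like a doubly truncated normal, so its mean exceeds the true noncentrality. The book derives approximate analytic corrections; the stand-in below simply inverts the truncated-normal mean numerically, and its function names and numbers are illustrative assumptions.

```python
# Sketch of a post-selection bias correction for a retained coefficient.
# Conditional on retention (|t| > c), the t-ratio is approximately a doubly
# truncated normal, so its mean exceeds the true noncentrality psi.  As a
# simplified stand-in for the book's analytic corrections, we invert the
# truncated-normal mean numerically and rescale the estimate.
import numpy as np
from scipy import optimize, stats

def truncated_mean(psi: float, c: float) -> float:
    """E[t | |t| > c] when t ~ N(psi, 1)."""
    p = 1 - stats.norm.cdf(c - psi) + stats.norm.cdf(-c - psi)
    return psi + (stats.norm.pdf(c - psi) - stats.norm.pdf(c + psi)) / p

def bias_correct(beta_hat: float, se: float, c: float) -> float:
    """Bias-corrected coefficient for a variable retained because |t| > c."""
    t_obs = beta_hat / se
    # Find the noncentrality psi whose truncated mean equals the observed t.
    psi = optimize.brentq(lambda p: truncated_mean(p, c) - t_obs,
                          -abs(t_obs) - 10, abs(t_obs) + 10)
    return psi * se

# Illustration: a just-significant estimate is pulled towards the origin.
print(bias_correct(beta_hat=0.22, se=0.10, c=1.96))
```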

    • 11 Comparisons of 1-cut Selection with Autometrics
      (pp. 141-150)

      Having established that the properties of bias-corrected 1-cut selections match our three criteria, having appropriate gauge and potency, with near unbiased estimates, and relatively small MSEs, we now evaluate the comparative properties of Autometrics. First, the tree-search algorithm of Autometrics is described and its operational procedures outlined. The framework remains the same as in chapter 8, namely a congruent, constant regression model in mutually orthogonal, valid conditioning variables, but unlike 1-cut, Autometrics neither uses nor needs to exploit orthogonality. We wish to evaluate whether, despite exploring many paths, there is any deterioration in the selection quality: in several important...
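
      A heavily simplified impression of a multi-path tree search is sketched below: starting from the general model, branch on each insignificant variable, recurse, keep terminal models in which everything retained is significant, and break ties by an information criterion. The real Autometrics algorithm adds diagnostic (congruence) checks, block deletions, and parsimonious-encompassing tests against the GUM, all omitted here; the DGP and settings are illustrative assumptions.

```python
# Heavily simplified sketch of a multi-path tree search: from the general
# model, branch on each insignificant variable, recurse, and keep terminal
# models in which every retained variable is significant; ties between
# terminal models are broken by an information criterion (BIC here).
import numpy as np

def fit(y, X, cols):
    Xs = X[:, cols]
    b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ b
    sigma2 = resid @ resid / (len(y) - len(cols))
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
    bic = len(y) * np.log(resid @ resid / len(y)) + len(cols) * np.log(len(y))
    return np.abs(b / se), bic

def tree_search(y, X, cols, crit=2.0, seen=None, terminals=None):
    seen = set() if seen is None else seen
    terminals = {} if terminals is None else terminals
    key = tuple(cols)
    if key in seen or not cols:
        return terminals
    seen.add(key)
    tvals, bic = fit(y, X, cols)
    insignificant = [c for c, t in zip(cols, tvals) if t < crit]
    if not insignificant:        # terminal model: everything retained is significant
        terminals[key] = bic
        return terminals
    for c in insignificant:      # branch: delete each insignificant variable in turn
        tree_search(y, X, [k for k in cols if k != c], crit, seen, terminals)
    return terminals

# Illustrative use on an artificial DGP with two relevant variables.
rng = np.random.default_rng(2)
T, N = 200, 8
X = rng.standard_normal((T, N))
y = 0.5 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(T)
terminals = tree_search(y, X, list(range(N)))
best = min(terminals, key=terminals.get)
print(f"{len(terminals)} terminal model(s); selected regressors: {best}")
```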

    • 12 Impact of Diagnostic Tests
      (pp. 151-158)

      Chapter 7 considered the main mis-specification tests in Gets model selection using an information taxonomy of past, present and future data, theory and measurement information and rival models. The first seeks a homoskedastic innovation error {εt}; the second weak exogeneity of conditioning variables for the parameters of interest θ (say); the third, constant, invariant parameters, θ; the fourth theory consistent, identifiable structures; the fifth data-admissible formulations on accurate observations; and the sixth, encompassing rival models. We now address the specific mis-specification tests used in Autometrics to determine congruence, and consider their operating characteristics when applied to the DGP, the GUM...
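
      As a small, hand-rolled illustration of such a battery, the sketch below computes a Jarque-Bera normality check and a first-order residual-autocorrelation LM test on a residual vector. The actual Autometrics battery is richer (heteroskedasticity, ARCH, RESET, parameter constancy, and so on), and the significance level used here is an illustrative assumption.

```python
# Sketch of a small mis-specification battery of the kind used to judge
# congruence: a Jarque-Bera normality check and a first-order residual
# autocorrelation LM test, both hand-rolled with numpy.
import numpy as np
from scipy import stats

def jarque_bera(e):
    """Normality test on residuals e: JB ~ chi2(2) under the null."""
    T = len(e)
    z = (e - e.mean()) / e.std()
    skew, kurt = (z ** 3).mean(), (z ** 4).mean()
    jb = T / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
    return jb, 1 - stats.chi2.cdf(jb, df=2)

def ar1_lm(e):
    """LM test of first-order residual autocorrelation: T*R^2 ~ chi2(1)."""
    y, x = e[1:], e[:-1]
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - ((y - X @ b) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    lm = len(y) * r2
    return lm, 1 - stats.chi2.cdf(lm, df=1)

def congruent(residuals, alpha=0.01):
    """Pass the (tiny) battery if no test rejects at significance level alpha."""
    return all(p > alpha for _, p in (jarque_bera(residuals), ar1_lm(residuals)))
```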

    • 13 Role of Encompassing
      (pp. 159-166)

      Encompassing seeks to reconcile competing empirical models, all of which claim to explain some economic phenomena. If distinct competing models exist, all but one must either be incomplete or incorrect—and all may be false. By testing whether one model can account for the results found by the other models, investigators can learn how well their model performs relative to those, as well as reduce the class of admissible models. Some features of the LDGP may not be included, so different empirical models capture different sets of salient features. All empirical models are encompassed by the LDGP, in that knowledge...
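
      A minimal F-type encompassing check between two regression models can be sketched as follows: M1 encompasses M2 (relative to their union) if the regressors unique to M2 add nothing significant once M1's regressors are included. The function names are illustrative, and X2 is assumed to contain only the rival model's regressors not already in X1.

```python
# Sketch of a simple F-type encompassing test: model M1 (regressors X1)
# encompasses a rival M2 if the regressors unique to M2 add nothing
# significant once X1 is included.  X2 must hold only the rival model's
# regressors that are not already in X1; names are illustrative.
import numpy as np
from scipy import stats

def rss(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    return e @ e

def encompassing_F(y, X1, X2):
    """F-test of M1 against the union of M1's and M2's regressors."""
    X_union = np.column_stack([X1, X2])
    q = X2.shape[1]                          # number of added restrictions
    T, k = X_union.shape
    F = ((rss(y, X1) - rss(y, X_union)) / q) / (rss(y, X_union) / (T - k))
    return F, 1 - stats.f.cdf(F, q, T - k)
```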

    • 14 Retaining a Theory Model During Selection
      (pp. 167-174)

      Economic theories are often fitted directly to data to avoid any issues of model selection. This is an excellent strategy when the theory is complete and correct, but less successful otherwise. We consider embedding a theory model that specifies the set of n relevant exogenous variables, xt, within a larger set of n + k candidate variables, (xt, wt), but only select over the wt. When the theory model is complete and correct, so the wt are in fact irrelevant, by orthogonalizing them with respect to the xt, selection over the orthogonalized components can be undertaken without affecting the theory parameters’ estimator...
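
      A minimal sketch of this orthogonalized-selection idea is given below, with a plain t-rule standing in for Autometrics. Because the orthogonalized components have zero in-sample correlation with the theory variables, the coefficients on the forced-in theory variables are the same as from regressing on them alone; names and the critical value are illustrative assumptions.

```python
# Sketch of theory retention by orthogonalization: the theory variables are
# forced into the model, the extra candidates are orthogonalized with respect
# to them, and selection (a plain t-rule standing in for Autometrics) is
# applied only to the orthogonalized components.
import numpy as np

def orthogonalize(W, X):
    """Residuals of each column of W after projection on X."""
    B, *_ = np.linalg.lstsq(X, W, rcond=None)
    return W - X @ B

def retain_theory_select(y, X, W, crit=2.0):
    U = orthogonalize(W, X)
    Z = np.column_stack([X, U])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ b
    sigma2 = e @ e / (len(y) - Z.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Z.T @ Z)))
    t = np.abs(b / se)
    n = X.shape[1]
    keep_w = t[n:] > crit          # select only over the orthogonalized w-components
    return b[:n], keep_w           # theory-parameter estimates, retained extras
```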

    • 15 Detecting Outliers and Breaks Using IIS
      (pp. 175-194)

      The last of our six stages concerns detecting outliers and breaks. Impulse-indicator saturation (IIS: see Hendry et al., 2008, and Johansen and Nielsen, 2009) is analyzed under the null of no outliers, but with the aim of detecting and removing outliers and location shifts when they are present. The procedure creates an indicator for every observation, entered (in the simplest case) in blocks of T/2, noting that indicators are mutually orthogonal. First, add half the indicators, select as usual, record the outcome, then drop that set of indicators; next add the other half, selecting again. These first two steps correspond...
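
      The split-half version of the procedure can be sketched as follows; a simple t-rule selector stands in for the full Autometrics search with its diagnostic checks, and the critical value is an illustrative assumption.

```python
# Sketch of split-half impulse-indicator saturation (IIS): add indicators for
# the first half of the sample, select and record the survivors, drop them,
# repeat for the second half, then re-select over the union of recorded
# indicators.
import numpy as np

def t_select(y, Z, crit):
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ b
    sigma2 = e @ e / max(len(y) - Z.shape[1], 1)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Z.T @ Z)))
    return np.abs(b / se) > crit

def iis(y, X, crit=2.6):
    """Return the indices of observations flagged as outliers or shifts."""
    T = len(y)
    I = np.eye(T)                              # one impulse indicator per observation
    halves = [np.arange(T // 2), np.arange(T // 2, T)]
    recorded = []
    for h in halves:                           # saturate each half in turn
        Z = np.column_stack([X, I[:, h]])
        keep = t_select(y, Z, crit)[X.shape[1]:]
        recorded.extend(h[keep])
    if not recorded:
        return np.array([], dtype=int)
    Z = np.column_stack([X, I[:, recorded]])   # union of retained indicators
    keep = t_select(y, Z, crit)[X.shape[1]:]
    return np.array(recorded)[keep]
```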

    • 16 Re-modeling UK Real Consumers’ Expenditure
      (pp. 195-202)

      Many features of viable empirical models cannot be derived from economic theory, institutional knowledge, or past findings alone, and need to be based on current empirical evidence. Important aspects that have to be data-based include the complete set of explanatory variables, all their functional forms and the lag reactions to these, as well as any structural breaks. Specification of the general model is mainly up to the investigator, but can be supplemented by mis-specification information, as well as by commencing from a more general embedding model and selecting automatically. We consider the specific application of modeling aggregate real consumers’ expenditure...

    • 17 Comparisons of Autometrics with Other Approaches
      (pp. 203-222)

      There are many possible methods of model selection, most of which can be implemented in an automatic algorithm. General contenders include single path approaches such as forward and backward selection, mixed variants like step-wise, multi-path search methods including Hoover and Perez (1999) and PcGets, information criteria (AIC, BIC, etc.), Lasso, and RETINA, as well as a number of selection algorithms specifically designed for a forecasting context (such as PIC: see e.g., Phillips, 1995, 1996, and Phillips and Ploberger, 1996). Here we only consider the former group of methods in relation to Autometrics, partly to evaluate improvements over time in the...

    • 18 Model Selection in Underspecified Settings
      (pp. 223-230)

      Despite seeking to commence an empirical study from a general initial specification that nests the LDGP for the set of variables under analysis, the GUM may be an underspecification. Moreover, the selection of the variables to analyze could lead to a poor representation of the economic data generation process. In this setting, model selection, rather than just fitting a prior specification, may help. Impulse-indicator saturation can correct non-constancies induced by location shifts in omitted variables that alter the intercepts of models. Since IIS is a robust estimation method, it can mitigate some of the adverse effects of induced location shifts...

  10. III Extensions of Automatic Model Selection
    • 19 More Variables than Observations
      (pp. 233-242)

      We now move to also using Autometrics as a new way of thinking about a range of problems previously deemed almost intractable. In this part, we consider five major areas. First, the approach of impulse-indicator saturation leads in this chapter to ways of handling excess numbers of variables, N > T, based on a mixture of reduction and expansion steps, with an empirical illustration. Second, IIS also allows the investigation of multiple breaks, addressed in chapter 20. Third, chapter 21 considers selecting non-linear models, which raise some new issues, and also often involve more candidate variables than observations. In turn, applying...
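
      A much-simplified impression of the block strategy for more candidate variables than observations is sketched below: partition the candidates into blocks small enough to estimate, select within each block, pool the survivors, and repeat until the retained set stabilises. The full algorithm also has expansion steps that reconsider previously dropped variables, plus diagnostic checks, all omitted here; the block size and t-rule are illustrative assumptions.

```python
# Much-simplified sketch of block selection when N > T: select within
# feasible blocks of candidates, pool the survivors, and iterate.
import numpy as np

def t_keep(y, Z, crit):
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ b
    sigma2 = e @ e / max(len(y) - Z.shape[1], 1)
    se = np.sqrt(sigma2 * np.diag(np.linalg.pinv(Z.T @ Z)))
    return np.abs(b / se) > crit

def block_select(y, X, block_size=None, crit=2.6, max_iter=10):
    T, N = X.shape
    block_size = block_size or T // 2
    active = np.arange(N)
    for _ in range(max_iter):
        survivors = []
        for start in range(0, len(active), block_size):    # reduction over blocks
            block = active[start:start + block_size]
            survivors.extend(block[t_keep(y, X[:, block], crit)])
        survivors = np.array(sorted(survivors), dtype=int)
        if np.array_equal(survivors, active):
            break                                           # retained set has stabilised
        active = survivors
        if len(active) == 0:
            break
    return active
```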

    • 20 Impulse-indicator Saturation for Multiple Breaks
      (pp. 243-252)

      Chapter 15 considered the theory and practice of IIS, to show that the cost of applying that approach under the null of no outliers or breaks was low at reasonably tight significance levels. A pilot Monte Carlo illustrated its ability to detect outliers and breaks. Chapter 19 discussed the algorithm in Autometrics for implementing IIS together with selecting variables. Since the objective of IIS is to detect and help model breaks, in this chapter, we extend the range of experiments from D₁–D₃ considered earlier. A variety of breaks is examined, from a single start or end of sample location...

    • 21 Selecting Non-linear Models
      (pp. 253-262)

      The selection of a non-linear model often begins from a previous linear model and adds non-linear terms. Such an approach is specific-to-general in two respects. First, between studies, advances are bound to be generalizations as new knowledge accumulates, which is in part why scientific progress is so difficult. Second, however, one should not just extend the best earlier model, which was implicitly selected to accommodate all omitted effects, but commence with an identified and congruent general non-linear approximation which enters all the linear terms unrestrictedly and includes a complete set of impulse indicators so that non-linearities do not mis-represent breaks...
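
      One way to construct such a general non-linear approximation is sketched below: all linear terms entered unrestrictedly, squares and cross-products as the non-linear expansion, and a full set of impulse indicators so that location shifts are not mistaken for non-linearity. Higher-order and exponential terms, which a richer approximation would include, are omitted, and the function name is an illustrative assumption.

```python
# Sketch of a general non-linear approximation: linear terms, squares,
# cross-products, and a full set of impulse indicators.
import itertools
import numpy as np

def nonlinear_gum(X: np.ndarray) -> np.ndarray:
    """Candidate design matrix: linear terms, squares, cross-products, indicators."""
    T, n = X.shape
    squares = X ** 2
    cross_cols = [X[:, i] * X[:, j] for i, j in itertools.combinations(range(n), 2)]
    crosses = np.column_stack(cross_cols) if cross_cols else np.empty((T, 0))
    indicators = np.eye(T)                 # one impulse indicator per observation
    return np.column_stack([X, squares, crosses, indicators])

# With n original regressors and T observations, this GUM already has
# 2n + n(n-1)/2 + T candidate terms, so N > T selection methods are needed.
```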

    • 22 Testing Super Exogeneity
      (pp. 263-278)

      An automatically computable test is described, with null rejection frequencies that are close to the nominal size, and potency for failures of super exogeneity. Impulse-indicator saturation is undertaken in the marginal models of the putative exogenous variables that enter the conditional model contemporaneously, and all significant outcomes are recorded. These indicators from the marginal models are added to the conditional model and tested for significance. Under the null of super exogeneity, the test has the correct gauge for a range of sizes of marginal-model saturation tests, both when those processes are constant, and when they undergo shifts in either mean...
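
      The mechanics of the test can be sketched as follows. For brevity, the marginal-model step below flags large standardized residuals from an AR(1) fit rather than running full impulse-indicator saturation; the AR(1) marginal model, function names, and critical value are illustrative assumptions. The retained indicators are then added to the conditional model and tested jointly with an F-test.

```python
# Sketch of an impulse-indicator-based super-exogeneity test: detect
# outliers/shifts in the marginal models of the conditioning variables, add
# the corresponding impulse indicators to the conditional model, and test
# their joint significance.
import numpy as np
from scipy import stats

def rss(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    return e @ e

def super_exogeneity_test(y, Z, crit=2.6):
    """y: (T,) dependent variable; Z: (T, k) contemporaneous conditioning variables."""
    T, k = Z.shape
    flagged = set()
    for j in range(k):                                   # marginal model of each z_j
        zj, zlag = Z[1:, j], Z[:-1, j]
        Xm = np.column_stack([np.ones(T - 1), zlag])     # AR(1) marginal model
        bm, *_ = np.linalg.lstsq(Xm, zj, rcond=None)
        e = zj - Xm @ bm
        std_resid = e / e.std(ddof=Xm.shape[1])
        flagged.update(1 + np.flatnonzero(np.abs(std_resid) > crit))
    flagged = sorted(flagged)
    if not flagged:
        return None                                      # no indicators to test
    X0 = np.column_stack([np.ones(T), Z])
    X1 = np.column_stack([X0, np.eye(T)[:, flagged]])
    q, df = len(flagged), T - X1.shape[1]
    F = ((rss(y, X0) - rss(y, X1)) / q) / (rss(y, X1) / df)
    return F, 1 - stats.f.cdf(F, q, df)
```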

    • 23 Selecting Forecasting Models
      (pp. 279-308)

      Forecasting is different: the past is fixed, but the future is not. Practical forecasting methods rely on extrapolating presently available information into the future. No matter how good such methods are, they require that the future resembles the present in the relevant attributes. Intermittent unanticipated shifts violate that requirement, and breaks have so far eluded prediction. If no location shifts ever occurred, then the most parsimonious, congruent, undominated model in-sample would tend to dominate out of sample as well. However, if data processes are wide-sense non-stationary, different considerations matter for formulating, selecting, or using a forecasting model. In practice,...

    • 24 Epilogue
      (pp. 309-316)

      An Epilogue allows authors to recount and explain the plot; it is an end, but not a conclusion. Much research remains necessary to fully develop automatic methods for empirical model discovery with theory evaluation. Nevertheless, comparing our understanding with that of two decades ago, huge advances have occurred, and major strides have been achieved in the computer algorithms that underpin such an approach. Here we summarize the developments described in the book, and present a few final thoughts about the way ahead.

      Economic analysis reasons about the economy, which together with the measurement process by which economic outcomes are observed,...

  11. References
    (pp. 317-342)
  12. Author Index
    (pp. 343-348)
  13. Index
    (pp. 349-358)