Robust Optimization


Aharon Ben-Tal
Laurent El Ghaoui
Arkadi Nemirovski
Copyright Date: 2009
Pages: 564
https://www.jstor.org/stable/j.ctt7sk8p
    Book Description:

    Robust optimization is still a relatively new approach to optimization problems affected by uncertainty, but it has already proved so useful in real applications that it is difficult to tackle such problems today without considering this powerful methodology. Written by the principal developers of robust optimization, and describing the main achievements of a decade of research, this is the first book to provide a comprehensive and up-to-date account of the subject.

    Robust optimization is designed to meet some major challenges associated with uncertainty-affected optimization problems: to operate under lack of full information on the nature of uncertainty; to model the problem in a form that can be solved efficiently; and to provide guarantees about the performance of the solution.

    The book starts with a relatively simple treatment of uncertain linear programming, proceeding with a deep analysis of the interconnections between the construction of appropriate uncertainty sets and the classical chance constraints (probabilistic) approach. It then develops the robust optimization theory for uncertain conic quadratic and semidefinite optimization problems and dynamic (multistage) problems. The theory is supported by numerous examples and computational illustrations.

    An essential book for anyone working on optimization and decision making under uncertainty, Robust Optimization also makes an ideal graduate textbook on the subject.

    eISBN: 978-1-4008-3105-0
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-iv)
  2. Table of Contents
    (pp. v-viii)
  3. Preface
    (pp. ix-xxii)
    Aharon Ben-Tal, Laurent El Ghaoui and Arkadi Nemirovski
  4. PART I. ROBUST LINEAR OPTIMIZATION
    • Chapter One Uncertain Linear Optimization Problems and their Robust Counterparts
      (pp. 3-26)

      In this chapter, we introduce the concept of the uncertain Linear Optimization problem and its Robust Counterpart, and study the computational issues associated with the emerging optimization problems.

      Recall that the Linear Optimization (LO) problem is of the form

      $\min\limits_x \{ c^T x + d : Ax \le b \}$, (1.1.1)

      where $x \in \mathbb{R}^n$ is the vector of decision variables, $c \in \mathbb{R}^n$ and $d \in \mathbb{R}$ form the objective, $A$ is an $m \times n$ constraint matrix, and $b \in \mathbb{R}^m$ is the right hand side vector.

      Clearly, the constant term $d$ in the objective, while affecting the optimal value, does not affect the optimal solution; this is why it is traditionally skipped. As we shall see, when...
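
      As a minimal numerical illustration of (1.1.1), the sketch below solves a tiny instance of $\min_x \{ c^T x + d : Ax \le b \}$ with SciPy; the particular data, the choice of SciPy's linprog, and the explicit free-variable bounds are illustrative assumptions, not material from the book.

import numpy as np
from scipy.optimize import linprog

# Tiny instance of (1.1.1): min c^T x + d subject to A x <= b.
c = np.array([1.0, 2.0])            # objective vector c
d = 5.0                             # constant term d (shifts the optimal value only)
A = np.array([[-1.0,  0.0],
              [ 0.0, -1.0],
              [ 1.0,  1.0]])        # constraint matrix A (here: x >= 0, x1 + x2 <= 4)
b = np.array([0.0, 0.0, 4.0])       # right hand side b

# linprog minimizes c^T x subject to A_ub x <= b_ub; free variables need explicit bounds.
res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print("optimal x:", res.x)
print("optimal value c^T x + d:", res.fun + d)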

    • Chapter Two Robust Counterpart Approximations of Scalar Chance Constraints
      (pp. 27-66)

      The question posed in the title of this section goes beyond general-type theoretical considerations — this is mainly a modeling issue that should be resolved on the basis of application-driven considerations. There is, however, a special case where this question makes sense and can, to some extent, be answered — this is the case where our goal is not to build an uncertainty model “from scratch,” but rather to translate an already existing uncertainty model, namely, a stochastic one, into the language of “uncertain-but-bounded” perturbation sets and the associated robust counterparts. For exactly the same reasons as in the previous section, we...

    • Chapter Three Globalized Robust Counterparts of Uncertain LO Problems
      (pp. 67-80)

      In this chapter we extend the concept of Robust Counterpart in order to gain a certain degree of control over what happens when the actual data perturbations run out of the postulated perturbation set.

      Let us come back to Assumptions A.1 – A.3 underlying the concept of Robust Counterpart and concentrate on A.3. This assumption is not a “universal truth” — in reality, there are indeed constraints that cannot be violated (e.g., you cannot order a negative supply), but also constraints whose violations, while undesirable, can be tolerated to some degree (e.g., sometimes you can tolerate a shortage of a certain resource by implementing an...

    • Chapter Four More on Safe Tractable Approximations of Scalar Chance Constraints
      (pp. 81-146)

      This chapter can be treated as an “advanced extension” of chapter 2. The entity of our major interest is the chance constrained version

      $p(z) \equiv \mathrm{Prob}\left\{ z_0 + \sum\limits_{\ell = 1}^L z_\ell \zeta_\ell > 0 \right\} \le \varepsilon$ (4.0.1)

      of a randomly perturbed linear inequality

      $z_0 + \sum\limits_{\ell = 1}^L z_\ell \zeta_\ell \le 0,$ (4.0.2)

      where $\zeta_\ell$ are random perturbations and $z_\ell$ are deterministic parameters (in applications in Uncertain Linear Optimization, these parameters will be specified as affine functions of the decision variables).

      Recall that a safe approximation of the chance constraint (4.0.1) is a system $\mathcal{S}$ of convex constraints in variables $z$ and perhaps additional variables $u$ such that the projection $Z[\mathcal{S}]$ of the feasible set of the system onto the...
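
      The sketch below merely estimates the left hand side probability in (4.0.1) by Monte Carlo for one fixed $z$; the standard normal perturbation model and all numbers are illustrative assumptions, and the book's safe tractable approximations are analytic rather than sampling-based.

import numpy as np

# Estimate p(z) = Prob{ z_0 + sum_l z_l * zeta_l > 0 } from (4.0.1) by sampling.
rng = np.random.default_rng(0)
z0 = -2.0
z = np.array([0.5, -0.3, 0.8])      # deterministic parameters z_1, ..., z_L
eps = 0.05                          # prescribed tolerance epsilon

zeta = rng.standard_normal((100_000, z.size))    # samples of (zeta_1, ..., zeta_L)
p_hat = float((z0 + zeta @ z > 0).mean())
print(f"estimated p(z) = {p_hat:.4f}  (chance constraint asks for p(z) <= {eps})")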

  5. PART II. ROBUST CONIC OPTIMIZATION
    • Chapter Five Uncertain Conic Optimization: The Concepts
      (pp. 149-158)

      In this chapter, we extend the RO methodology to non-linear convex optimization problems, specifically, conic ones.

      A conic optimization (CO) problem (also called a conic program) is of the form

      $\min\limits_x \{ c^T x + d : Ax - b \in \mathbf{K} \}$, (5.1.1)

      where $x \in \mathbb{R}^n$ is the decision vector, $\mathbf{K} \subset \mathbb{R}^m$ is a closed pointed convex cone with a nonempty interior, and $x \mapsto Ax - b$ is a given affine mapping from $\mathbb{R}^n$ to $\mathbb{R}^m$. The conic formulation is one of the universal forms of a Convex Programming problem; among the many advantages of this specific form is its “unifying power.” An extremely wide variety of convex programs is covered by just three types of cones:

      i)...
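
      A small instance of (5.1.1) with $\mathbf{K}$ a Lorentz (second-order) cone is sketched below in CVXPY; the modeling library, the extra norm bound that keeps the random toy instance bounded, and the data are illustrative assumptions.

import numpy as np
import cvxpy as cp

# Instance of (5.1.1): min c^T x + d subject to A x - b in K,
# with K the m-dimensional Lorentz cone { y : ||(y_1, ..., y_{m-1})||_2 <= y_m }.
rng = np.random.default_rng(1)
n, m = 3, 4
c = rng.standard_normal(n)
d = 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = cp.Variable(n)
y = A @ x - b
constraints = [cp.SOC(y[m - 1], y[:m - 1]),   # A x - b lies in the Lorentz cone
               cp.norm(x, 2) <= 10]           # keeps this random toy instance bounded
prob = cp.Problem(cp.Minimize(c @ x + d), constraints)
prob.solve()
print("status:", prob.status, " optimal value:", prob.value)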

    • Chapter Six Uncertain Conic Quadratic Problems with Tractable RCs
      (pp. 159-178)

      In this chapter we focus on uncertain conic quadratic problems (that is, the sets $Q_i$ in (5.1.2) are given by explicit lists of conic quadratic inequalities) for which the RCs are computationally tractable.

      We start with a simple case where the RC of an uncertain conic problem (not necessarily a conic quadratic one) is computationally tractable — the case of scenario uncertainty.

      Definition 6.1.1. We say that a perturbation set $\mathcal{Z}$ is scenario generated, if $\mathcal{Z}$ is given as the convex hull of a given finite set of scenarios $\zeta^{(\nu)}$:

      $\mathcal{Z} = \mathrm{Conv}\{ \zeta^{(1)}, \ldots, \zeta^{(N)} \}$. (6.1.1)

      Theorem 6.1.2. The RC (5.1.5) of uncertain problem (5.1.2),...
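
      In the scenario-generated case, a constraint that depends affinely on $\zeta$ holds for every $\zeta$ in (6.1.1) if and only if it holds at each scenario $\zeta^{(\nu)}$, so the RC is just a finite list of per-scenario constraints. The CVXPY sketch below illustrates this for a single uncertain linear constraint; the constraint form and all data are illustrative assumptions, not the book's general statement.

import numpy as np
import cvxpy as cp

# Uncertain constraint (a^0 + sum_l zeta_l a^l)^T x <= b0 with zeta in
# Z = Conv{ zeta^(1), ..., zeta^(N) }: the RC imposes the constraint at each scenario.
rng = np.random.default_rng(2)
n, L, N = 3, 2, 4
a0, b0 = rng.standard_normal(n), 1.0
a = rng.standard_normal((L, n))                      # perturbation directions a^1, ..., a^L
scenarios = rng.uniform(-1.0, 1.0, size=(N, L))      # zeta^(1), ..., zeta^(N)

x = cp.Variable(n)
rc = [(a0 + zeta @ a) @ x <= b0 for zeta in scenarios]
prob = cp.Problem(cp.Maximize(np.ones(n) @ x), rc + [cp.norm(x, "inf") <= 1])
prob.solve()
print("robust-feasible maximizer:", np.round(x.value, 3))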

    • Chapter Seven Approximating RCs of Uncertain Conic Quadratic Problems
      (pp. 179-202)

      In this chapter we focus on tight tractable approximations of uncertain CQIs — those with a tightness factor independent (or nearly so) of the “size” of the description of the perturbation set. Known approximations of this type deal with side-wise uncertainty and two types of left hand side perturbations: the first is the case of structured norm-bounded perturbations to be considered in section 7.1, while the second is the case of $\cap$-ellipsoidal left hand side perturbation sets to be considered in section 7.2.

      Consider the case where the uncertainty in CQI (6.1.3) is side-wise with the right hand side uncertainty as...

    • Chapter Eight Uncertain Semidefinite Problems with Tractable RCs
      (pp. 203-224)

      In this chapter, we focus on uncertain Semidefinite Optimization (SDO) problems for which tractable Robust Counterparts can be derived.

      Recall that a semidefinite program (SDP) is a conic optimization program

      $\min\limits_x \left\{ c^T x + d : \mathcal{A}_i(x) \equiv \sum\limits_{j = 1}^n x_j A^{ij} - B_i \in \mathbf{S}_+^{k_i},\ i = 1, \ldots, m \right\}$

      or, equivalently,

      $\min\limits_x \left\{ c^T x + d : \mathcal{A}_i(x) \equiv \sum\limits_{j = 1}^n x_j A^{ij} - B_i \succcurlyeq 0,\ i = 1, \ldots, m \right\}$ (8.1.1)

      where $A^{ij}$, $B_i$ are symmetric matrices of sizes $k_i \times k_i$, $\mathbf{S}_+^k$ is the cone of real symmetric positive semidefinite $k \times k$ matrices, and $A \succcurlyeq B$ means that $A$, $B$ are symmetric matrices of the same sizes such that the matrix $A - B$ is positive semidefinite. A constraint of the form $\mathcal{A}x - B \equiv \sum_j x_j A^j - B \succcurlyeq 0$ with symmetric $A^j$, $B$ is called a Linear Matrix Inequality (LMI); thus, an SDP is the problem of minimizing a...
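
      A toy SDP of the form (8.1.1) with a single LMI constraint is sketched below in CVXPY; the library, the SCS solver, the norm bound that keeps the random instance bounded, and the data are illustrative assumptions.

import numpy as np
import cvxpy as cp

# Toy SDP: min c^T x + d subject to the LMI sum_j x_j A^j - B >= 0 (positive semidefinite).
rng = np.random.default_rng(3)
n, k = 2, 3
sym = lambda M: (M + M.T) / 2                       # symmetrize a random matrix

A = [sym(rng.standard_normal((k, k))) for _ in range(n)]
B = sym(rng.standard_normal((k, k))) - 5 * np.eye(k)   # shifted so that x = 0 is feasible
c, d = np.array([1.0, 2.0]), 0.5

x = cp.Variable(n)
lmi = sum(x[j] * A[j] for j in range(n)) - B
prob = cp.Problem(cp.Minimize(c @ x + d),
                  [lmi >> 0, cp.norm(x, 2) <= 10])  # norm bound keeps the toy instance bounded
prob.solve(solver=cp.SCS)
print("status:", prob.status, " optimal value:", prob.value)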

    • Chapter Nine Approximating RCs of Uncertain Semidefinite Problems
      (pp. 225-234)

      We have seen that the possibility to reformulate the RC of an uncertain semidefinite program in a computationally tractable form is a “rare commodity,” so that there is every reason to be interested in the second best thing — in situations where the RC admits a tight tractable approximation. To the best of our knowledge, just one such case is known — the case of structured norm-bounded uncertainty we are about to consider in this chapter.

      Consider an uncertain LMI

      ${\mathcal{A}_\zeta}(y) \succcurlyeq 0$ (8.2.2)

      where the “body” ${\mathcal{A}_\zeta}(y)$ is bi-linear in the design vector $y$ and the perturbation vector $\zeta$. The definition of a...

    • Chapter Ten Approximating Chance Constrained CQIs and LMIs
      (pp. 235-278)

      In this chapter, we develop safe tractable approximations of chance constrained randomly perturbed Conic Quadratic and Linear Matrix Inequalities.

      In previous chapters we have considered the Robust/Approximate Robust Counterparts of uncertain conic quadratic and semidefinite programs. Now we intend to consider randomly perturbed CQPs and SDPs and to derive safe approximations of their chance constrained versions (cf. section 2.1). From this perspective, it is convenient to treat chance constrained CQPs as particular cases of chance constrained SDPs (such an option is given by Lemma 6.3.3), so that in the sequel we focus on chance constrained SDPs. Thus, we are interested...

    • Chapter Eleven Globalized Robust Counterparts of Uncertain Conic Problems
      (pp. 279-300)

      In this chapter we study the Globalized Robust Counterparts of general-type uncertain conic problems and derive results on tractability of GRCs.

      Consider an uncertain conic problem (5.1.2), (5.1.3):

      $\min\limits_x \left\{ c^T x + d : A_i x - b_i \in Q_i,\ 1 \leqslant i \leqslant m \right\}$, (11.1.1)

      where $Q_i \subset \mathbb{R}^{k_i}$ are nonempty closed convex sets given by finite lists of conic inclusions:

      ${Q_i} = \;\{ u\; \in \;{\mathbb{R}^{{k_i}}}:\;{Q_{i\ell }}u\; - {q_{i\ell }} \in \;{{\mathbf{K}}_{i\ell }},\;\ell \; = \;1,\; \ldots ,\;{L_i}\} $, (11.1.2)

      with closed convex pointed cones $\mathbf{K}_{i\ell}$, and let the data be affinely parameterized by the perturbation vector $\zeta$:

      $(c,\;d,\;\{ {A_i},\;{b_i}\} _{i = 1}^m)\; = \;({c^0},\;{d^0},\;\{ A_i^0,\;b_i^0\} _{i = 1}^m)\; + \;\sum\limits_{\ell = 1}^L {{\zeta _\ell }({c^\ell },\;{d^\ell },\;\{ A_i^\ell ,\;b_i^\ell \} _{i = 1}^m)} $, (11.1.3)

      When extending the notion of Globalized Robust Counterparts (chapter 3) to this case, we need a small modification; when introducing the notion of GRCs in the LO case, we assumed that the set $\mathcal{Z}_+$...
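
      The sketch below just instantiates the affine parameterization (11.1.3) for a single constraint block $(A, b)$: the realized data are the nominal data plus a $\zeta$-weighted sum of basic shifts. Dimensions and numbers are illustrative assumptions.

import numpy as np

# Affine data parameterization as in (11.1.3), shown for one block (A, b):
# (A, b)[zeta] = (A^0, b^0) + sum_l zeta_l * (A^l, b^l).
rng = np.random.default_rng(4)
n, k, L = 3, 4, 2                                   # decision dim, block size, # perturbations
A0, b0 = rng.standard_normal((k, n)), rng.standard_normal(k)
A_shift = [rng.standard_normal((k, n)) for _ in range(L)]
b_shift = [rng.standard_normal(k) for _ in range(L)]

def realize(zeta):
    """Return the data (A, b) corresponding to a perturbation vector zeta in R^L."""
    A = A0 + sum(zeta[l] * A_shift[l] for l in range(L))
    b = b0 + sum(zeta[l] * b_shift[l] for l in range(L))
    return A, b

A, b = realize(np.array([0.3, -0.7]))
print(A.shape, b.shape)                             # (4, 3) (4,)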

    • Chapter Twelve Robust Classification and Estimation
      (pp. 301-338)

      In this chapter, we present some applications of Robust Optimization in the context of Machine Learning and Linear Regression.

      We begin our development with an overview, the focus of which is the specific example of Support Vector Machines for binary classification.

      Binary Linear Classification.

      Let $X$ denote the $n \times m$ matrix of data points (they are columns in $X$), each one belonging to one of two classes. Let $y \in \{-1, 1\}^m$ be the corresponding label vector, so that $y_i = 1$ when the $i$-th data point is in the first class, and $y_i = -1$ when the $i$-th data point is in the second class. We refer to the...
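
      A plain (non-robust) soft-margin linear classifier in this chapter's notation ($X$ is $n \times m$ with data points as columns, $y \in \{-1, 1\}^m$) is sketched below; the hinge-loss formulation, the regularization weight, CVXPY, and the synthetic data are illustrative assumptions, and the chapter's robust variants modify such a nominal model.

import numpy as np
import cvxpy as cp

# Soft-margin linear classification: X has the m data points as columns, y in {-1, 1}^m.
rng = np.random.default_rng(5)
n, m = 2, 40
X = np.hstack([rng.normal(+1.0, 0.8, size=(n, m // 2)),
               rng.normal(-1.0, 0.8, size=(n, m // 2))])
y = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

w, b = cp.Variable(n), cp.Variable()
margins = cp.multiply(y, X.T @ w + b)               # y_i (w^T x_i + b)
loss = cp.sum(cp.pos(1 - margins)) / m + 0.1 * cp.sum_squares(w)
prob = cp.Problem(cp.Minimize(loss))
prob.solve()
print("w =", np.round(w.value, 3), " b =", round(float(b.value), 3))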

  6. PART III. ROBUST MULTI-STAGE OPTIMIZATION
    • Chapter Thirteen Robust Markov Decision Processes
      (pp. 341-354)

      This chapter is devoted to a robust dynamical decision making problem involving a finite-state, finite-action stochastic system. The system’s dynamics are described by state transition probability distributions, which we assume to be uncertain and varying within a given uncertainty set. At each time period, nature plays against the decision maker by picking, at will, transition distributions within their ranges. The goal of robust decision making is to minimize the worst-case expected value of a given cost function, where “worst-case” is with respect to the considered class of policies of nature. We show that when the cardinalities of the (finite!)...
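
      A crude finite-family sketch of the robust Bellman recursion is given below: for each state-action pair, nature picks the worst of $K$ candidate transition distributions. The chapter works with general uncertainty sets and policies of nature; the finite candidate family, the cost data, and the discounted setting here are illustrative assumptions.

import numpy as np

# Robust value iteration for a toy finite MDP: nature adversarially selects,
# for every (state, action), the worst of K candidate transition distributions.
rng = np.random.default_rng(6)
S, A, K, gamma = 4, 2, 3, 0.9                       # states, actions, candidates, discount

cost = rng.uniform(0.0, 1.0, size=(S, A))
P = rng.uniform(size=(K, S, A, S))                  # P[k, s, a, :] = k-th candidate P(.|s, a)
P /= P.sum(axis=-1, keepdims=True)

V = np.zeros(S)
for _ in range(500):
    worst_next = (P @ V).max(axis=0)                # worst-case E[V(s')] over candidates, (S, A)
    V_next = (cost + gamma * worst_next).min(axis=1)
    delta, V = np.max(np.abs(V_next - V)), V_next
    if delta < 1e-8:
        break
print("robust state values:", np.round(V, 3))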

    • Chapter Fourteen Robust Adjustable Multistage Optimization
      (pp. 355-414)

      In this chapter we continue the investigation of robust multi-stage decision making processes begun in chapter 13. Note that in the context of chapter 13, computational tractability of the robust counterparts stems primarily from the fact that both the state and the action spaces associated with the decision making process under consideration are finite with moderate cardinalities. These assumptions combine with the Markovian nature of the process to allow for solving the robust counterpart in a computationally efficient way by properly adapted Dynamic Programming techniques. In what follows we intend to consider multi-stage decision making in situations where Dynamic Programming hardly...

  7. PART IV. SELECTED APPLICATIONS
    • Chapter Fifteen Selected Applications
      (pp. 417-446)

      We have already considered numerous examples illustrating applications of the Robust Optimization methodology, but these were, essentially, toy examples aimed primarily at clarifying particular RO techniques. In this chapter, we present a number of additional examples, with emphasis on potential and actual “real-life” aspects of the Robust Optimization models in question. Many more examples can be found in the literature; see, e.g., [9, 16, 110, 89] and references therein.

      The application of RO to follow is from E. Stinstra and D. den Hertog [108], to whom we are greatly indebted for the permission to reproduce here part of their results.

      The...

    • Appendix A. Notation and Prerequisites
      (pp. 447-468)
    • Appendix B. Some Auxiliary Proofs
      (pp. 469-510)
    • Appendix C. Solutions to Selected Exercises
      (pp. 511-530)
  8. Bibliography
    (pp. 531-538)
  9. Index
    (pp. 539-542)