Optimization: Insights and Applications


Jan Brinkhuis
Vladimir Tikhomirov
Copyright Date: August 2005
Pages: 676
https://www.jstor.org/stable/j.ctt7s71d
    Book Description:

    This self-contained textbook is an informal introduction to optimization through the use of numerous illustrations and applications. The focus is on analytically solving optimization problems with a finite number of continuous variables. In addition, the authors provide introductions to classical and modern numerical methods of optimization and to dynamic optimization.

    The book's overarching point is that most problems may be solved by the direct application of the theorems of Fermat, Lagrange, and Weierstrass. The authors show how the intuition for each of the theoretical results can be supported by simple geometric figures. They include numerous applications through the use of varied classical and practical problems. Even experts may find some of these applications truly surprising.

    A basic mathematical knowledge is sufficient to understand the topics covered in this book. More advanced readers, even experts, will be surprised to see how all main results can be grounded on the Fermat-Lagrange theorem. The book can be used for courses on continuous optimization, from introductory to advanced, for any field for which optimization is relevant.

    eISBN: 978-1-4008-2936-1
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-vi)
  2. Table of Contents
    (pp. vii-x)
  3. Preface
    (pp. xi-xxiv)
  4. Necessary Conditions: What Is the Point?
    (pp. 1-2)

    The aim of this introduction to optimization is to give an informal first impression of necessary conditions, our main topic.

    Exercises. Suppose you have just read for the first time about a new optimization method, say, the Lagrange multiplier method, or the method of putting the derivative equal to zero. At first sight, it might not make a great impression on you. So why not take an example: find positive numbers x and y with product 10 for which 3x + 4y is as small as possible. You try your luck with the multiplier rule, and after a minor struggle...
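The multiplier rule applied to this exercise can be checked in a few lines. The sketch below (an illustration, not the book's own solution) sets the gradient of 3x + 4y proportional to the gradient of the constraint xy = 10 and solves the resulting system, then verifies numerically that nearby feasible points do no better.

```python
import math

# Multiplier rule for: minimize 3x + 4y subject to xy = 10, x, y > 0.
# grad(3x + 4y) = lam * grad(xy)  gives  3 = lam*y  and  4 = lam*x.
# Dividing the two equations: 3/4 = y/x, so y = 3x/4;
# the constraint then gives (3/4)x**2 = 10, i.e. x**2 = 40/3.
x = math.sqrt(40 / 3)
y = 3 * x / 4
lam = 4 / x
print(x, y, 3 * x + 4 * y)       # stationary point and its value

assert math.isclose(x * y, 10)   # the constraint holds

# crude check: the stationary value beats nearby feasible points (t, 10/t)
for t in (0.9 * x, 1.1 * x):
    assert 3 * t + 4 * (10 / t) >= 3 * x + 4 * y
```

The minimal value works out to 6x = sqrt(480), roughly 21.91.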

  5. Chapter One Fermat: One Variable without Constraints
    (pp. 3-84)

    One variable of optimization. The epigraph to this summary describes a view in upper-class circles in England at the beginning of the previous century. It is meant to surprise, going against the usual view that somewhere between too small and too large is the optimum, the “golden mean.” Many pragmatic problems lead to the search for the golden mean (or the optimal trade-off or the optimal compromise). For example, suppose you want to play a computer game and your video card does not allow you to have optimal quality (“high resolution screen”) as well as optimal performance (“flowing movements”); then...

  6. Chapter Two Fermat: Two or More Variables without Constraints
    (pp. 85-134)

Two or more variables of optimization. Sometimes an optimal choice is described by a choice of two or more numbers. This allows many interesting economic models. For example, we will consider the problem of choosing the best location from which to serve three clients. We consider as well the optimal location for two competing ice-cream sellers on a beach. This is a simple problem, but it leads to a fruitful insight: “the invisible hand” (=hand of the market), one of the central ideas in economics, does not always work. Other models give insights into taxes: some taxes can lead to...
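The three-client location problem asks for the point minimizing the sum of distances to the clients (the Fermat-Torricelli point). A minimal numerical sketch, with hypothetical client coordinates chosen only for illustration, finds it by plain gradient descent:

```python
import math

# Hypothetical client locations (illustrative data, not from the book)
clients = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def total_distance(p):
    return sum(math.dist(p, c) for c in clients)

# The gradient of a sum of distances is the sum of unit vectors
# pointing from each client toward the current point.
x, y = 2.0, 1.0                   # start inside the triangle
step = 0.01
for _ in range(20000):
    gx = sum((x - cx) / max(math.dist((x, y), (cx, cy)), 1e-12)
             for cx, cy in clients)
    gy = sum((y - cy) / max(math.dist((x, y), (cx, cy)), 1e-12)
             for cx, cy in clients)
    x, y = x - step * gx, y - step * gy

print((round(x, 3), round(y, 3)), round(total_distance((x, y)), 3))
```

For these symmetric data the minimizer lies on the axis x = 2, at the point where the three client directions meet at 120-degree angles.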

  7. Chapter Three Lagrange: Equality Constraints
    (pp. 135-198)

Nous ne faisons ici qu’indiquer ces procédés dont il sera facile de faire l’application; mais on peut les réduire à ce principe général: Lorsqu’une fonction de plusieurs variables doit être un maximum ou minimum, et qu’il y a entre ces variables une ou plusieurs équations, il suffira d’ajouter à la fonction proposée les fonctions qui doivent être nulles, multipliées chacune par une quantité indéterminée, et de chercher ensuite le maximum ou minimum comme si les variables étaient indépendantes; les équations que l’on trouvera, combinées avec les équations données, serviront à déterminer toutes les inconnues. Here we only sketch these procedures...

  8. Chapter Four Inequality Constraints and Convexity
    (pp. 199-260)

    The need for inequality constraints. It is not always optimal to use up a budget completely. For example, in a problem of investment, it might be too risky to invest the entire capital. Such a situation can be modeled by an inequality constraint. Another example arises from the seemingly mysterious phenomenon of the discount on airline tickets with a Saturday stayover. This mystery has a simple explanation: price discrimination between the market for businesspeople and the market for tourists. Which discount is optimal for the airplane company? The discount should not be so big that it becomes attractive for businesspeople...

  9. Chapter Five Second Order Conditions
    (pp. 261-272)

    Second order conditions distinguish whether a stationary point is a local maximum or a local minimum. However, usually they are not very helpful for solving concrete optimization problems. The following insight is of some interest, though: they are essentially the only obstructions to local minimality besides the first order conditions. Smallest eigenvalue of a symmetric matrix. The main challenge to us was to test the second order condition for a minimization problem that has as stationary points the eigenvectors of a symmetric matrix and as global solution an eigenvector which belongs to the smallest eigenvalue. To our delight, the second...
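The eigenvalue problem mentioned in this summary can be illustrated directly. Minimizing the quadratic form x᷀ᵀAx over the unit sphere has, by the multiplier rule, exactly the unit eigenvectors of A as stationary points, and its global minimum value is the smallest eigenvalue. A small sketch with an arbitrary symmetric matrix (illustrative data):

```python
import numpy as np

# Minimize f(x) = x^T A x subject to |x| = 1, for symmetric A.
# Stationary points satisfy A x = lam x: the unit eigenvectors.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)  # ascending order
x = eigenvectors[:, 0]                         # eigenvector of the smallest
print(eigenvalues[0], x @ A @ x)               # the two values agree

# random unit vectors never do better than the smallest eigenvalue
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(3)
    v /= np.linalg.norm(v)
    assert v @ A @ v >= eigenvalues[0] - 1e-12
```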

  10. Chapter Six Basic Algorithms
    (pp. 273-324)

Nonlinear optimization is difficult. The most important thing to realize is that it is in principle impossible to have an algorithm that can solve efficiently all nonlinear optimization problems. This is illustrated, on the one hand, by reformulating one of the greatest mathematical problems of all times, Fermat’s Last Theorem, as a nonlinear optimization problem. On the other hand, we use ball coverings to prove that even the best algorithm to optimize a well-behaved function on a block in n-dimensional space is inadequate.

    Linear optimization. This state of affairs is in sharp contrast with the situation for linear optimization problems, also...

  11. Chapter Seven Advanced Algorithms
    (pp. 325-362)

    Einstein quoted the words of Ludwig Boltzmann in the preface of his famous popular book on relativity theory. However, in reality both relativity theory and Einstein’s style of writing are very elegant. Maybe he interpreted these words as saying that elegance should not stand in the way of user-friendliness. This is how we read them. We present two beautiful and very successful methods, conjugate gradient methods and self-concordant barrier methods. The accounts can be read independently. We have tried to explain them in an analytical style, as simple and straightforward as we could. However, we do not provide elegant geometrical...

  12. Chapter Eight Economic Applications
    (pp. 363-390)

    There are many applications of optimization theory to economics. For mathematical applications, we hold those problems in the highest esteem that are the hardest to solve. For economic applications, the name of the game is different. To clarify the matter, we recall once more the words of Marshall: “A good mathematical theorem dealing with economic hypotheses is very unlikely to be good economics.” The success of an economic application is measured by the light it sheds on examples that are important in real life.

    We begin with some applications of the one-variable Fermat theorem.

    Auctions. Auctions lead to many optimization...

  13. Chapter Nine Mathematical Applications
    (pp. 391-416)

In this chapter, we solve problems posed in different times by applying the main methods of the general theory, instead of using the special methods of the original solutions.

    The next problem is included just for fun. All other problems considered in this chapter will be more: attempts to reach the essence of some phenomenon.

Solution. These two numbers are almost equal. Using a calculator, it is not hard to guess that $e^\pi$ is slightly bigger than $\pi^e$. One of the examples of MATLAB gives further evidence: it treats this problem by way of a visualization of the graph of...
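The calculator check mentioned here takes one line. The underlying reason, sketched in a comment below, is that $\ln x / x$ is maximized at $x = e$:

```python
import math

# ln(x)/x is maximized at x = e, so ln(e)/e > ln(pi)/pi,
# i.e. pi > e*ln(pi), i.e. e**pi > pi**e.
a = math.e ** math.pi
b = math.pi ** math.e
print(a, b, a > b)   # e**pi is indeed slightly bigger
```

Numerically, $e^\pi \approx 23.14$ while $\pi^e \approx 22.46$.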

  14. Chapter Ten Mixed Smooth-Convex Problems
    (pp. 417-440)

How do the main theorems of the differential and convex calculus—the tangent space theorem and the supporting hyperplane theorem—cooperate to establish necessary and sufficient conditions for mixed smooth-convex problems? These are problems that are partly smooth and partly convex. The simplest example is the smooth programming problem $P_{4.1}$, where the convexity is the mere presence of inequality constraints and the smoothness is given by differentiability assumptions. For this type of problem we stated—but did not prove—the John theorem 4.13. Another example is optimal control problems—the latter type will be discussed briefly in chapter twelve. In...

  15. Chapter Eleven Dynamic Programming in Discrete Time
    (pp. 441-474)

We consider multidecision problems, where a number of related decisions have to be taken sequentially. The problem of taking the last decision in an optimal way is usually easy. Moreover, if we know how to solve the multidecision problem with r decisions still to be taken, then this can be used to give an easy solution of the multidecision problem with r + 1 decisions still to be taken. The reason for this is a simple observation, the optimality principle: for an optimal chain of decisions, each end chain is also optimal. Here are three examples.

    1. Shortest route problem. How to...
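The optimality principle behind the shortest route example can be sketched by backward induction on a tiny network (the nodes and arc costs below are made up for illustration): the best route from a node equals the cheapest arc cost plus the best route from that arc's endpoint.

```python
# Tiny shortest-route problem solved by backward induction
# (hypothetical network, not taken from the book).
arcs = {
    "A": {"B": 2, "C": 5},
    "B": {"D": 4, "E": 1},
    "C": {"D": 2, "E": 3},
    "D": {"T": 3},
    "E": {"T": 6},
    "T": {},
}

best = {"T": 0.0}                  # zero decisions left at the target
order = ["D", "E", "B", "C", "A"]  # nodes with fewer decisions left come first
for u in order:
    # optimality principle: best route from u = cheapest arc + best onward route
    best[u] = min(cost + best[v] for v, cost in arcs[u].items())

print(best["A"])   # length of the shortest route from A to T
```

Each node is solved once, using only the already-solved nodes closer to the target; this is exactly the "end chains are optimal" observation at work.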

  16. Chapter Twelve Dynamic Optimization in Continuous Time
    (pp. 475-502)

    Plan. We will give a brief historical introduction to the calculus of variations, optimal control, and dynamic programming.

    Euler. Galileo was the first natural philosopher of the Renaissance period. He had infinite-dimensional problems in mind. Our epigraph shows that he was thinking about the problem of the brachistochrone, the slide of quickest descent without friction. A modern application of the solution is to the design of slides in aqua-parks. Galileo did not solve this problem himself. Johann Bernoulli (1667–1748) settled the problem in 1696, maybe under the influence of Galileo. He showed that the optimal curve is a cycloid...

  17. Appendix A On Linear Algebra: Vector and Matrix Calculus
    (pp. 503-518)
  18. Appendix B On Real Analysis
    (pp. 519-536)
  19. Appendix C The Weierstrass Theorem on Existence of Global Solutions
    (pp. 537-546)
  20. Appendix D Crash Course on Problem Solving
    (pp. 547-552)
  21. Appendix E Crash Course on Optimization Theory: Geometrical Style
    (pp. 553-560)
  22. Appendix F Crash Course on Optimization Theory: Analytical Style
    (pp. 561-582)
  23. Appendix G Conditions of Extremum from Fermat to Pontryagin
    (pp. 583-600)
  24. Appendix H Solutions of Exercises of Chapters 1–4
    (pp. 601-644)
  25. Bibliography
    (pp. 645-650)
  26. Index
    (pp. 651-658)