Recent Advances in Global Optimization

Christodoulos A. Floudas
Panos M. Pardalos
Copyright Date: 1992
Pages: 644
https://www.jstor.org/stable/j.ctt7ztwft
    Book Description:

    This book will present the papers delivered at the first U.S. conference devoted exclusively to global optimization and will thus provide valuable insights into the significant research on the topic that has been emerging during recent years. Held at Princeton University in May 1991, the conference brought together an interdisciplinary group of the most active developers of algorithms for global optimization in order to focus the attention of the mathematical programming community on the unsolved problems and diverse applications of this field. The main subjects addressed at the conference were advances in deterministic and stochastic methods for global optimization, parallel algorithms for global optimization problems, and applications of global optimization. Although global optimization is primarily a mathematical problem, it is relevant to several other disciplines, including computer science, applied mathematics, physical chemistry, molecular biology, statistics, physics, engineering, operations research, communication theory, and economics. Global optimization problems originate from a wide variety of mathematical models of real-world systems. Some of its applications are allocation and location problems and VLSI and data-base design problems.

    Originally published in 1991.

    The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback formats. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.

    eISBN: 978-1-4008-6252-8
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-iv)
  2. Table of Contents
    (pp. v-viii)
  3. Preface
    (pp. ix-2)
    Christodoulos A. Floudas and Panos M. Pardalos
  4. On approximation algorithms for concave quadratic programming
    (pp. 3-18)
    Stephen A. Vavasis

    Quadratic programming is a nonlinear optimization problem of the following form:

    minimize $\tfrac{1}{2} x^T H x + h^T x$

    subject to $Wx \ge b$.  (1)

    In this formulation, x is the n-vector of unknowns. The remaining variables stand for data in the problem instance: H is an n x n symmetric matrix, h is an n-vector, W is an m x n matrix, and b is an m-vector. The relation ‘$\ge$’ in the constraint $Wx \ge b$ is the usual componentwise inequality.

    Quadratic programming, a generalization of linear programming, has applications in economics, planning, and many kinds of engineering design. In addition, more complicated kinds of nonlinear programming problems...
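
    As a concrete instance of form (1) (an illustrative sketch, not part of the chapter), the following Python fragment solves a small nonconvex QP with a local NLP solver; since H is indefinite, such a solver only returns a KKT point, which is exactly why approximation algorithms for the global problem are of interest. The data H, h, W, b are made up for the example.

    import numpy as np
    from scipy.optimize import minimize

    # Example data for (1): minimize 0.5*x^T H x + h^T x  subject to  W x >= b.
    H = np.array([[-2.0, 0.0],
                  [ 0.0, 1.0]])          # indefinite, so the QP is nonconvex
    h = np.array([1.0, -1.0])
    W = np.array([[ 1.0,  0.0],
                  [-1.0,  0.0],
                  [ 0.0,  1.0],
                  [ 0.0, -1.0]])         # encodes the box -1 <= x_i <= 1 as W x >= b
    b = np.array([-1.0, -1.0, -1.0, -1.0])

    obj  = lambda x: 0.5 * x @ H @ x + h @ x
    grad = lambda x: H @ x + h

    # SLSQP is a local method: from a given start it returns a stationary point,
    # not necessarily the global minimizer of a nonconvex QP.
    res = minimize(obj, x0=np.zeros(2), jac=grad, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: W @ x - b,
                                 "jac": lambda x: W}])
    print(res.x, obj(res.x))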

  5. A New Complexity Result on Minimization of a Quadratic Function with a Sphere Constraint
    (pp. 19-31)

    Given a matrix $Q \in R^{n \times n}$ and a vector $c \in R^n$, we consider the problem: Find an $x \in R^n$ for which there exists $\mu \ge 0$ such that

    (SQ) $(Q + \mu I)x = c$, $\|x\|^2 = 1$, and $Q + \mu I$ is PSD.

    Here, PSD denotes “positive semi-definite” and $\|\cdot\|$ designates the $l_2$ norm. This problem is popular since it essentially represents the optimality conditions for a sphere (or ellipsoid) constrained quadratic optimization problem

    (SQO) minimize $q(x) = x^T Q x/2 - c^T x$

    subject to $x \in S = \{x \in R^n : \|x\|^2 \le 1\}$.

    The Levenberg-Marquardt trust region method for nonlinear programming (e.g., Dennis and Schnabel [3], Gay [4], Goldfeld et al. [5], Moré [10], and Sorensen [15] among others) is based on sequentially solving SQO problems. SQO also...
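
    As a purely numerical sketch of conditions (SQ) (not the complexity result of this chapter), one can diagonalize Q and bisect on μ. The routine below assumes the "easy case" of the trust-region subproblem, in which c is not orthogonal to the eigenspace of the smallest eigenvalue of Q; the tolerance and example data are arbitrary.

    import numpy as np

    def sphere_constrained_min(Q, c, tol=1e-10):
        # Find x and mu >= 0 with (Q + mu*I)x = c, ||x|| <= 1 (= 1 on the boundary),
        # and Q + mu*I positive semi-definite, via bisection on mu.
        lam, V = np.linalg.eigh(Q)                # Q = V diag(lam) V^T, lam ascending
        cv = V.T @ c

        def x_of(mu):
            return V @ (cv / (lam + mu))

        # If Q is positive definite and the unconstrained minimizer of q lies in
        # the unit ball, mu = 0 already satisfies the conditions.
        if lam[0] > 0 and np.linalg.norm(x_of(0.0)) <= 1.0:
            return x_of(0.0), 0.0

        # Otherwise search for mu > max(0, -lambda_min) such that ||x(mu)|| = 1;
        # in the easy case ||x(mu)|| decreases strictly on this interval.
        lo = max(0.0, -lam[0]) + 1e-12
        hi = lo + 1.0
        while np.linalg.norm(x_of(hi)) > 1.0:     # bracket the root
            hi *= 2.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if np.linalg.norm(x_of(mid)) > 1.0 else (lo, mid)
        mu = 0.5 * (lo + hi)
        return x_of(mu), mu

    # Tiny made-up instance with an indefinite Q.
    Q = np.array([[-1.0, 0.0], [0.0, 2.0]])
    c = np.array([0.5, 0.5])
    x, mu = sphere_constrained_min(Q, c)
    print(x, mu, np.linalg.norm(x))               # ||x|| should be 1 here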

  6. Hamiltonian Cycles, Quadratic Programming, and Ranking of Extreme Points
    (pp. 32-49)
    Ming Chen and Jerzy A. Filar

    In this paper we consider the Hamiltonian Cycle Problem (HCP). It would be impractical to supply a complete bibliography of works on this problem; instead we refer the reader to the book of Papadimitriou and Steiglitz [5]. We begin with a brief description of only one version of the HCP. In graph theoretic terms, the problem is to find a simple cycle of N arcs, that is, a Hamiltonian cycle, in a directed graph G with N nodes and with arcs (i,j), or determine that none exists.

    In this paper we follow the approach of Filar and Krass [2] to...

  7. Performance of Local Search In Minimum Concave-Cost Network Flow Problems
    (pp. 50-75)
    G.M. Guisewite and P.M. Pardalos

    Concave-cost network flow, uncapacitated, single source, global optimization, local optimization, parallel processing.

    The single-source uncapacitated (SSU) version of the minimum concave-cost network flow problem (MCNFP) requires establishing a minimum cost flow from a single generating source to a set of sinks, through a directed network. All arcs are uncapacitated, indicating that the entire source flow can pass through any arc. The SSU MCNFP can be stated formally as follows:

    Given a directed network $G = (N_G, A_G)$ consisting of a set $N_G$ of n nodes and a set $A_G$ of m ordered pairs of distinct nodes called arcs, coupled with an n-vector (demand...

  8. SOLUTION OF THE CONCAVE LINEAR COMPLEMENTARITY PROBLEM
    (pp. 76-101)
    Joaquim J. Júdice and Ana M. Faustino

    Linear Complementarity Problem, Global Concave Quadratic Optimization, Polynomial Algorithms, Enumerative Methods, Classes of Matrices.

    The Linear Complementarity Problem (LCP) consists of finding vectors $z \in R^n$ and $w \in R^n$ such that

    $w = q + Mz$, $z \ge 0$, $w \ge 0$, $z^T w = 0$  (1)

    where $q \in R^n$ and M is an n by n real square matrix. This nonlinear optimization problem has received much interest during the past twenty years. Many direct and iterative algorithms have been developed for the solution of the LCP. These algorithms can only process the LCP (find a solution or show that no solution exists) when the matrix M satisfies certain properties. We suggest Murty’s book [35] for an excellent...
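
    By way of illustration (a toy check, not one of the algorithms surveyed in the chapter), the conditions in (1) are easy to verify for a candidate point; the matrix M and vector q below are made-up data.

    import numpy as np

    def is_lcp_solution(M, q, z, tol=1e-9):
        # Verify conditions (1): w = q + Mz, z >= 0, w >= 0, z^T w = 0.
        w = q + M @ z
        return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(z @ w) <= tol)

    # Tiny made-up instance; M is positive definite, so this LCP has a unique solution.
    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    q = np.array([-1.0, 1.0])
    z = np.array([0.5, 0.0])            # then w = q + Mz = (0.0, 1.5) and z^T w = 0
    print(is_lcp_solution(M, q, z))     # True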

  9. Global Solvability of Generalized Linear Complementarity Problems and a Related Class of Polynomial Complementarity Problems
    (pp. 102-124)
    Aniekan A. Ebiefung and Michael M. Kostreva

    The Generalized Linear Complementarity Problem of Cottle and Dantzig (Ref. 1) is: Given an $m \times n$, $m \ge n$, vertical block matrix N of type $(m_1, \ldots, m_n)$ and q in $R^m$, find $z \in R^n$ and $w \in R^m$ such that

    GLCP(q,N): $w = Nz + q$, $w \ge 0$, $z \ge 0$, $z_j \prod_{i=1}^{m_j} w_i^j = 0 \quad (j = 1, \ldots, n)$.

    That GLCP(q, N) has a solution when N is strictly copositive or a P-matrix was shown by Cottle and Dantzig (Ref. 1). More recent work on the GLCP(q, N) can be found in (Ref. 2) where P-matrices are characterized. A useful engineering application of the Generalized Complementarity Problem is given by Oh in (Ref. 3).

    After a...

  10. A Continuous Approach to Compute Upper Bounds in Quadratic Maximization Problems With Integer Constraints
    (pp. 125-140)
    A. Kamath and N. Karmarkar

    In the first section we introduce the quadratic optimization problem and give a motivation for the concepts underlying the development of the interior point approach for solving the problem. In the second section we describe the method of generating a sequence of decreasing upper bounds for the quadratic maximization problem and then introduce a minor simplification of the technique. In the third section we show how the approach has been used to find lower bounds on the cutsizes for the graph partitioning problem [4] and present a few implementation details. Conclusions and directions for future work are presented in the...

  11. A Class of Global Optimization Problems Solvable by Sequential Unconstrained Convex Minimization
    (pp. 141-151)
    Hoang Tuy and Faiz A. Al-Khayyal

    In a recent paper, Idrissi et al. [5] considered the following class of global optimization problems for the case m = 2:

    $\max_{x \in R^m} \sum_{j=1}^{n} q_j[h_j(x)]$  (P)

    under the assumptions that: (A1) $q_j$ is a strictly convex function, strictly decreasing on $R_+$, with values in $R_+$, for which $\lim_{t \to \infty} q_j(t) = 0$ for all j; and (A2) $h_j$ is a convex function defined on $R^m$, with values in $R_+$, such that $\lim_{\|x\| \to \infty} h_j(x) = \infty$ for all j, where $\|\cdot\|$ denotes an arbitrary norm.

    The objective function $\varphi(x) := \sum_{j=1}^{n} q_j[h_j(x)]$ is generally neither convex nor concave. In [5], a method that computes only a local (instead of global) maximum...

  12. A New Cutting Plane Algorithm for a Class of Reverse Convex 0-1 Integer Programs
    (pp. 152-164)
    Sihem Ben Saad

    In this paper, the following optimization problem is considered:

    $\min_{x \in R^n} c^T x$

    such that $Ax \le b$

    $g(x) \le 0$

    where A is an m x n matrix, $m \ge n$, b is an m-vector, and g is a concave function on $R^n$. All variables are real and all functions take on real values. The last constraint is called “reverse convex” following a paper by Meyer [9], and refers to the fact that the set $\{x \in R^n \mid g(x) \le 0\}$ is the closure of the complement of a convex set. This problem is quite general since it encompasses the minimization of a concave function over a polytope, 0-1 linear optimization problems, some...

  13. Global Optimization of Problems with Polynomial Functions in One Variable
    (pp. 165-199)
    V. Visweswaran and C. A. Floudas

    Polynomial functions of one variable occur frequently in mathematical programming problems. Problems involving the unconstrained or constrained optimization of these functions are interesting not only because of the inherent simplicity of the problem structure, but also because these functions form the backbone of larger optimization problems involving more variables. Often, the solution of these larger problems becomes much easier if a few of the variables are fixed. Consequently, they can be viewed as parametric problems in one variable. The solution of optimization problems involving one (or a few) variable(s) can often provide significant insight into the nature of larger problems....
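
    As a point of contrast (this is not the chapter's algorithm, just a reminder of why the one-variable case is special), the global minimum of a polynomial on an interval can be found by enumerating the real roots of its derivative together with the endpoints; the example polynomial is made up.

    import numpy as np

    def poly_global_min(coeffs, a, b):
        # Globally minimize a univariate polynomial p on [a, b]; coeffs are
        # highest-degree first, as in numpy.polyval.  Every minimizer is either an
        # endpoint or a real root of p', so checking this finite set is exhaustive.
        p = np.poly1d(coeffs)
        roots = p.deriv().r
        real_roots = roots[np.isreal(roots)].real
        candidates = np.concatenate(([a, b], real_roots[(real_roots >= a) & (real_roots <= b)]))
        values = p(candidates)
        i = np.argmin(values)
        return candidates[i], values[i]

    # Example: p(x) = x^4 - 4x^2 + x on [-3, 3] has two local minima; the global
    # one lies near x = -1.47.
    print(poly_global_min([1, 0, -4, 1, 0], -3.0, 3.0))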

  14. One Dimensional Global Optimization Using Linear Lower Bounds
    (pp. 200-220)
    Matthew Bromberg and Tsu-Shuan Chang

    Because so many problems can be formulated as optimization problems, much effort has been spent searching for good optimization techniques. Unfortunately, most algorithms can only find local minima, and once a local minimum is reached, a vanishing gradient cannot distinguish a local minimum from a global one. In recent years there has been some progress in finding solution techniques for this difficult problem.

    One concept frequently used in global optimization is the covering method. The basic idea is to detect and throw away subregions not containing the global minimum until the remaining set is small and is known to...
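
    A minimal sketch of the covering idea (using a Lipschitz-constant bound rather than the linear lower bounds developed in the chapter): subdivide the interval, bound each piece from below, and discard pieces whose bound already exceeds the best value found. The test function and the constant L are made up.

    import numpy as np

    def covering_minimize(f, a, b, L, tol=1e-4):
        # Covering method for min f on [a, b], assuming |f(x) - f(y)| <= L|x - y|.
        # On a piece [lo, hi] with midpoint m, f(m) - L*(hi - lo)/2 is a valid lower
        # bound, so a piece whose bound exceeds the incumbent cannot contain the minimum.
        best_x, best_f = a, f(a)
        stack = [(a, b)]
        while stack:
            lo, hi = stack.pop()
            m = 0.5 * (lo + hi)
            fm = f(m)
            if fm < best_f:
                best_x, best_f = m, fm
            if fm - 0.5 * L * (hi - lo) < best_f - tol:   # piece may still contain the minimum
                stack.append((lo, m))
                stack.append((m, hi))
        return best_x, best_f

    # Multimodal test function on [0, 10]; L = 15 is a crude overestimate of its slope.
    f = lambda x: np.sin(3.0 * x) + 0.1 * (x - 5.0) ** 2
    print(covering_minimize(f, 0.0, 10.0, L=15.0))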

  15. Optimizing the Sum of Linear Fractional Functions
    (pp. 221-258)
    James E. Falk and Susan W. Palocsay

    The term “fractional program” refers to a nonlinear program which involves the optimization of a ratio of functions. These ratio problems appeared in the literature as early as 1956 when Isbell and Marlow considered the strategic military problem of deciding on a distribution of fire over enemy targets of several types. The formulation of this minimax problem led them to develop an iterative process for optimizing a linear fractional function subject to linear constraints. In 1962, the classical paper in the field by Charnes and Cooper appeared. They showed that any linear fractional programming problem can be replaced with at...
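
    For reference (a standard statement, not quoted from the chapter), the Charnes-Cooper substitution turns a single-ratio linear fractional program into a linear program:

    minimize $\frac{c^T x + \alpha}{d^T x + \beta}$ subject to $Ax \le b$, with $d^T x + \beta > 0$ on the feasible set,

    becomes, under the change of variables $y = x/(d^T x + \beta)$ and $t = 1/(d^T x + \beta)$,

    minimize $c^T y + \alpha t$ subject to $Ay \le bt$, $d^T y + \beta t = 1$, $t \ge 0$,

    assuming the feasible region is nonempty and bounded, so that the constraint $t > 0$ can be relaxed to $t \ge 0$.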

  16. Minimizing and Maximizing the Product of Linear Fractional Functions
    (pp. 259-273)
    Hiroshi Konno and Yasutoshi Yajima

    In a series of articles [8, 9, 10, 16], we proposed parametric simplex algorithms for globally minimizing a class of quasi-linear nonconvex functions over a polytope. This class of problems includes, among others, the following problems:

    i) linear multiplicative programming problems:

    (LMP) minimize $\{\, c_0^t x + (c_1^t x)(c_2^t x) \mid x \in X \,\}$,

    ii) minimization of a sum of two linear fractional functions:

    (LFP) minimize $\{\, \frac{d_1^t x + d_{10}}{c_1^t x + c_{10}} + \frac{d_2^t x + d_{20}}{c_2^t x + c_{20}} \mid x \in X \,\}$,

    iii) bilinear programming problems:

    (BLP) minimize $\{\, c_0^t x + d_0^t y + \sum_{j=1}^{k} (c_j^t x)(d_j^t y) \mid x \in X, y \in Y \,\}$,

    where $X \subset R^n$, $Y \subset R^n$ are polytopes.

    We demonstrated through numerical experiments on a number of randomly generated problems that certain variants of our parametric simplex algorithm work very well for this class of problems. In particular, we showed that

    (a)...

  17. Numerical Methods for Global Optimization
    (pp. 274-297)
    Yu. G. Evtushenko, M.A. Potapov and V.V. Korotkich

    We consider the global optimization problem

    $f^* = \min_{x \in X} f(x)$,  (1.1)

    where $f: R^n \to R^1$ is a continuous real-valued objective function and $X \subset R^n$ is a compact feasible set.

    As a special case, we consider the situation where X is a right parallelepiped P with sides parallel to the coordinate axes (a box in the sequel): $P = \{x \in R^n : a \le x \le b,\ a \in R^n,\ b \in R^n\}$.  (1.2)

    Here and below, the vector inequality $a \le b$, where $a, b \in R^n$, means that the componentwise inequalities $a^i \le b^i$ hold for all $i = 1, \ldots, n$.

    The set of global minimum points of the function f (the solution set) and the set of ε-optimal solutions are...

  18. Integral Global Optimization of Constrained Problems in Functional Spaces with Discontinuous Penalty Functions
    (pp. 298-320)
    Quan Zheng and Deming Zhuang

    Let U be a topological space, S a subset of U, and J a real-valued function on U. The problem considered here is to find the infimum

    $c^* = \inf_{u \in S} J(u)$  (1)

    and the set of global minima:

    $H^* = \{u \in S : J(u) = c^*\}$.  (2)

    Under the assumption

    (A): J is lower semi-continuous, S is closed, and there is a real number b such that the level set $H_b = \{u \in S : J(u) \le b\}$ is a nonempty compact set,

    the set H* is nonempty.

    Problems from calculus of variations and optimal control, as well as differential games, require one to consider the case that the underlying space U is infinite-dimensional. But, in general, it...

  19. Rigorous Methods for Global Optimization
    (pp. 321-342)
    Ramon Moore, Eldon Hansen and Anthony Leclerc

    A very active area of research concerns methods for global nonlinear optimization. A number of new international journals on the subject are planned or have just begun [7, 13]. Existing journals are publishing special issues on global optimization [8, 9].

    It has long been held that rigorous global optimization is impossible in the nonlinear case, and this is probably true using only evaluations of functions at points. At the very least we need to be able to compute upper and lower bounds on the ranges of functions over sets. Interval arithmetic, for example, provides upper and lower bounds on ranges...
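
    To make the bounding idea concrete (a toy sketch, not the authors' interval software, and without the outward rounding a rigorous implementation would use), interval addition and multiplication give a guaranteed, though possibly loose, enclosure of the range of an expression over a box:

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    # Enclose the range of f(x, y) = x*y + x*x over x in [-1, 2], y in [0, 3].
    x = Interval(-1.0, 2.0)
    y = Interval(0.0, 3.0)
    print(x * y + x * x)    # an interval guaranteed to contain the true range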

  20. Global Optimization of Composite Laminates Using Improving Hit and Run
    (pp. 343-368)
    Zelda B. Zabinsky, Douglas L. Graesser, Mark E. Tuttle and Gun-In Kim

    In recent years random search algorithms have been used during optimization studies associated with a number of disciplines, including image processing, biology, physics and chemistry. A relatively new application area is the optimal design of laminated composite materials [6,7,9,10,11]. Laminated composites are composed of several thin layers called plies, which are bonded together to form a composite laminate. A single ply consists of long reinforcing fibers (e.g., graphite fibers), embedded within a relatively weak matrix material (e.g., epoxy). All fibers within an individual ply are oriented in one direction. Composite laminates are usually fabricated such that fiber angles vary from...
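
    A minimal sketch of the Improving Hit-and-Run idea named in the title (a schematic rendering on a simple box domain, not the authors' implementation for laminate design): from the current point, draw a random direction, sample a candidate uniformly on the feasible segment of that line, and accept it only if it improves the objective. The box bounds and test objective are made up.

    import numpy as np

    def improving_hit_and_run(f, lower, upper, n_iter=2000, seed=0):
        # Improving Hit-and-Run on the box [lower, upper] (elementwise bounds).
        rng = np.random.default_rng(seed)
        x = rng.uniform(lower, upper)
        fx = f(x)
        for _ in range(n_iter):
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)
            t1 = (lower - x) / d                 # d is nonzero with probability one
            t2 = (upper - x) / d
            t_lo = np.minimum(t1, t2).max()      # feasible segment of the line is [t_lo, t_hi]
            t_hi = np.maximum(t1, t2).min()
            y = x + rng.uniform(t_lo, t_hi) * d
            fy = f(y)
            if fy < fx:                          # keep only improving points
                x, fx = y, fy
        return x, fx

    # Made-up multimodal test objective on the box [-5, 5]^2.
    f = lambda z: np.sum(z * z) + 10.0 * np.sum(np.cos(2.0 * np.pi * z))
    print(improving_hit_and_run(f, np.full(2, -5.0), np.full(2, 5.0)))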

  21. Stochastic Minimization of Lipschitz Functions
    (pp. 369-383)
    Regina Hunter Mladineo

    This paper presents a stochastic modification of a deterministic global optimization algorithm. The introduction of stochastic elements into the algorithm, which in original form [3] converges with relatively few function evaluations, is for the purpose of decreasing overhead (computational effort in each iteration excluding function evaluations).

    The problem we consider is:

    Min f subject to $x \in I^N$

    where $I^N = \{x = (x^1, \ldots, x^N) : 0 \le x^i \le 1\}$.

    We assume furthermore that f satisfies the Lipschitz condition that, for all x, y in $I^N$,

    $|f(x) - f(y)| \le L \|x - y\|$,

    with L a real positive constant and $\|\cdot\|$ the Euclidean distance.

    The deterministic algorithm previously developed by the author has been tested and results have been published [3,...

  22. Topographical Global Optimization
    (pp. 384-398)
    Aimo Törn and Sami Viitanen

    Global optimization, topography graph, parallel algorithms.

    The global optimization problem considered here is characterized by the following. Let f(x) be a function from $A \subset R^n$ to R. It is assumed that the problem is essentially unconstrained, i.e., that the global minimum of f is attained in the interior of A and has a basin with positive measure. Normally A would be a hyperrectangle indicating the region of interest, but any A would do as long as A can be circumscribed by a hyperrectangle H and it is reasonably probable that a point sampled at random in H will...

  23. Lipschitzian Global Optimization: Some Prospective Applications
    (pp. 399-432)
    János Pintér

    In recent years, a broad spectrum of strategies has been proposed to solve the global optimization problem (GOP), stated in the general form

    (1.1)  $\min_{x \in D \subset R^n} f(x)$

    under its diverse (further) specifications. The monographs and surveys provided by Dixon and Szegö, eds. (1975, 1978), Strongin (1978), Fedorov, ed. (1985), Pardalos and Rosen (1987), Ratschek and Rokne (1988), Törn and Žilinskas (1989), Horst and Tuy (1990) and the numerous references therein describe a great variety of GO approaches. These differ considerably with respect to their prior models of the GOP, the search/optimality criteria applied, and the corresponding solution algorithms derived.

    In this paper...

  24. Packet Annealing: A Deterministic Method for Global Minimization. Application to Molecular Conformation
    (pp. 433-477)
    David Shalloway

    The unnormalized Gibbs distribution at temperature T for a system governed by objective (energy) function H(R) over a continuous domain parametrized by R

    $p_T(R) = e^{-H(R)/k_B T}$

    ($k_B$ ≡ Boltzmann’s constant)

    converges at sufficiently low temperature $T_{lo}$ to the simple form¹

    $p_{T_{lo}}(R) \approx e^{-|(R - R_g^*)/A_{lo}|^2}$,

    where the density is concentrated near the global minimum $R_g^*$. The simulated annealing algorithm (Kirkpatrick, 1983), when applied to continuous systems (e.g., Welle, 1986), can be viewed as a method for stochastically tracing this convergence by Monte Carlo simulation. Its success depends on the existence of a (non-linear) correlation between T and the size scale A(T) of the Monte Carlo steps...
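
    For orientation only (a generic simulated annealing sketch in the sense of Kirkpatrick, not Shalloway's packet annealing, with $k_B$ set to 1), Metropolis Monte Carlo samples the Gibbs distribution while the temperature is slowly lowered; the energy function and cooling schedule below are made up.

    import numpy as np

    def simulated_annealing(H, R0, step=0.5, T0=5.0, cooling=0.995, n_iter=5000, seed=0):
        # Metropolis sampling of the Gibbs density exp(-H(R)/T) with geometric cooling.
        rng = np.random.default_rng(seed)
        R = np.asarray(R0, dtype=float)
        E, T = H(R), T0
        best_R, best_E = R.copy(), E
        for _ in range(n_iter):
            cand = R + step * rng.normal(size=R.size)          # Monte Carlo move of scale `step`
            dE = H(cand) - E
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):    # Metropolis acceptance rule
                R, E = cand, E + dE
            if E < best_E:
                best_R, best_E = R.copy(), E
            T *= cooling
        return best_R, best_E

    # Made-up double-well energy in two variables.
    H = lambda R: np.sum((R * R - 1.0) ** 2) + 0.3 * R[0]
    print(simulated_annealing(H, R0=[2.0, -2.0]))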

  25. Mixed-Integer Linear Programming Reformulations for Some Nonlinear Discrete Design Optimization Problems
    (pp. 478-512)
    I.E. Grossmann, V.T. Voudouris and O. Ghattas

    Many problems in engineering design give rise to nonconvex nonlinear programming (NLP) problems (e.g. see Floudas and Pardalos, 1990). Furthermore, quite often due to manufacturing constraints, design variables are restricted to take discrete values for selecting standard sizes which gives rise to mixed-integer nonlinear programs (MINLP) (e.g. see Papalambros and Wilde, 1988; Grossmann, 1990). These problems in many cases have a continuous relaxation that corresponds to a nonconvex NLP. Due to the difficulty in solving these problems, many design models reported in the literature have assumed continuous sizes, and used ad-hoc rounding procedures. It is the purpose of this paper,...

  26. Mixed-Integer Nonlinear Programming on Generalized Networks
    (pp. 513-542)
    Soren S. Nielsen and Stavros A. Zenios

    Network problems constitute an important and widely applicable class of mathematical programming problems. Numerous applications and very efficient solution procedures have long been known for linear, pure or generalized, networks (Kennington and Helgason [1980]). More recently, the development of efficient solution algorithms for non-linear networks have further broadened the scope and applicability of network formulations. The success of network algorithms can primarily be attributed to the special basis structure and sparsity that is characteristic of these problems. This, in turn, facilitates the solution of realistic problems, which are often very large, within a reasonable time frame. Applications today include transportation...

  27. Global Minima in Root Finding
    (pp. 543-560)
    Angelo Lucia and Jinxian Xu

    Trust region (or dogleg) methods for solving nonlinear algebraic equations were introduced by M.J.D. Powell in 1970 and have been very popular in a variety of disciplines over the last twenty years due to their global convergence properties. However, what is not made clear, either in the original paper by Powell or in any of the many modifications that have appeared since, is that the dogleg strategy can converge to a nonzero-valued local minimum of the least squares objective function, which does not correspond to a root, but rather a singular point, of the function under consideration. This is true...
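
    A one-variable illustration of that pitfall (an invented example, not taken from the chapter): for g(x) = x² + 1 the least-squares objective ½g(x)² has gradient g(x)g′(x), which vanishes at x = 0 even though g(0) = 1, so a trust-region least-squares iteration started nearby stops at this singular point rather than at a root (there is none on the reals).

    from scipy.optimize import least_squares

    g = lambda x: x**2 + 1.0           # has no real root
    # The trust-region solver minimizes 0.5*g(x)^2 and converges to x = 0, a
    # nonzero-valued stationary point where g'(x) = 0 (a singular point of g).
    sol = least_squares(g, x0=0.5)
    print(sol.x, sol.fun)              # x close to 0, residual g(x) close to 1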

  28. HOMOTOPY-CONTINUATION ALGORITHM FOR GLOBAL OPTIMIZATION
    (pp. 561-592)
    Amy C. Sun and Warren D. Seider

    In chemical processes, nonlinear programs (NLPs) arise in the models for many components of the process flowsheet. It is usually important, although difficult, to locate all of their stationary points, and especially the global minimum. Often the NLP is for the minimization of the Gibbs free energy to calculate the composition of a process stream in phase and chemical equilibrium, an especially difficult problem when the phase distribution at equilibrium is uncertain. Other NLPs involve the minimization of the annualized cost, often for chemical reactors that exhibit multiple steady states. The sources of the nonlinearities and solution methods for these...

  29. SPACE-COVERING APPROACH AND MODIFIED FRANK-WOLFE ALGORITHM FOR OPTIMAL NUCLEAR REACTOR RELOAD DESIGN
    (pp. 593-615)
    Zhian Li, P. M. Pardalos and S. H. Levine

    The safe and economic operation of a nuclear power reactor is directly related to the nuclear reactor reload design. Therefore, the goal of nuclear reactor fuel management has been to make nuclear reactor fuel reload design optimal and automatic. Attaining this goal involves a few sequential optimal calculations. Among them, the first step is to obtain the optimal End-Of-Cycle (EOC) state k∞ distribution.

    The primary objective of an optimal nuclear reactor reload design is to minimize the nuclear reactor fuel inventory and/or the cost, while satisfying the energy production and the safety requirement imposed on the reactor operation. Many studies...

  30. A GLOBAL OPTIMIZATION APPROACH TO SOFTWARE TESTING
    (pp. 616-633)
    Francesco Zirilli

    In today’s software industry the cost of software production is highly dependent on software testing, that is, the activity of testing that the software modules produced actually perform the functions that they are supposed to perform. Despite its great importance, software testing is a mainly empirical activity [1]. In a more rational approach we can divide the software testing activity into two branches: structural testing and functional testing [2]. The structural testing is independent of the code and consists of checking the flow-chart of the software modules; the functional testing is dependent on the code and consists of...