Proceedings of the Princeton Symposium on Mathematical Programming

Edited by HAROLD W. KUHN
Copyright Date: 1970
Pages: 628
https://www.jstor.org/stable/j.ctt13x0wct
    Book Description:

    This volume contains thirty-three selected general research papers devoted to the theory and application of the mathematics of constrained optimization, including linear programming and its extensions to convex programming, general nonlinear programming, integer programming, and programming under uncertainty.

    Originally published in 1971.

    The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.

    eISBN: 978-1-4008-6993-0
    Subjects: Mathematics

Table of Contents

  1. Front Matter
    (pp. i-ii)
  2. PREFACE
    (pp. iii-iii)
    H. W. Kuhn
  3. Table of Contents
    (pp. iv-vi)
  4. PART I. LARGE SCALE SYSTEMS
    • TWO METHODS OF DECOMPOSITION FOR LINEAR PROGRAMMING
      (pp. 1-24)
      J. Abadie and M. Sakarovitch

      Consider a linear program having the following structure:

      where x is an n₁-vector, y is an n₂-vector, ..., z is an nₜ-vector,

      p is an m₁-vector, q is an m₂-vector, ..., r is an mₜ-vector,

      h is an m-vector,

      P is an m₁ × n₁ matrix, Q is an m₂ × n₂ matrix, ..., R is an mₜ × nₜ matrix,

      A is an m × n₁ matrix, B is an m × n₂ matrix, ..., C is an m × nₜ matrix,

      c is an n₁ row-vector, d is an n₂ row-vector, ..., e is an nₜ row-vector.

      Such...
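      The displayed structure itself has not survived in this excerpt. Given the dimensions listed above, the standard block-angular reading of it (a reconstruction offered as an assumption, not a quotation from the paper) is to minimize $cx + dy + \cdots + ez$ subject to the coupling constraints $Ax + By + \cdots + Cz = h$ and the independent block constraints $Px = p,\ Qy = q,\ \ldots,\ Rz = r$, with $x, y, \ldots, z \ge 0$; decomposition methods exploit the fact that only the coupling constraints tie the blocks together.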

    • MATRIX GENERATORS AND OUTPUT ANALYZERS
      (pp. 25-36)
      E. M. L. Beale

      The term “matrix generator” refers to the use of a computer to help with the task of assembling the data for a mathematical programming problem. Many people seem to be in favor of matrix generators without having a very precise idea of what they can be expected to do: and this is hardly surprising, because those who have been most active in developing matrix generators do not entirely agree about it. Parts of this paper must therefore be controversial, but I do not want to stress these. Other aspects of matrix generators are discussed at this symposium by Minns (1967)...

    • DECOMPOSITION APPROACHES FOR SOLVING LINKED PROGRAMS
      (pp. 37-50)
      R. H. Cobb and J. Cord

      We are concerned with linear programs that have the following special structure:

      $\min z = c_1 x_1 + d_1 y_1 + c_2 x_2 + d_2 y_2 + \cdots + d_{T-1} y_{T-1} + c_T x_T$

      subject to

      The cₜ, dₜ, bₜ, xₜ and yₜ are appropriately dimensioned vectors, and the Aₜ are mₜ × nₜ dimensional matrices (mₜ ≤ nₜ). Problems with this structure commonly arise in industry and government when activities are planned over time. The yₜ vectors tie adjacent time periods together and usually represent inventories; yₜ may be less than mₜ in dimension, as shown in the example in...
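      The constraint rows themselves are not reproduced in the excerpt. A typical staircase form consistent with the description (an illustrative assumption, not the authors' exact statement) couples consecutive periods only through the inventory vectors, for example $A_t x_t + y_{t-1} - y_t = b_t$ for $t = 1, \ldots, T$ with $x_t, y_t \ge 0$; fixing the linking variables $y_t$ then separates the problem into T smaller linear programs, which is the structure a decomposition approach can exploit.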

    • LARGE SCALE SYSTEMS AND THE COMPUTER REVOLUTION
      (pp. 51-72)
      G. B. Dantzig

      From its very inception, it was envisioned that linear programming would be applied to very large, detailed models of economic and military systems. Kantorovitch’s 1939 proposals, which were before the advent of the electronic computer, mentioned such possibilities [78]. Linear programming evolved out of the U.S. Air Force interest in 1947 in finding optimal time-staged deployment plans in case of war [126]; a problem whose mathematical structure is similar to that of finding an optimal growth pattern of a developing economy and similar to other control problems [41], [58], [123]. Structurally the dynamic problems are characterized in discrete form by...

  5. PART II. PROGRAMMING UNDER UNCERTAINTY
    • STOCHASTIC GEOMETRIC PROGRAMMING
      (pp. 73-92)
      M. Avriel and D. J. Wilde

      In this paper we consider the problem of optimal engineering design by geometric programming when some of the parameters (data) of the problem are random variables. Geometric programming was originally developed to optimize engineering design of processes or units [2]. This work generalizes the method to the case of possible randomness in the process operating conditions or the unit costs.

      Standard methods of mathematical programming, e.g., the simplex method of linear programming, cannot handle such a case since they can provide an optimal solution only for one specified set of values of the parameters. It is apparent therefore...

    • THE CURRENT STATE OF CHANCE-CONSTRAINED PROGRAMMING
      (pp. 93-112)
      M. J. L. Kirby

      In this paper we present a survey of chance-constrained programming results and applications. We begin with a detailed discussion of the basic concepts involved in chance-constrained programming. This discussion is followed by a summary of results which have been established for a variety of chance-constrained models. We then turn to a survey of applications. Here the applicability of chance-constrained models to problems in financial planning and competitive decision making is emphasized.

      The general chance-constrained model which we are going to consider can be expressed as

      max f(c, X)

      subject to P(AX ≤ b) ≥ α, (1)

      where P means...
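      To make the role of α concrete, here is the standard deterministic equivalent of a single chance constraint under an assumed normal distribution (an illustration in the spirit of the survey, not a formula quoted from it). If one row reads P(aᵀX ≤ b) ≥ α with only the right-hand side b random and b ~ N(μ, σ²), then the row holds exactly when $a^T X \le \mu + \sigma\,\Phi^{-1}(1 - \alpha)$, where Φ is the standard normal distribution function; for α near 1 this is simply a tightened linear constraint, so this special case reduces to an ordinary linear program.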

    • ON PROBABILISTIC CONSTRAINED PROGRAMMING
      (pp. 113-138)
      A. Prekopa

      The term probabilistic constrained programming means the same as chance constrained programming, i.e., optimization of a function subject to certain conditions where at least one is formulated so that a condition, involving random variables, should hold with a prescribed probability. The probability is usually not prescribed exactly but a lower bound is given instead which is in practice near unity.

      The formulation of a problem in connection with a stochastic system follows a line where we first consider the nonstochastic case and formulate the problem, which loses its meaning as soon as certain parameters become random. Then the next step...

    • STOCHASTIC PROGRAMS WITH RECOURSE: SPECIAL FORMS
      (pp. 139-162)
      D. W. Walkup and R. J. B. Wets

      This paper is a sequel to Stochastic Programs with Recourse [20] in which we defined stochastic programs with recourse and developed some of their theoretical properties. In this paper we consider some special forms of stochastic programs with recourse, which, because they are less general, may prove to be more amenable to computational solution. We also show how certain problems studied by others, including the active approach of G. Tintner [15, 17] and the conditional probability model of chance constrained programming treated by A. Charnes and M. Kirby in [6, 7], can be represented as stochastic programs with recourse.

      In [20]...

    • NONLINEAR ACTIVITY ANALYSIS AND DUALITY
      (pp. 163-178)
      A. C. Williams

      The duality theory of R. T. Rockafellar [4] is applied to some simple nonlinear cases of activity analysis. We suppose that the technology is linear and additive, but that the demands for (and perhaps also supplies of) product are not explicitly specified; i.e., they are variable and carry some cost. A special case of this model, a case in which demands are random variables, is treated in some detail.

      A main purpose of this paper is to bring out the economic interpretation of the notion of conjugate function and, more generally, of the duality theory which has been...

  6. PART III. INTEGER PROGRAMMING
    • DUALITY IN DISCRETE PROGRAMMING
      (pp. 179-198)
      E. Balas

      We shall write the mixed-integer programming problem in the form I:

      Given c = (cⱼ), A = (aᵢⱼ) and b = (bᵢ), i ε M = {1, . . . , m}, j ε N = {1, . . . , n}, find vectors x = (xⱼ), y = (yᵢ), i ε M, j ε N, and

      max cx

      subject to Ax + y = b

      x, y ≥ 0

      (I) xⱼ integer, $j \in N_1 \subset N$

      Let $x = \binom{x^1}{x^2}$, where xⱼ is a component of x¹ if j ε N₁, and a component of x² if j ε N - N₁.

      Further, let...

    • INTEGER PROGRAMMING: METHODS, USES, COMPUTATION
      (pp. 199-266)
      M. L. Balinski

      This paper attempts to present the major methods, successful or interesting uses, and computational experience relating to integer or discrete programming problems. Included are descriptions of general algorithms for solving linear programs in integers, as well as some special purpose algorithms for use on highly structured problems. This reflects a belief, on the author's part, that various clever methods of enumeration and other specialized approaches are the most efficacious means existent by which to obtain solutions to practical problems. A serious try at gathering computational experience has been made - but facts are difficult to uncover.

      The paper is written...

    • ON RECENT DEVELOPMENTS IN INTEGER PROGRAMMING
      (pp. 267-302)
      M. L. Balinski

      Since publication in 1965 of the survey paper reprinted above a considerable research effort has been carried on by many people working in different fields and with diverse needs to develop solution procedures for solving integer programs, that is, linear programs in which some subset of the variables are required to be integer valued. The importance of having methods for solving a great many practical decision problems is, of course, well documented (see above and bibliography below). The aim of this brief review is threefold: (i) to bring up to date, in summary fashion, the 1965 survey; (ii) perhaps more...

    • ON MAXIMUM MATCHING, MINIMUM COVERING AND THEIR CONNECTIONS
      (pp. 303-312)
      M. L. Balinski

      Considerable work has been devoted to the study of two almost equivalent problems of finding “a maximum matching of nodes by edges” and a “minimum covering of nodes by edges” for a given graph (see references). On one hand, necessary and sufficient conditions have been given for a matching or a covering to be optimal ([5], [9]); on another, these conditions have been used to develop “good combinatorial algorithms” ([3]) for solving these problems ([2], [7], [10]); and, on still another, the polytope whose extreme points are the matchings has been established ([1], [6]).

      There appears to have been little...

    • PROGRAMMES MATHEMATIQUES NONLINEARES A VARIABLES BIVALENTES
      (pp. 313-322)
      P. Huard

      Methods for solving mathematical programs in integer variables are by now fairly numerous and of diverse types. Many of them, however, and particularly those concerned with problems in 0-1 (bivalent) variables, consist of partial enumerations, that is, of a directed exploration of the set of solutions.

      Although such algorithms are in theory necessarily finite, since the set of candidate integer solutions is finite, the practical efficiency of these methods depends essentially on the choice of the exploration criterion. At present no criterion seems to exist that is effective for all types of problems. In...

    • ENUMERATION ALGORITHMS FOR THE PURE AND MIXED INTEGER PROGRAMMING PROBLEM
      (pp. 323-338)
      G. Zoutendijk

      The problem we are interested in in this paper is the so-called mixed integer linear programming problem of the 0-1 type: determine

      $W_0^* = \min \left\{ p^T x + q^T w \mid Ax + Bw \le b,\ x \ge 0,\ w_j = 0 \text{ or } 1 \right\}$ (1)

      For this type of problem two different methods have been developed:

      (1) Dual decomposition [3] in which a sequence of linear programming problems and a sequence of “pure” integer problems have to be solved.

      (2) Branch and bound methods [6], [4] which use enumeration (tree searching) starting from the linear programming solution of (1) with the integer requirements replaced by $0 \le w_j \le 1$.

      Some experiments by Mr. G. Lofstrand of...
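      The tree search in method (2) can be sketched very compactly. The following is a minimal illustration, assuming SciPy's linprog and invented example data; it is not the code referred to in the paper, only the LP-relaxation branch and bound idea it describes.

      # Branch and bound sketch for a 0-1 mixed integer program of type (1):
      #   minimize p'x + q'w  subject to  Ax + Bw <= b,  x >= 0,  w_j in {0, 1}.
      # Assumes SciPy; the data at the bottom are illustrative, not from the paper.
      import numpy as np
      from scipy.optimize import linprog

      def branch_and_bound(p, q, A, B, b):
          n_x, n_w = len(p), len(q)
          c = np.concatenate([p, q])
          A_ub = np.hstack([A, B])
          best = {"value": np.inf, "x": None, "w": None}

          def solve_node(w_bounds):
              # LP relaxation: integer requirements replaced by 0 <= w_j <= 1.
              bounds = [(0, None)] * n_x + w_bounds
              res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds, method="highs")
              if not res.success or res.fun >= best["value"]:
                  return  # infeasible node, or its bound cannot beat the incumbent
              w = res.x[n_x:]
              fractional = [j for j in range(n_w) if min(w[j], 1.0 - w[j]) > 1e-6]
              if not fractional:
                  # All w_j are 0 or 1: record a new incumbent solution.
                  best.update(value=res.fun, x=res.x[:n_x], w=np.round(w))
                  return
              j = fractional[0]  # branch on the first fractional w_j
              for fixed in (0, 1):
                  child = list(w_bounds)
                  child[j] = (fixed, fixed)
                  solve_node(child)

          solve_node([(0, 1)] * n_w)
          return best

      # Illustrative instance: minimize 10 x + 2 w1 + 3 w2 with x + 2 w1 + 2 w2 >= 3.
      p = np.array([10.0]); q = np.array([2.0, 3.0])
      A = np.array([[-1.0]]); B = np.array([[-2.0, -2.0]]); b = np.array([-3.0])
      print(branch_and_bound(p, q, A, B, b))  # optimum 5 at w = (1, 1), x = 0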

  7. PART IV. ALGORITHMS
    • ON NEWTON’S METHOD IN NONLINEAR PROGRAMMING
      (pp. 339-352)
      A. Ben-Israel

      The problem of solving f(x) = 0 is a special case of solving f(x) ∈ C, where C is a closed convex set. The least squares versions of these problems are respectively Min ||f(x)||² and Min d²(f(x), C), d(·,·) being the distance.

      Iterative methods for the solution of nonlinear least squares problems were extended in [3] to nonlinear problems over convex sets, and applied in particular to linear and nonlinear inequalities. Linearization plays a dual role in this approach: Convex sets are linearized in the sense that their projections are treated as perpendicular projections on supporting hyperplanes, and...

    • EXTENDING NEWTON’S METHOD TO SYSTEMS OF LINEAR INEQUALITIES
      (pp. 353-358)
      H. D. Mills

      Let f be an n vector real analytic function of a real n vector x with values

      f(x) = (fᵢ(x)), (1)

      and Jf be the Jacobian of f with values

      $Jf(x) = \left( \frac{\partial f_i(x)}{\partial x_j} \right)$ (2)

      Consider the equation

      f(x) = 0. (3)

      It is well known [1] that if Jf(x) is nonsingular in a neighborhood of a solution x° of (3), then Newton’s method, defined by the transformation

      $Nx = x - Jf(x)^{-1}f(x)$, (4)

      converges quadratically to x°; i.e., there exists a neighborhood R of x° and constant A such that

      $|Nx - x^\circ| \le A|x - x^\circ|^2$, x ε R (5)...
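      As a concrete illustration of the map N in (4), here is a minimal Newton iteration for a square nonlinear system, with an assumed example (not taken from the paper); the quadratic convergence in (5) requires a nonsingular Jacobian near the solution.

      # Newton's method Nx = x - Jf(x)^{-1} f(x) for a square system f(x) = 0.
      import numpy as np

      def newton(f, jacobian, x, tol=1e-12, max_iter=50):
          for _ in range(max_iter):
              step = np.linalg.solve(jacobian(x), f(x))  # solve Jf(x) d = f(x)
              x = x - step                               # apply the transformation N
              if np.linalg.norm(step) < tol:
                  break
          return x

      # Assumed example system: x1^2 + x2^2 - 1 = 0 and x1 - x2 = 0.
      f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
      J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
      print(newton(f, J, np.array([1.0, 0.5])))  # tends to (1/sqrt 2, 1/sqrt 2)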

    • INTERIOR POINT METHODS FOR THE SOLUTION OF MATHEMATICAL PROGRAMMING PROBLEMS
      (pp. 359-366)
      J. D. Roode

      We shall be concerned with the problem of maximizing a function f(x) of the vector x ε Eₙ over a region R ⊂ Eₙ, where $R = \{x \mid f_i(x) \le 0,\ i \in I_1;\ f_i(x) = 0,\ i \in I_2\}$ and I₁ and I₂ are some finite index sets.

      For this problem we shall discuss a certain class of methods, viz. interior point methods. In an interior point method a sequence of feasible points $\{x^k\}$ is constructed such that for all k, $f_i(x^k) < 0$ if $i \in I_3$, $I_3 \subset I_1$, where for $i \in I_3$, $f_i(x)$ is nonlinear, and any point of accumulation of the sequence solves the stated problem; for nonconvex problems we shall usually have to be satisfied with a constrained...
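      One classical interior point construction, the logarithmic barrier of Fiacco and McCormick, is sketched here purely as background (it is an assumption, not a statement of the specific class treated in the paper): the nonlinear constraints indexed by I₃ are absorbed into a sequence of subproblems $\max_x\ f(x) + \frac{1}{t_k} \sum_{i \in I_3} \log(-f_i(x))$ with $t_k \to \infty$, whose solutions satisfy $f_i(x) < 0$ for $i \in I_3$ by construction, the remaining constraints being imposed directly; under suitable conditions the subproblem solutions accumulate at a solution of the original problem.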

  8. PART V. APPLICATIONS
    • THE SOLUTION OF A FIXED CHARGE LINEAR PROGRAMMING PROBLEM
      (pp. 367-376)
      P. Bod

      It is a known fact that in a developing economy capital goods must generally be considered a rather limited resource. This circumstance frequently justifies the endeavour to carry out at minimum investment costs certain development projects for which several alternative variants exist, especially so in the case of the nonproductive investment projects of the economy.

      The mathematical formulation of decision problems of this character has led us to a special fixed-charge linear programming problem. The speciality of the problem consists in the fact that the objective function contains only fixed costs and no variable ones.

      In the literature of mathematical...

    • ON CLASSES OF CONVEX AND PREEMPTIVE NUCLEI FOR N-PERSON GAMES
      (pp. 377-390)
      A. Charnes and K. Kortanek

      In our earlier paper, [3], we developed new connections between the concepts of cores and balanced sets of n-person games as introduced by Shapley in [6] and Peleg in [4] and the duality theory of linear programming. We also obtained a characterization of the core of a game by linear programming and answered in the affirmative a conjecture set forth by Shapley on the sharpness of proper minimal balanced collections. Recently Schmeidler, [5], developed another interesting notion for a solution concept, that of the nucleolus of an n-person game. By using elementary topology Schmeidler proves existence and uniqueness of a...

    • WHEN IS A TEAM “MATHEMATICALLY” ELIMINATED?
      (pp. 391-402)
      A. J. Hoffman and T. J. Rivlin

      Our purpose is to give a different proof of a result of Schwartz [2] and generalize it. The setting for our problem is a league of n teams. In the course of a season each team plays m games with every other team. Each game results in one of the contesting teams winning and the other losing. At the end of the season the teams are arranged, in inverse order, according to the number of games they have won, in places 1, . . . , n, and the team (or teams) which has won the most games wins (or...

    • ON THE EXISTENCE OF OPTIMAL DEVELOPMENT PLANS
      (pp. 403-428)
      D. McFadden

      A plan for economic development is a description of the production activities required of each firm and the commodity vectors assigned to each supplier of resources and consumer unit, over the lifetime of an economy. The objective of development planning is to choose from the set of feasible plans one that is “best” in terms of the planner’s imputation of the society’s welfare. In practical applications, development plans are usually to maximize an objective function over a finite horizon, subject to terminal conditions. However, the terminal conditions are derivable in principle from an optimization beyond the finite horizon of the...

  9. PART VI. THEORY
    • OPTIMALITY AND DUALITY IN NONLINEAR PROGRAMMING
      (pp. 429-444)
      O. L. Mangasarian

      Most of the sufficient optimality criteria and duality results in nonlinear programming are based on the classical paper of Kuhn and Tucker [11]. All the sufficient optimality conditions of Kuhn and Tucker require the convexity of the constraint function g(x) appearing in the inequality constraint $g(x) \le 0$. Thus in order to handle an equality constraint h(x) = 0, it is first replaced by the two inequalities $h(x) \le 0$ and $-h(x) \le 0$ and then it is required that h(x) be both convex and concave, and hence linear. The linearity of equality constraints has then been almost a universal restriction in all sufficient optimality and...

    • GEOMETRIC PROGRAMMING: DUALITY IN QUADRATIC PROGRAMMING AND lp-APPROXIMATION I
      (pp. 445-480)
      E. L. Peterson and J. G. Ecker

      Duality theories have played a significant role in the development of mathematical programming. Generally, a duality theory establishes certain relationships between a given optimization problem, called the primal program, and a related optimization problem, called the dual program. An example of such a duality theory is found in linear programming (Gale, Kuhn, and Tucker [9], Dantzig and Orden [2], and Goldman and Tucker [10]). Another well-known example is given by the more general duality theory for linearly constrained quadratic programs (Dennis [3], Dorn [4, 5], Wolfe [22], Hanson [11], Mangasarian [13], Huard [12], and Cottle [1]). A rather complete...

    • CONJUGATE CONVEX FUNCTIONS IN NONLINEAR PROGRAMMING
      (pp. 481-486)
      R. T. Rockafellar

      Let $f_0, f_1, \ldots, f_m$ be real-valued functions which are convex, but not necessarily differentiable, and consider the convex program in which $f_0(x)$ is minimized subject to the constraints $f_1(x) \le 0, \ldots, f_m(x) \le 0$. Real numbers $\lambda_1, \ldots, \lambda_m$ are called Lagrange multipliers for the program if they are nonnegative and the (unconstrained) infimum of the convex function $f_0 + \lambda_1 f_1 + \cdots + \lambda_m f_m$ is finite and equal to the constrained infimum of $f_0$.

      The meaning of such Lagrange multipliers is connected with a notion of perturbation, namely where the given program is perturbed by subtracting constants $u_i$ from the constraint functions $f_i$. For each $u = (u_1, \ldots, u_m) \in R^m$, let $p(u_1, \ldots, u_m)$ denote the infimum of $f_0(x)$...
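      A standard way to state this connection (a known result of conjugate duality, paraphrased here rather than quoted from the paper): nonnegative numbers $\lambda_1, \ldots, \lambda_m$ are Lagrange multipliers for the program precisely when p(0) is finite and $p(u) \ge p(0) - \lambda_1 u_1 - \cdots - \lambda_m u_m$ for every $u \in R^m$, i.e., when $-\lambda$ is a subgradient of the perturbation function p at u = 0; the value p(0) is the constrained infimum of $f_0$.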

  10. PART VII. NONLINEAR PROGRAMMING
    • A COMPARATIVE STUDY OF NONLINEAR PROGRAMMING CODES
      (pp. 487-502)
      A. R. Colville

      Numerous algorithms have been developed over the years for the solution of nonlinear programming problems. Most of these algorithms and many variations of them have now been programmed for the computer. The comparative study described in this paper is an attempt to gather experimental evidence in order to evaluate the relative computational efficiency of many of these methods. Interest in this area stems from the need for the development of new large-scale nonlinear programming codes for third generation computer systems. In the past, a number of theoretical studies have been made describing and sometimes comparing different classes of nonlinear programming...

    • MINIMIZATION OF A SEPARABLE FUNCTION SUBJECT TO LINEAR CONSTRAINTS
      (pp. 503-510)
      V. De Angelis

      The paper discusses a method of using separable programming to minimize nonlinear functions of variables subject to linear inequality constraints. It is assumed that the objective function can be represented as the sum of nonlinear functions of single arguments. Following the normal procedure in separable programming, we introduce “special variables” representing the weights attached to points on the piece-wise linear approximations to these functions. The special feature of the method is that when a special variable drops from the basis, the reduced cost of the other neighbor of the one that remains from this group is computed, and if it is...
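      For concreteness, the usual separable programming device the excerpt refers to (standard notation, assumed rather than copied from the paper) replaces each nonlinear term $f_j(x_j)$ by an interpolation on grid points $p_{j1} < \cdots < p_{jK}$ with weights, the “special variables,” $\lambda_{jk}$: one writes $x_j = \sum_k \lambda_{jk} p_{jk}$ and $f_j(x_j) \approx \sum_k \lambda_{jk} f_j(p_{jk})$ with $\sum_k \lambda_{jk} = 1$, $\lambda_{jk} \ge 0$, together with the restriction that at most two adjacent $\lambda_{jk}$ are positive, so the approximation stays on the piece-wise linear interpolant.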

    • DIRECT HYPERCONE UNCONSTRAINED MINIMIZATION
      (pp. 511-522)
      L. Haller and I. G. T. Miller

      A digital method of unconstrained minimization is described, including the adaptation of the step size, the choice of directions within a specified hypercone, a stopping rule, and final convergence criteria. Examples are given.

      1. We shall describe a method for unconstrained optimization of a function of several variables, such that no derivatives are employed in the procedure, only function values. The method has the following features:

      a) a “learning” or “adaptive” step-length; otherwise the procedure may in certain circumstances prove exceedingly slow, see [1, 2];

      b) satisfactory performance not only on easy ridges but in narrow curved canyons as well;...

    • NONLINEAR PROGRAMMING AND SECOND VARIATION SCHEMES IN CONSTRAINED OPTIMAL CONTROL PROBLEMS
      (pp. 523-538)
      G. Horne and G. S. Tracz

      In a general way the problem of optimal control can be regarded as the problem of determining the controllable inputs to a given dynamic system, subject to various constraints, so that the outputs optimize some specified performance criterion. This paper presents a method for solving the following class of optimal control problems: Find the control vector u(t) in [t₀, t₁] which minimizes a performance index of the form

      $J(u) = \int_{t_0}^{t_1} L[x(t), u(t), t]\,dt$ (1)

      while satisfying

      $\frac{dx(t)}{dt} = \dot{x}(t) = f[x(t), u(t), t], \quad x(t_0) = a$...

    • ON CONTINUOUS FINITE-DIMENSIONAL CONSTRAINED OPTIMIZATION
      (pp. 539-550)
      G. Zoutendijk

      This paper will deal with nonlinear programming which, in the restricted sense the word is being used to-day, is nothing else than constrained optimization in which all functions involved are continuous and have a finite number of arguments.

      Therefore, the title is much more descriptive, although less palatable than the usual name nonlinear programming.

      It is a reasonable requirement that any nonlinear programming method should work well in the following two important special cases:

      1. linear programming

      2. unconstrained optimization.

      For Case 1 we have available the simplex method which performs a vertex to vertex path to the optimum, each...

  11. PART VIII. PIVOTAL METHODS
    • QUADRATIC FORMS SEMI-DEFINITE OVER CONVEX CONES
      (pp. 551-566)
      R. W. Cottle, G. J. Habetler and C. E. Lemke

      We shall say that a differentiable function f defined over a convex set K is “K-flat” if its gradient vanishes at each of its zeros belonging to K. In particular, if f is quadratic and homogeneous, that is, f(z) = z'Mz, it is shown that if f is K-flat, then f is semidefinite on K. This furnishes a useful characterization of semi-definite matrices (when K is the whole space) and a large class of copositive matrices called “copositive-plus” (when K is the nonnegative orthant). Preliminary conclusions are made which are expected to prove useful in further study. We note especially that a...

    • APPLICATIONS OF PRINCIPAL PIVOTING
      (pp. 567-582)
      T. D. Parsons

      When M is a square matrix, the linear system y = xM (or more generally, y = xM + q) allows us to form the inner product y·xᵀ, which is the quadratic form xMxᵀ (or xMxᵀ + qxᵀ) for solutions x, y of the system. The technique of pivoting on principal submatrices of M is useful in the study of the properties of M and its associated quadratic form, since such pivoting preserves not only the solutions of the linear system (by giving “combinatorially equivalent” linear systems, in the sense of A.W. Tucker [13], [14],...

    • LEAST DISTANCE PROGRAMMING
      (pp. 583-588)
      A. W. Tucker

      Let C be a given real m by n matrix (with no all-zero column) and c a given real row n-tuple. Let λ be the row m-tuple of coordinates of a point in Euclidean m-space. Then $\{\lambda \mid \lambda C \ge c\}$ is a closed convex polyhedral set, the intersection of n closed halfspaces in m-space. The quadratic program

      minimize $\phi(\lambda) = \frac{1}{2}\lambda\lambda'$ constrained by $\lambda C \ge c$ (1)

      seeks the point λ* of the polyhedral set $\{\lambda \mid \lambda C \ge c\}$ at least distance from the origin (λ = 0). (Here ' means transpose and ≥ means greater than or equal in each component.)

      Let z be a column...
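      A minimal numerical sketch of program (1), assuming SciPy's SLSQP solver and invented data for C and c (nothing here is taken from the paper): it computes the point of $\{\lambda \mid \lambda C \ge c\}$ nearest the origin.

      # Least distance program (1): minimize (1/2) lambda lambda'  subject to  lambda C >= c.
      import numpy as np
      from scipy.optimize import minimize

      C = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])  # m = 3, n = 2, no all-zero column
      c = np.array([1.0, 1.0])    # given row n-tuple

      res = minimize(
          fun=lambda lam: 0.5 * lam @ lam,  # phi(lambda) = (1/2) lambda lambda'
          x0=np.zeros(C.shape[0]),
          jac=lambda lam: lam,
          constraints={"type": "ineq", "fun": lambda lam: lam @ C - c},
          method="SLSQP",
      )
      print(res.x)  # lambda*: the feasible point at least distance from the origin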

  12. PART IX. ABSTRACTS
    (pp. 591-620)