Edited by Athanasios Migdalas, Panos M. Pardalos and Sverre Story,
Kluwer Academic Publishers, Dordrecht, 1997, 585 pp.,
ISBN 0-7923-4583-5, $319.50
These two books, which were published at almost the same time, are addressed to a relatively large audience. They may be of interest to people working on parallel optimization algorithms; the first book, in particular, may interest readers involved in real-life applications of optimization modeling. In addition, operations research and computer science students may benefit from them, and both books may be used as textbooks for graduate courses in those specializations. However, some background in mathematical analysis is necessary as a prerequisite.
In my opinion, these two books together constitute a state-of-the-art summary of parallel optimization. The book by Censor and Zenios is much more consistent and homogeneous and contains, among other things, a description of a relatively new approach based on generalized projections. The second book is very carefully edited and contains a set of separate, though well-selected, papers.
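To give a flavor of the projection-based approach treated in the Censor and Zenios book, here is a minimal sketch, my own illustration rather than code from either book, of Cimmino's method, a classical parallel projection algorithm for linear systems: the projections onto the individual hyperplanes are mutually independent, so they can be computed in parallel and then averaged.

```python
import numpy as np

def cimmino_step(A, b, x, w):
    """One iteration of Cimmino's parallel projection method for Ax = b:
    project the current point onto each hyperplane a_i . x = b_i
    independently (these projections can run in parallel), then take
    the weighted average with weights w. Illustrative sketch only."""
    residuals = b - A @ x                  # r_i = b_i - a_i . x
    norms2 = np.sum(A * A, axis=1)         # ||a_i||^2 for each row
    # Projection onto hyperplane i is x + (r_i / ||a_i||^2) a_i;
    # the weighted sum over all rows gives the next iterate.
    return x + A.T @ (w * residuals / norms2)

# A small consistent system with solution (1, 2)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
w = np.ones(3) / 3                         # equal weights summing to 1
x = np.zeros(2)
for _ in range(200):
    x = cimmino_step(A, b, x, w)
print(np.round(x, 3))                      # converges to the solution
```

The structural point the book makes is visible here: all row projections use only the current iterate, so on a parallel machine each processor can handle a block of rows before the cheap averaging step.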
The first book is devoted to a very important question: how to use parallelism when solving large-scale optimization models. The authors consider three main approaches. The first, a relatively new one, relies on sequential orthogonal projections and their generalization based on Bregman's generalized projections. Algorithms belonging to that group facilitate parallel computation due to the structure of their operations. The second approach exploits the structure of the model and the sparsity patterns often present in large-scale optimization models. Such structures arise naturally, for instance, in large spatial systems (transportation or telecommunication problems). The authors investigate decomposition algorithms based on linearization or diagonal-quadratic approximations; in that way, they obtain an algorithm with a relatively simple coordinating (master) problem. The third approach to parallelizing optimization algorithms concerns the primal-dual path-following algorithm (an example of an interior point algorithm). All interior point methods require, at each step, the solution of a linear system of equations involving the matrix AA^T, where A is the matrix of linear constraints of the linear or quadratic programming problem in question. The authors show how to exploit sparsity when solving that system of linear equations in parallel.
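The linear-algebra kernel mentioned above can be sketched as follows; this is a generic illustration of the normal-equations solve at an interior point step, not the authors' implementation, and the diagonal scaling d and right-hand side r are placeholders for the quantities a particular method would supply.

```python
import numpy as np

def normal_equations_step(A, d, r):
    """Solve the system (A D A^T) y = r, with D = diag(d), d > 0, that
    arises at each iteration of a primal-dual path-following method.
    A : m x n constraint matrix, d : positive diagonal scaling (length n),
    r : right-hand side (length m). Names are illustrative only."""
    M = A @ np.diag(d) @ A.T           # m x m, symmetric positive definite
    L = np.linalg.cholesky(M)          # factorization M = L L^T
    z = np.linalg.solve(L, r)          # forward substitution
    return np.linalg.solve(L.T, z)     # backward substitution

# Tiny example with a 2 x 3 constraint matrix
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
d = np.array([1.0, 0.5, 2.0])
r = np.array([1.0, 2.0])
y = normal_equations_step(A, d, r)
print(np.allclose(A @ np.diag(d) @ A.T @ y, r))
```

The parallelism the book exploits lives inside the Cholesky factorization: when A is sparse, AA^T inherits a sparsity structure that allows independent columns (supernodes) of the factor to be computed concurrently.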
The book is a nice combination of sound mathematical theory and applications. Part one is devoted to the theory of generalized distances and projections and their use in proximal minimization applied to linear programming problems; it also contains some elements of the theory of penalty, barrier and augmented Lagrangian methods. Part two describes iterative projection algorithms, model decomposition algorithms and interior point algorithms developed on the theoretical basis of part one. Part three presents optimization models for such problems as matrix estimation, image reconstruction from projections, treatment planning in radiation therapy, multicommodity network flows, and planning under uncertainty. At the end, the reader finds a discussion of implementation issues and reported computational results.
The second book is in some sense complementary to the first one. Its range stretches from theoretical models for parallel algorithm design and their complexity, through a view of parallel computers through the eyes of an experienced programmer, to sparse linear systems arising in various optimization problems. It even contains a paper devoted to the variational inequality problem. However, it does not touch optimization modeling or real-life applications at all. This last aspect is the strongest merit of the book by Censor and Zenios, whose first, theoretical part is motivated by the applications considered in its second part.
The book edited by Migdalas, Pardalos and Story covers a broader scope of optimization algorithms: for instance, discrete and stochastic optimization problems and variational inequalities, which are not represented in the first book (practically restricted to linearly constrained continuous optimization problems). The first two chapters discuss theoretical models for parallel algorithm design and for their complexity. The third chapter presents a survey of current high-performance parallel computer architectures and discusses their performance bottlenecks. The fourth chapter is devoted to scalable parallel algorithms for sparse linear systems, and the fifth investigates automatic parallelization of the computation of systems of ordinary differential equations arising in a 2D bearing mechanical problem.
The next eight chapters are devoted to optimization problems and methods. Chapter six contains a survey of parallel algorithms for network problems and a thorough discussion of the implementation of a parallel solver for the traffic assignment problem. Chapter seven reviews the sequential branch-and-bound method and discusses the problems connected with its parallelization. Chapter eight presents the parallelization of heuristic methods for combinatorial optimization. Chapter nine contains an analysis of decomposition algorithms for differentiable optimization. Chapter ten describes parallel algorithms for finite-dimensional variational inequalities, and chapter eleven parallel algorithms for stochastic programming. Chapter twelve deals with heuristic algorithms for global optimization problems, while chapter thirteen presents logarithmic barrier function algorithms for neural network training.
The first book, in my opinion, would be a valuable addition to the private library of anyone, whether student, researcher or practitioner, interested in operations research and the various aspects of parallel optimization. The second book is almost four times as expensive as the first one, and, in my opinion, its content does not justify such a big difference in price, especially since approximately one-third of its material may be found in general, much cheaper, books on parallel computing.
Technical University of Warsaw