On Quadratic Internal Model Principle in Mathematical Programming
Neculai Andrei
ICI Bucharest
(National Institute for R & D in Informatics)
Center for Advanced Modeling and Optimization
8-10 Averescu Blvd.
011455 Bucharest 1, Romania
Abstract: We show that, for continuous mathematical programming problems, every optimization algorithm must encapsulate, implicitly or explicitly, a quadratic internal model of the problem to be solved, which represents the essence of the problem from the viewpoint of the algorithm. Optimization models come from conservation laws. For systems which obey the principle of least action, Noether’s theorem expresses the equivalence between conservation laws and symmetries. But mathematically, symmetries are expressed by quadratic forms. Therefore, at the heart of every real optimization model is a quadratic form. The quadratic internal model principle says that this quadratic form, modified in order to embed the main ingredients of the optimization algorithm, represents the quadratic internal model of the optimization algorithm.
MSC: 49M07, 49M10, 90C06, 65K.
Keywords: Nonlinear optimization, line search methods, Newton system, quadratic programming, Noether theorem.
Neculai Andrei graduated from the Faculty of Mathematics, “A.I. Cuza” University in Iasi, and the Faculty of Automatics and Computer Science at “Politehnica” University in Bucharest. He took his PhD degree in 1984 for his contributions to the digraph approach of large-scale sparse linear dynamic systems. He has been a senior scientific researcher at the National Institute for R&D in Informatics – ICI since 1973. He is the author of 15 books and textbooks, as well as over 100 published papers and 400 technical reports in the area of mathematical modeling and optimization. He is the key architect of the ALLO language and compiler for modeling in mathematical programming, as well as of some professional packages for linear programming and large-scale unconstrained optimization. His main current scientific interests centre on modeling for mathematical programming and on large-scale nonlinear optimization, conjugate gradient methods, interior point methods, and penalty and barrier approaches. Dr. Andrei is a founding member of the Operations Research Society of Romania; a founding member of the Center for Advanced Modeling and Optimization; a member of the Editorial Board of Computational Optimization and Applications – Springer Verlag (since 1992); a member of the Computational Optimization and Applications – Optimization Forum; and a member of the Editorial Board of the Journal of Systems Science and Complexity – Science Press, China, and Allerton Press, USA (since 2001). He is Editor-in-Chief of Advanced Modeling and Optimization – An Electronic International Journal – ICI Press (since 1999) and Editor-in-Chief of the Romanian Journal of Informatics and Automatics. Dr. Andrei is an Alexander von Humboldt fellow, a member of ERCIM, and a member of the Academy of Romanian Scientists (since 2001).
CITE THIS PAPER AS:
Neculai ANDREI, On Quadratic Internal Model Principle in Mathematical Programming, Studies in Informatics and Control, ISSN 1220-1766, vol. 18 (4), pp. 337-348, 2009.
The general equality constrained optimization problem is

min f(x)
subject to h(x) = 0,

where f : R^n -> R and h : R^n -> R^m. An optimization algorithm can be considered as a generator of a sequence of points x_k which satisfy the constraints of the problem in such a way that x_k -> x*, where x* is a local solution of the problem. The line search methods are characterized by two main actions. At iteration k a search direction d_k is generated and then a suitable point x_{k+1} = x_k + alpha_k d_k is computed by a step length alpha_k so that a reduction of the minimizing function or of a merit function (a penalty function) is obtained. The main action in any optimization algorithm is the design of the generator of directions d_k. The step length alpha_k is computed using the standard procedures of Armijo or of Wolfe in order to reduce the values of the function f or of a merit function. Plenty of nonlinear optimization algorithms are known, and there are many papers and books presenting them from both theoretical and computational viewpoints.
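The Armijo step-length procedure mentioned above can be sketched as follows. This is a minimal illustration, not a procedure from the paper; the quadratic test function and the constants `c1` and `rho` are assumptions chosen only for the example.

```python
# Minimal backtracking line search with the Armijo (sufficient decrease)
# condition: f(x + alpha*d) <= f(x) + c1 * alpha * <grad f(x), d>.
# Illustrative only; the test function and constants are hypothetical.

def armijo_step(f, grad, x, d, alpha0=1.0, c1=1e-4, rho=0.5):
    """Backtrack from alpha0 until the Armijo condition holds."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative
    alpha = alpha0
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c1 * alpha * slope:
        alpha *= rho
    return alpha

# Example: f(x) = x1^2 + 10*x2^2, steepest-descent direction d = -grad f(x).
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 20.0 * x[1]]

x = [1.0, 1.0]
d = [-g for g in grad(x)]
alpha = armijo_step(f, grad, x, d)
x_new = [xi + alpha * di for xi, di in zip(x, d)]
print(alpha, f(x_new) < f(x))  # the accepted step reduces f
```

A Wolfe line search would additionally test a curvature condition on the directional derivative at the new point; the backtracking loop above checks only sufficient decrease.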
To solve the above problem, or a more general version of it with inequality constraints, each optimization algorithm must “understand” it. There is a large diversity of optimization algorithms. Many of them solve a constrained optimization problem by converting it to a sequence of unconstrained problems via Lagrange multipliers or via penalty or barrier functions. Another class of methods solves nonlinear programming problems by moving from a feasible point to a new, improved one along a feasible direction. However, every optimization algorithm, in one way or another, is based on the Karush-Kuhn-Tucker optimality conditions. Generally, these conditions are expressed as a nonlinear algebraic system. In the framework of the Newton machine this nonlinear system is reduced to a sequence of linear algebraic systems, which is equivalent to a sequence of quadratic programming problems. The quadratic internal model principle in mathematical programming states that “an optimization algorithm must encapsulate implicitly or explicitly a quadratic internal model of the problem to be solved”. Every optimization algorithm uses its own quadratic internal model, which takes into account the main ingredients defining the algorithm. This is the minimal part that must be encapsulated by the algorithm in order to solve the problem.
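The reduction described above (KKT conditions as a nonlinear system, then one linear “Newton system” per iteration) can be illustrated on a tiny equality-constrained problem. This is a minimal sketch under assumptions: the example problem and the naive Gaussian elimination routine are chosen only for the illustration, not taken from the paper.

```python
# One Newton step on the KKT system of an equality-constrained problem.
# For min f(x) s.t. h(x) = 0, the KKT conditions grad f(x) + A(x)^T lam = 0,
# h(x) = 0 form a nonlinear system; Newton's method turns it into a sequence
# of linear "Newton systems". Illustrative only.

def solve_linear(M, b):
    """Naive Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= m * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Example: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0.
# KKT: 2*x1 + lam = 0, 2*x2 + lam = 0, x1 + x2 = 1.
# The Hessian of the Lagrangian is 2*I and the Jacobian of h is A = [1, 1],
# so the Newton system from the point x = (0, 0), lam = 0 is:
#   [ 2 0 1 ] [dx1 ]   [ 0 ]
#   [ 0 2 1 ] [dx2 ] = [ 0 ]
#   [ 1 1 0 ] [dlam]   [ 1 ]
KKT = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]
dx1, dx2, dlam = solve_linear(KKT, rhs)
print(dx1, dx2, dlam)  # prints 0.5 0.5 -1.0
```

Because the example problem is itself quadratic with a linear constraint, a single Newton step lands exactly on the solution x* = (0.5, 0.5) with multiplier -1; for a general nonlinear problem the same linear system is re-formed and re-solved at each iterate.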
The philosophical motivation behind the quadratic internal model principle in mathematical programming is as follows.
As is known, the mathematical model of a physical reality is based on conservation laws. In physics, a conservation law states that a particular measurable property of an isolated system does not change while the system evolves. Any particular conservation law is a mathematical identity associated with a certain symmetry of a physical system. For systems which obey the principle of least action and therefore have a Lagrangian, Noether’s theorem expresses the equivalence between conservation laws and the invariance of physical laws with respect to certain transformations called symmetries. The behavior of a physical system can often be expressed in terms of a specific function of the system variables, called the Lagrangian. The system follows a path through the phase space such that the integral of the Lagrangian is stationary. For a system with Lagrangian L of the variables q and q̇, the equation of motion is

d/dt (∂L/∂q̇) = ∂L/∂q.
From this equation Noether observed that if the quantity on the right-hand side is zero (meaning that L is symmetric over q), then the rate of change of the quantity in parentheses on the left-hand side is also zero, i.e. it is a conserved quantity. Generally, any symmetry of the Lagrangian function corresponds to a conserved quantity, and vice versa. It seems that at the fundamentals of our cognoscible universe lies the concept of symmetry. But mathematically, symmetries are expressed by quadratic forms, i.e. homogeneous polynomials of degree two in a number of variables. It is worth saying that quadratic forms are central objects in mathematics and are ubiquitous in physics and chemistry. Quadratic forms occur in number theory, Riemannian geometry, and Lie theory, and they always express the energy of a system, particularly in relation to the L2 norm, which leads to the use of the concept of Hilbert spaces. Therefore, it is quite natural to see that at the heart of every mathematical model is a quadratic form. This quadratic form must be replicated in an optimization algorithm in order to get a solution of the corresponding problem. This is the quadratic internal model principle in mathematical optimization.
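A one-line worked instance of the argument above (a standard textbook example, not taken from this paper): for a free particle of mass m, the Lagrangian is quadratic in the velocity and independent of the position, so the equation of motion directly yields a conserved quantity.

```latex
% Free particle: L does not depend on q, only (quadratically) on \dot q.
L(q,\dot q) = \tfrac{1}{2}\, m \dot q^{2}
\quad\Longrightarrow\quad
\frac{\partial L}{\partial q} = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\Big(\frac{\partial L}{\partial \dot q}\Big)
  = \frac{d}{dt}\,(m \dot q) = 0.
```

Here the symmetry (translation invariance in q) produces the conserved momentum m q̇, and the Lagrangian itself is a quadratic form in q̇, which is exactly the connection between symmetry and quadratic forms invoked in the text.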
The structure of this paper is as follows. In Section 2, following the synthesis of Yuan, we present some ingredients showing that, in the context of the Karush-Kuhn-Tucker theory, every optimization algorithm for solving general equality constrained optimization problems reduces to a nonlinear algebraic system. Using the Newton machine, this nonlinear system reduces further to a sequence of linear algebraic systems, known as Newton systems. We show that these Newton systems have the same structure, while being different for specific optimization methods. In Section 3 the quadratic internal model principle for nonlinear optimization is presented, showing that at the heart of every optimization algorithm there is a quadratic model (a quadratic programming problem) embedding the main ingredients of the algorithm.
- ANDREI, N., Internal Model Principle in Mathematical Programming, Newsletter 16 of EUROPT – The Continuous Optimization Working Group of EURO, September 2009.
- BEALE, E. M. L., Numerical Methods, in Nonlinear Programming, edited by J. Abadie, North-Holland, Amsterdam, 1967.
- BENSON, H. Y., D. F. SHANNO, R. J. VANDERBEI, Interior Point Methods for Nonconvex Nonlinear Programming: Jamming and Comparative Numerical Testing. Mathematical Programming, 99, 2004, pp. 35-48.
- CELIS, M. R., J. E. DENNIS, JR., R. A. TAPIA, A Trust Region Algorithm for Nonlinear Equality Constrained Optimization, in P. T. Boggs, R. H. Byrd and R. B. Schnabel (Eds.), Numerical Optimization, SIAM, Philadelphia, 1985, pp. 71-82.
- COLEMAN, T. F., Y. LI, A Trust Region and Affine Scaling Interior Point Method for Nonconvex Minimization with Linear Inequality Constraints, Mathematical Programming, 88, 2000, pp. 1-31.
- FOURIER, J. B. J., Mémoire sur la statique, contenant la démonstration du principe des vitesses virtuelles et la théorie des moments. Journal de l’École Polytechnique, tome 2, cahier 5, 1798.
- HAN, S. P., A Globally Convergent Method for Nonlinear Programming. Journal of Optimization Theory and Applications, 22, 1977, pp. 297-309.
- LAGRANGE, J.-L., Théorie des fonctions analytiques. Imprimerie de la République, Paris, 1797.
- NOETHER, E., Invariante Variationsprobleme. Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Math.-phys. Klasse, 1918, pp. 235-257. [English translation: M. A. Tavel, Invariant Variation Problems, Transport Theory and Statistical Physics, 1 (3), 1971, pp. 186-207.]
- GARCIA-PALOMARES, U. M., O. L. MANGASARIAN, Superlinearly Convergent Quasi-Newton Algorithms for Nonlinearly Constrained Optimization Problems. Mathematical Programming, 11, 1976, pp. 1-13.
- POWELL, M. J. D., The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations, in Nonlinear Programming 3, edited by O.L. Mangasarian, R.R. Meyer and S.M. Robinson, Academic Press, London, 1978.
- POWELL, M. J. D., Y. YUAN, A Trust Region Algorithm for Equality Constrained Optimization. Mathematical Programming, 49, 1991, pp. 189-211.
- SCHARLAU, W., On the History of the Algebraic Theory of Quadratic Forms. In Quadratic Forms and their Applications, Proceedings of the Conference on Quadratic Forms and their Applications, July 5-9, 1999, University College Dublin, E. Bayer-Fluckiger, D. Lewis and A. Ranicki (Eds.). Published as Contemporary Mathematics 272, A.M.S., 2000, pp. 229-259.
- VANDERBEI, R. J., Linear Programming. Foundations and Extensions. Kluwer Academic Publishers, Boston, 1996.
- WILSON, R. B., A Simplicial Algorithm for Concave Programming. Ph.D. Thesis, Harvard University, Graduate School of Business Administration, 1963.
- YUAN, Y., Linear Systems Associated with Numerical Methods for Constrained Optimization. Journal of Computational Mathematics, 21, 2003, pp. 71-84.