Anders Skjäl
Global optimization with twice differentiable constraints

April 25, 2014

Consider a nonlinear programming (NLP) problem where some of the constraints are nonconvex but smooth. If we have the constraint f(x) <= 0 we can overestimate and convexify the feasible domain by replacing the constraint with g(x) <= 0, where g is convex and g <= f. In the alphaBB method (1) this is done by adding univariate quadratic perturbations to f. The effect on the Hessian is the addition of a diagonal matrix P.
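
A minimal sketch of this construction, assuming the standard alphaBB perturbation form alpha_i (x_i^L - x_i)(x_i^U - x_i) on a box [x^L, x^U]; the function f_example, the box and the alpha values below are illustrative only:

    import numpy as np

    def alphabb_underestimator(f, alpha, x_lo, x_up):
        """Return g(x) = f(x) + sum_i alpha_i*(x_lo_i - x_i)*(x_up_i - x_i).

        For alpha_i >= 0 each added term is a univariate quadratic that is
        nonpositive on the box, so g <= f there; its Hessian contribution
        is the diagonal matrix P = 2*diag(alpha).
        """
        def g(x):
            x = np.asarray(x, dtype=float)
            return f(x) + np.sum(alpha * (x_lo - x) * (x_up - x))
        return g

    # Illustrative example on the box [0, 2] x [0, 2].
    f_example = lambda x: x[0] * x[1] - x[0] ** 2    # nonconvex
    alpha = np.array([1.5, 0.5])                     # hypothetical alpha values
    g = alphabb_underestimator(f_example, alpha,
                               x_lo=np.array([0.0, 0.0]),
                               x_up=np.array([2.0, 2.0]))
    print(g([1.0, 1.0]))   # -2.0, while f_example([1, 1]) = 0

The constant Hessian of f_example is [[-2, 1], [1, 0]]; with these alpha values Hg = Hf + 2*diag(alpha) = [[1, 1], [1, 1]], which is positive semi-definite, so g is convex on the box.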

The function g is convex when the Hessian Hg = Hf + P is positive semi-definite everywhere in the domain. Working directly with the Hessian expressions, which depend on x, is typically as hard as solving the original problem. The Hessian is instead approximated by an interval matrix [Hg] = [Hf] + P covering all possible values of Hg(x). A theoretical result is then needed to guarantee that P makes [Hg] positive semi-definite. One intuitive choice is Gerschgorin’s Circle Theorem from linear algebra, which can be extended to interval matrices. This and some other possibilities were described and tested in (1).
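
As an illustration of the Gerschgorin approach: an interval matrix is positive semi-definite if, in every row, the lower bound of the diagonal entry is at least the sum of the largest possible magnitudes of the off-diagonal entries. The sketch below derives a diagonal perturbation from this unscaled condition; it is not the exact (scaled) formula used in (1), and the interval Hessian bounds H_lo, H_hi are assumed given elementwise:

    import numpy as np

    def gerschgorin_alpha(H_lo, H_hi):
        """Diagonal alpha such that [Hf] + 2*diag(alpha) satisfies the
        (unscaled) interval Gerschgorin condition for positive
        semi-definiteness. A sketch, not the formula of reference (1)."""
        H_lo = np.asarray(H_lo, dtype=float)
        H_hi = np.asarray(H_hi, dtype=float)
        n = H_lo.shape[0]
        alpha = np.zeros(n)
        for i in range(n):
            # Largest possible magnitude of each off-diagonal entry in row i.
            radius = sum(max(abs(H_lo[i, j]), abs(H_hi[i, j]))
                         for j in range(n) if j != i)
            # Shift the diagonal so the worst-case Gerschgorin disc
            # stays in the nonnegative half-line.
            alpha[i] = max(0.0, 0.5 * (radius - H_lo[i, i]))
        return alpha

    # Interval Hessian of f(x) = x0*x1 - x0**2 (constant, so exact).
    H_lo = np.array([[-2.0, 1.0], [1.0, 0.0]])
    H_hi = np.array([[-2.0, 1.0], [1.0, 0.0]])
    print(gerschgorin_alpha(H_lo, H_hi))   # [1.5 0.5]

For this constant Hessian the condition returns the alpha values used in the sketch above.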

The Hessian perturbation P was initially limited to be diagonal. In my research I have described the possibilities offered by allowing a nondiagonal P, corresponding to additional bilinear perturbation terms in g. The theoretical framework is described in (2). I am also interested in applying stronger convexity conditions than the interval extension of Gerschgorin’s theorem.
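
To make the role of a nondiagonal P concrete: a symmetric off-diagonal entry p_ij corresponds to a bilinear perturbation term whose Hessian has p_ij in positions (i, j) and (j, i). The sketch below only checks a given candidate P against the same unscaled interval Gerschgorin condition as above; it does not reproduce the construction of (2), and the example matrices are illustrative:

    import numpy as np

    def interval_gerschgorin_psd(H_lo, H_hi, P):
        """Sufficient check that [Hf] + P is positive semi-definite,
        using the (unscaled) interval Gerschgorin condition. P is a
        symmetric perturbation; off-diagonal entries correspond to
        bilinear terms added to g."""
        G_lo = np.asarray(H_lo, dtype=float) + P
        G_hi = np.asarray(H_hi, dtype=float) + P
        n = G_lo.shape[0]
        for i in range(n):
            radius = sum(max(abs(G_lo[i, j]), abs(G_hi[i, j]))
                         for j in range(n) if j != i)
            if G_lo[i, i] < radius:
                return False       # condition not verified for row i
        return True

    H_lo = np.array([[-2.0, 1.0], [1.0, 0.0]])
    H_hi = np.array([[-2.0, 1.0], [1.0, 0.0]])

    P_diag = np.diag([3.0, 1.0])                       # diagonal perturbation
    P_nondiag = np.array([[2.0, -1.0], [-1.0, 1.0]])   # includes a bilinear term
    print(interval_gerschgorin_psd(H_lo, H_hi, P_diag))      # True
    print(interval_gerschgorin_psd(H_lo, H_hi, P_nondiag))   # True

The nondiagonal choice here perturbs the diagonal less (trace 3 instead of 4), illustrating the extra freedom a nondiagonal P provides.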

Some references

(1) Adjiman, C.S., Dallwig, S., Floudas, C.A., Neumaier, A.: A global optimization method, alphaBB, for general twice-differentiable constrained NLPs. Computers & Chemical Engineering 22(9), 1137–1158 (1998).

(2) Skjäl, A., Westerlund, T., Misener, R., Floudas, C.A.: A generalization of the classical alphaBB convex underestimation via diagonal and non-diagonal quadratic terms. Journal of Optimization Theory and Applications 154(2), 462–490 (2012).