"Well-behaved solutions in (convex) (non)linear optimization"
Abstract:
The ellipsoid method is a central algorithm in the theory of convex optimization. Famously, it yielded the first provably efficient algorithm for solving linear programs. Later, it was used to show the equivalence of optimization and separation in (nonlinear) convex optimization. An important (but sometimes overlooked) detail in applying the ellipsoid method is that one needs an a priori bound on the norm of an optimum solution to achieve good runtime guarantees. Optimum solutions of linear programs arise as solutions to systems of linear equations, which allows one to bound their norm. A similar idea works for quadratic programs. For polynomials of higher degree, the situation is more complicated.
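The norm bound for linear programs alluded to above can be sketched as follows (a standard argument via Cramer's rule, included here for context; it is not part of the talk itself):

```latex
% If a linear program with constraint data $(A, b)$ attains its optimum, it
% attains it at a basic solution $x$, i.e., $x$ satisfies $A_B x_B = b_B$ for
% some nonsingular square submatrix $A_B$ of the constraint system.
% By Cramer's rule, if all entries of $A$ and $b$ are integers of absolute
% value at most $M$, then each coordinate of $x$ is a ratio of determinants:
\[
  |x_i| \;=\; \frac{\bigl|\det A_B^{(i)}\bigr|}{\bigl|\det A_B\bigr|}
        \;\le\; n!\, M^n,
\]
% so $\log \|x\|_\infty$ is polynomial in the bit size of the input, which is
% exactly the a priori bound the ellipsoid method needs.
```

Here $A_B^{(i)}$ denotes $A_B$ with its $i$-th column replaced by $b_B$; the factorial bound uses the Leibniz formula for the determinant and $|\det A_B| \ge 1$ for a nonsingular integer matrix.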
We show that convex polynomials (of arbitrary degree) admit well-behaved optima. The same is not true for their nonconvex counterparts. In particular, convex polynomials can be minimized efficiently using the ellipsoid method. As part of our proof, we revisit an (erroneous) claim of O. Hesse (1851) on the geometry of polynomials whose Hessian determinant is identically zero.
Joint work with David Steurer and Manuel Wiedmer.