Interior-point method

Karmarkar's paper created a surge of interest in interior-point methods. Two years later, James Renegar invented the first path-following interior-point method, with a provably polynomial run-time. Yurii Nesterov and Arkadi Nemirovski later identified a special class of barrier functions, the self-concordant barriers, that can be used to encode any convex set; they guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.

A numerical solver for a given family of programs is an algorithm that, given the coefficient vector, generates a sequence of approximate solutions x_t for t = 1, 2, ..., using finitely many arithmetic operations. A solver is called polynomial if the total number of arithmetic operations needed to reach an ε-approximate solution is at most poly(problem size) · log(V/ε), where V is some data-dependent constant, e.g., the difference between the largest and smallest value in the feasible set. In other words, a solver is polynomial if each additional digit of accuracy requires a number of operations that is polynomial in the problem size.

Path-following methods work as follows. We are given a convex optimization problem (P) in "standard form": minimize c^T x s.t. x in G, where G is a convex set. We are also given a self-concordant barrier function b for G, and we assume that we can compute efficiently the value of b, its gradient, and its Hessian for every point x in the interior of G. For every t > 0, we define the penalized objective f_t(x) := t·c^T x + b(x), and we denote its minimizer by x*(t). All limit points of x*(t), as t approaches infinity, are optimal solutions of the original program (P); the curve traced by x*(t) is called the central path, and the method follows it approximately. Turning this observation into an algorithm requires specifying three things: the barrier function b, the policy for increasing the penalty parameter t, and the inner solver used to approximately minimize f_t.

The main challenge in proving that the method is polynomial-time is that, as the penalty parameter grows, the solution gets near the boundary of G and the penalized function becomes steeper. The run-time of inner solvers such as Newton's method then becomes longer, and it is hard to prove that the total run-time is polynomial. Self-concordance of the barrier is what resolves this difficulty. For an increasing sequence of penalty parameters t_i, we find an approximate minimum of f_{t_i}, denoted by x_i. The approximate minimum is chosen to satisfy the following "closeness condition", where L is the path tolerance: the Newton decrement of f_{t_i} at x_i is at most L, that is,

sqrt( ∇f_{t_i}(x_i)^T [∇^2 f_{t_i}(x_i)]^{-1} ∇f_{t_i}(x_i) ) ≤ L.

The theoretical guarantees assume that the penalty parameter is increased at the rate t_{i+1} = μ·t_i with μ = 1 + r/sqrt(M), where M is the self-concordance parameter of the barrier and r is a fixed constant; with this rate, O(1) Newton steps suffice to restore the closeness condition after each increase, where the constant factor O(1) depends only on r and L. The solution accuracy is proportional to 1/t_i, so to add a single accuracy digit it is sufficient to multiply t_i by 2 (or any other constant factor), which requires O(sqrt(M)) Newton steps, e.g. O(sqrt(m)) for a linear program with m inequality constraints. The number of Newton steps required by the two-step procedure that initializes the method near the central path is bounded by a similar expression.[3]: Thm.4.5.1 In theory, if μ is larger (e.g. 2 or more), then the worst-case number of Newton steps required after each increase is no longer O(1) but O(M), although such long-step variants are often faster in practice. Therefore, many other classes of convex programs can be solved in polytime using a path-following method, if we can find a suitable self-concordant barrier function for their feasible region.
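The short-step recipe above can be sketched in a few lines of code. The following Python fragment is a minimal, illustrative sketch (not a production solver) of path-following for a linear program "minimize c^T x s.t. Ax ≤ b", using the m-self-concordant logarithmic barrier; the function names (follow_path, newton_decrement_and_step), the update factor 1 + 0.1/sqrt(m), the path tolerance L = 0.25, and the stopping rule based on m/t are illustrative assumptions, not values prescribed by the theory above.

```python
import numpy as np

def newton_decrement_and_step(grad, hess):
    """Newton step -H^{-1} g and the Newton decrement sqrt(g^T H^{-1} g)."""
    step = -np.linalg.solve(hess, grad)
    decrement = np.sqrt(grad @ (-step))          # equals sqrt(g^T H^{-1} g)
    return decrement, step

def follow_path(A, b, c, x0, t0=1.0, eps=1e-6, L=0.25):
    """Short-step path-following for: minimize c^T x  s.t.  A x <= b.

    x0 must be strictly feasible (A x0 < b); the barrier used is
    -sum_j ln(b_j - (A x)_j), which is m-self-concordant.
    """
    m = A.shape[0]
    x, t = np.asarray(x0, dtype=float), float(t0)
    mu = 1.0 + 0.1 / np.sqrt(m)                  # short-step penalty update rate
    while m / t > eps:                           # m/t roughly bounds the suboptimality near the path
        t *= mu                                  # increase the penalty parameter
        while True:                              # re-center: approximately minimize f_t
            slack = b - A @ x                    # strictly positive in the interior
            grad = t * c + A.T @ (1.0 / slack)   # gradient of f_t(x) = t c^T x + barrier(x)
            hess = A.T @ ((1.0 / slack**2)[:, None] * A)   # Hessian of the barrier
            lam, step = newton_decrement_and_step(grad, hess)
            if lam <= L:                         # closeness condition: Newton decrement <= L
                break
            x = x + step / (1.0 + lam)           # damped Newton step keeps x strictly feasible
    return x

# Example: minimize x + y over the unit box [0, 1]^2; the minimizer is (0, 0).
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 1., 0., 0.])
c = np.array([1., 1.])
print(follow_path(A, b, c, x0=np.array([0.5, 0.5])))
```

Because the barrier blows up at the boundary of the feasible region, the damped Newton step automatically keeps every iterate strictly feasible, which is what makes this an interior-point method.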
For potential-reduction methods, the problem is presented in conic form:[3]: Sec.5 minimize c^T x s.t. x lies in the intersection of an affine plane and a closed convex cone K. To use the potential-reduction method (specifically, the extension of Karmarkar's algorithm to convex programming), several assumptions are needed.[3]: Sec.6 Assumption C (essentially, that the optimal objective value is known in advance) is specific to Karmarkar's approach; it can be alleviated by using a "sliding objective value". The method is based on the following scalar potential function:

v(x) = F(x) + M·ln(s^T x),

where F is an M-self-concordant barrier for the feasible cone K. The idea of the potential-reduction method is to modify x such that the potential at each iteration drops by at least a fixed constant X (specifically, X = 1/3 − ln(4/3)). But in practice, Karmarkar's method allows taking much larger steps towards the goal, so it may converge much faster than the theoretical guarantees suggest.

The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization: minimize f(x) subject to c_i(x) ≥ 0 for i = 1, ..., m. This inequality-constrained optimization problem is solved by converting it into an unconstrained objective function whose minimum we hope to find efficiently: the logarithmic barrier function B(x, μ) := f(x) − μ·Σ_i ln(c_i(x)), where μ is a small positive scalar sometimes called the barrier parameter. As μ converges to zero, the minimum of B(x, μ) converges to a solution of the original constrained problem. Primal-dual methods apply Newton's method to the perturbed KKT conditions of this barrier problem, updating the primal variables x and the dual variables (Lagrange multipliers) simultaneously.

Here are some special cases of convex programs that can be solved efficiently by interior-point methods: linear programs, quadratically constrained quadratic programs, Lp-norm approximation problems, geometric programs, and semidefinite programs. For a linear program "minimize c^T x s.t. Ax ≤ b" with m constraints and n variables, the logarithmic barrier −Σ_j ln(b_j − (Ax)_j) is m-self-concordant, so O(sqrt(m)) Newton steps suffice per accuracy digit; each Newton step requires O(mn^2) arithmetic operations, and the total runtime complexity of the path-following method is O(m^{3/2} n^2). For the other classes, after converting to the standard form, we can likewise apply path-following methods with a suitable self-concordant barrier, for instance one with parameter M = 4m.
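To make the barrier reformulation above concrete, here is a minimal Python sketch that minimizes the unconstrained surrogate B(x, μ) for a decreasing sequence of barrier parameters. It illustrates only the primal barrier idea; a genuine primal-dual solver would instead apply Newton's method to the perturbed KKT system in the primal and dual variables jointly. The helper names (barrier_objective, solve_sequence), the shrink factor 0.5, and the use of SciPy's Nelder–Mead minimizer as the inner solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def barrier_objective(f, constraints, mu):
    """Unconstrained surrogate B(x, mu) = f(x) - mu * sum_i ln(c_i(x))."""
    def B(x):
        slacks = np.array([c_i(x) for c_i in constraints])
        if np.any(slacks <= 0.0):
            return np.inf                        # outside the interior: infinite penalty
        return f(x) - mu * np.sum(np.log(slacks))
    return B

def solve_sequence(f, constraints, x0, mu0=1.0, shrink=0.5, mu_min=1e-6):
    """Minimize B(x, mu) for mu = mu0, mu0*shrink, ..., warm-starting each solve."""
    x, mu = np.asarray(x0, dtype=float), mu0
    while mu > mu_min:
        result = minimize(barrier_objective(f, constraints, mu), x, method="Nelder-Mead")
        x, mu = result.x, mu * shrink            # successive minimizers trace the central path
    return x

# Example: minimize (x - 2)^2 subject to x >= 0 and 1 - x >= 0; the solution is x = 1.
f = lambda x: (x[0] - 2.0) ** 2
constraints = [lambda x: x[0], lambda x: 1.0 - x[0]]
print(solve_sequence(f, constraints, x0=[0.5]))
```

As μ shrinks, the minimizers of B(x, μ) approach the constrained optimum from the interior of the feasible set; practical primal-dual solvers exploit the same structure but update the primal and dual variables with Newton steps rather than re-minimizing B from scratch at every μ.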
Figure: Example search for a solution. Blue lines show constraints, red points show iterated solutions.
Figure: Trajectory of the iterates of x obtained by using the interior-point method.