Duality (optimization)
In mathematical optimization, duality is the principle that optimization problems may be viewed from either of two perspectives: the primal problem or the dual problem. Any feasible solution to the dual of a minimization problem gives a lower bound on the primal optimal value; this fact is called weak duality. In general, the optimal values of the primal and dual problems need not be equal; their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. The Lagrangian dual problem is obtained by forming the Lagrangian of a minimization problem, using nonnegative Lagrange multipliers to add the constraints to the objective function, and then solving for the primal variable values that minimize the Lagrangian.

In general, given two dual pairs of separated locally convex spaces \((X, X^*)\) and \((Y, Y^*)\) and a function \(f : X \to \mathbb{R} \cup \{+\infty\}\), the primal problem is to find \(\hat{x}\) such that
\[
f(\hat{x}) = \inf_{x \in X} f(x),
\]
that is, a point at which the infimum of \(f\) is attained. Constraint conditions can be built into \(f\) by taking \(\tilde{f} = f + I_{\mathrm{constraints}}\), where \(I_{\mathrm{constraints}}\) vanishes on the feasible set and preserves the constrained infimum. The latter condition is trivially, but not always conveniently, satisfied for the characteristic function (i.e. \(I_{\mathrm{constraints}}(x) = 0\) for \(x\) satisfying the constraints and \(+\infty\) otherwise). \(\tilde{f}\) is then extended to a perturbation function \(F : X \times Y \to \mathbb{R} \cup \{+\infty\}\) such that \(F(x, 0) = \tilde{f}(x)\).[2] The duality gap is the difference of the right- and left-hand sides of the inequality
\[
\sup_{y^* \in Y^*} -F^*(0, y^*) \;\le\; \inf_{x \in X} F(x, 0),
\]
where \(F^*\) is the convex conjugate in both variables and \(\sup\) denotes the supremum. If \(d^*\) is the optimal dual value and \(p^*\) is the optimal primal value, then the duality gap is equal to \(p^* - d^*\); for minimization problems it is always nonnegative, and it is zero exactly when strong duality holds.[5]

In computational optimization, another "duality gap" is often reported: the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal primal iterate and the value of the dual problem; under regularity conditions, the value of the dual problem equals the value of the convex relaxation of the primal problem. The convex relaxation is the problem that arises when the non-convex feasible set is replaced with its closed convex hull and the non-convex objective function is replaced with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function.

Linear case

In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables, and the goal is to maximize the value of the objective function subject to the constraints. In the dual problem, the objective function is a linear combination of the m bounds appearing in the m primal constraints, and there are n dual constraints, each of which places a lower bound on a linear combination of the m dual variables.

In the linear case, in the primal problem, from each sub-optimal point that satisfies all the constraints there is a direction, or subspace of directions, in which to move that increases the objective function. Moving in any such direction is said to remove slack between the candidate solution and one or more constraints; an infeasible value of the candidate solution is one that exceeds one or more of the constraints. In the dual problem, varying the dual vector is equivalent to revising the upper bounds of the primal problem, and the lowest upper bound is sought; that is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. This intuition is made formal by the equations in Linear programming: Duality, and is illustrated numerically in the sketch below.
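As a concrete illustration of the linear case, the following minimal sketch solves a small primal LP and its dual and checks that the duality gap vanishes. It is not taken from the article: the data c, A, b are assumed for illustration, and SciPy's linprog is used as the solver.

```python
# A minimal sketch of weak/strong duality in linear programming using
# scipy.optimize.linprog. The specific numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# Primal: maximize c^T x  subject to  A x <= b, x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate c to maximize; variables default to x >= 0.
primal = linprog(-c, A_ub=A, b_ub=b)
p_star = -primal.fun  # optimal primal value

# Dual: minimize b^T y  subject to  A^T y >= c, y >= 0,
# written as -A^T y <= -c to match linprog's <= convention.
dual = linprog(b, A_ub=-A.T, b_ub=-c)
d_star = dual.fun  # optimal dual value

# Weak duality gives d_star >= p_star for this max/min pair; strong
# duality for feasible, bounded LPs makes the gap zero.
print(p_star, d_star, d_star - p_star)  # gap ~ 0 up to solver tolerance
```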
Nonlinear case

To ensure that the global maximum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets. This is the significance of the Karush–Kuhn–Tucker conditions: they provide necessary conditions for identifying local optima of non-linear programming problems. There are additional conditions (constraint qualifications) that are necessary so that it is possible to define the direction to an optimal solution; such a solution is a local optimum, but possibly not a global one.

The optimal solution to the dual program is a lower bound for the optimal solution of the original (primal) program; this is the weak duality principle. If the primal problem is convex and bounded from below, and there exists a point at which all nonlinear constraints are strictly satisfied (Slater's condition), then the optimal solution to the dual program equals the optimal solution of the primal program; this is the strong duality principle. Solving the dual program requires evaluating the dual function, and in general this may be hard, as we need to solve a different minimization problem for every \(\lambda\).[18]: Prop. 3.2.2

Given a nonlinear programming problem in standard form
\[
\begin{aligned}
\text{minimize } & f_0(x) \\
\text{subject to } & f_i(x) \le 0, \quad i \in \{1, \dots, m\}, \\
& h_j(x) = 0, \quad j \in \{1, \dots, p\},
\end{aligned}
\]
with the domain \(\mathcal{D} \subset \mathbb{R}^n\) having non-empty interior, the Lagrangian \(L : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}\) is defined as
\[
L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{j=1}^p \nu_j h_j(x).
\]
The vectors \(\lambda\) and \(\nu\) are called the dual variables or Lagrange multiplier vectors associated with the problem, and the Lagrange dual function is
\[
g(\lambda, \nu) = \inf_{x \in \mathcal{D}} L(x, \lambda, \nu).
\]
The dual function yields lower bounds on the optimal value \(p^*\): for any \(\lambda \ge 0\) and any \(\nu\), we have \(g(\lambda, \nu) \le p^*\). If a constraint qualification such as Slater's condition holds and the original problem is convex, then we have strong duality, i.e.
\[
d^* = \sup_{\lambda \ge 0,\, \nu} g(\lambda, \nu) = p^*.
\]
(A numerical sketch of the dual function appears at the end of this section.) When the infimum defining \(g\) is instead expressed through the stationarity condition \(\nabla_x L = 0\), one obtains the Wolfe dual problem. This problem may be difficult to deal with computationally, because its objective function is not concave in the joint variables \((\lambda, x)\).[19]

History

According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. (Dantzig's foreword to Nering and Tucker, 1993)

Applications

In support vector machines (SVMs), formulating the primal problem of SVMs as the dual problem can be used to implement the kernel trick, although the dual formulation has higher time complexity in the historical cases. The dual formulation is sketched below.
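To make the SVM remark concrete, here is the textbook hard-margin formulation, stated as an illustration rather than reproduced from the article; training pairs \((x_i, y_i)\) with labels \(y_i \in \{-1, +1\}\) are assumed:
\[
\begin{aligned}
\text{primal:} \quad & \min_{w,\, b}\ \tfrac{1}{2} \lVert w \rVert^2
 \quad \text{s.t.} \quad y_i\, (w^\top x_i + b) \ge 1, \quad i = 1, \dots, n, \\
\text{dual:} \quad & \max_{\alpha \ge 0}\ \sum_{i=1}^n \alpha_i
 - \tfrac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j\, y_i y_j\, x_i^\top x_j
 \quad \text{s.t.} \quad \sum_{i=1}^n \alpha_i y_i = 0.
\end{aligned}
\]
The training points enter the dual only through the inner products \(x_i^\top x_j\), so these can be replaced by a kernel \(k(x_i, x_j)\); this is the kernel trick. The dual has one variable per training example, which is the source of the higher time complexity mentioned above.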
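Finally, the numerical sketch of the Lagrange dual function promised above. The one-dimensional problem, minimize \(x^2\) subject to \(1 - x \le 0\), is an assumption chosen for illustration; evaluating \(g(\lambda)\) requires a separate unconstrained minimization for each \(\lambda\), exactly the difficulty noted earlier, and SciPy's minimize_scalar is used for that inner step.

```python
# Minimal sketch of the Lagrange dual function for the toy convex problem
#   minimize x^2  subject to  1 - x <= 0   (illustrative assumption),
# whose optimal value is p* = 1, attained at x = 1.
from scipy.optimize import minimize_scalar

def g(lam):
    # g(lam) = inf_x [ x^2 + lam * (1 - x) ]: a separate unconstrained
    # minimization for every multiplier value, which is why evaluating
    # the dual function can be expensive in general.
    res = minimize_scalar(lambda x: x**2 + lam * (1.0 - x))
    return res.fun

p_star = 1.0  # known optimal primal value of the toy problem

# Weak duality: every g(lam) with lam >= 0 is a lower bound on p*.
for lam in [0.0, 1.0, 2.0, 3.0]:
    assert g(lam) <= p_star + 1e-9

# Slater's condition holds (x = 2 is strictly feasible), so strong
# duality gives max over lam of g(lam) = p*; the maximum is at lam = 2.
print(g(2.0))  # ~ 1.0
```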