Semidefinite programming

Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function (a user-specified function that the user wants to minimize or maximize) over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron. In automatic control theory, SDPs are used in the context of linear matrix inequalities. SDPs are in fact a special case of cone programming and can be efficiently solved by interior point methods. In recent years, some quantum query complexity problems have been formulated in terms of semidefinite programs.

In standard (primal) form, an SDP over the space \(\mathbb{S}^n\) of symmetric \(n \times n\) real matrices is

\[
\min_{X \in \mathbb{S}^n} \ \langle C, X \rangle \quad \text{subject to} \quad \langle A_k, X \rangle = b_k \ \ (k = 1, \dots, m), \qquad X \succeq 0,
\]

where \(\langle C, X \rangle = \operatorname{tr}(C^{\top} X)\) is the standard inner product on matrices. For convenience, an SDP may be specified in a slightly different but equivalent form; for example, linear expressions involving nonnegative scalar variables may be added to the program specification. Moreover, a matrix \(X\) is positive semidefinite exactly when it is the Gram matrix of some vectors \(v_1, \dots, v_n\), i.e., \(X_{ij} = \langle v_i, v_j \rangle\). Therefore, SDPs are often formulated in terms of linear expressions on scalar products of vectors.

The dual problem (D-SDP) associated with the primal (P-SDP) above is

\[
\max_{y \in \mathbb{R}^m} \ b^{\top} y \quad \text{subject to} \quad \sum_{k=1}^{m} y_k A_k \preceq C.
\]

Weak duality always holds: every primal feasible \(X\) and dual feasible \(y\) satisfy \(\langle C, X \rangle \geq b^{\top} y\). This is because

\[
\langle C, X \rangle - b^{\top} y = \Big\langle C - \sum_{k=1}^{m} y_k A_k,\ X \Big\rangle \geq 0,
\]

where the last inequality is because both matrices are positive semidefinite and the inner product of two positive semidefinite matrices is nonnegative; the quantity on the left is referred to as the duality gap. When the values of the primal and dual SDPs are equal, the SDP is said to satisfy the strong duality property. Unlike linear programs, where every dual linear program has optimal objective equal to the primal objective, not every SDP satisfies strong duality; in general, the value of the dual SDP may lie strictly below the value of the primal. Under Slater's condition, however, the P-SDP and D-SDP satisfy the following properties: (i) if the primal problem (P-SDP) is bounded below and strictly feasible (i.e., there exists a feasible \(X_0 \succ 0\)), then the dual attains its optimum and the two optimal values are equal; (ii) symmetrically, if the dual problem (D-SDP) is bounded above and strictly feasible, then the primal attains its optimum with the same value. It is also possible to attain strong duality for SDPs without additional regularity conditions by using an extended dual problem proposed by Ramana.

Many familiar problems can be put in this form. A linear program \(\min\, c^{\top} x\) subject to \(Ax + b \geq 0\) becomes an SDP through the constraint \(\operatorname{diag}(Ax + b) \succeq 0\), where \(\operatorname{diag}(v)\) is the square matrix with values in the diagonal equal to the elements of the vector \(v\); a diagonal matrix is positive semidefinite precisely when its diagonal entries are nonnegative. As a second example, consider minimizing the spectral norm of an affine matrix function \(A(x) = A_0 + x_1 A_1 + \dots + x_n A_n\), which can be rewritten as follows. We can use the theory of Schur complements to see that \(\|A\|_2 \leq t\) if and only if

\[
\begin{pmatrix} t I & A \\ A^{\top} & t I \end{pmatrix} \succeq 0
\]

(Boyd and Vandenberghe, 1996). The semidefinite program associated with this problem is therefore to minimize \(t\) over \((x, t)\) subject to this linear matrix inequality.
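The Schur complement equivalence above is easy to check numerically. The following sketch (plain NumPy; the matrix A and the tolerance are invented for illustration) confirms that the block matrix is positive semidefinite exactly when t is at least the largest singular value of A.

```python
# Numerical sanity check of the Schur complement fact: the block matrix
# [[t*I, A], [A^T, t*I]] is positive semidefinite iff t >= ||A||_2.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))          # arbitrary 2x3 matrix (invented data)
sigma_max = np.linalg.norm(A, 2)         # spectral norm = largest singular value

def block_is_psd(t: float) -> bool:
    """Check positive semidefiniteness via the smallest eigenvalue."""
    M = np.block([[t * np.eye(2), A],
                  [A.T, t * np.eye(3)]])
    return np.linalg.eigvalsh(M).min() >= -1e-9   # tolerance for round-off

print(block_is_psd(sigma_max + 1e-3))   # True: t slightly above ||A||_2
print(block_is_psd(sigma_max - 1e-3))   # False: t slightly below ||A||_2
```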
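To make the primal and dual forms concrete, here is a minimal sketch using the CVXPY modeling library (a tooling assumption; the cost matrix C is invented, and CVXPY's bundled conic solver is used). With the single constraint \(\langle I, X \rangle = 1\), the optimal value is the smallest eigenvalue of C, which makes the output easy to verify, and the constraint's dual variable recovers the optimal dual \(y\) (up to the solver's sign convention).

```python
# A small SDP in primal form: minimize <C, X> s.t. trace(X) = 1, X PSD.
# Its optimal value equals the smallest eigenvalue of C.
import numpy as np
import cvxpy as cp

C = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # invented symmetric cost matrix
n = C.shape[0]

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,               # X is positive semidefinite
               cp.trace(X) == 1]     # <I, X> = 1
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

print("primal optimal value:", prob.value)
print("lambda_min(C):       ", np.linalg.eigvalsh(C)[0])
# Strong duality holds here since X = I/n is strictly feasible (Slater):
print("dual variable y:     ", constraints[1].dual_value)
```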
Semidefinite programs are important tools for developing approximation algorithms for NP-hard maximization problems. The first approximation algorithm based on an SDP is due to Michel Goemans and David P. Williamson (JACM, 1995).[1]: Chap.1  They studied the max cut problem: Given a graph G = (V, E), output a partition of the vertices V so as to maximize the number of edges crossing from one side to the other. The problem can be expressed as an integer quadratic program: maximize \(\sum_{(i,j) \in E} (1 - x_i x_j)/2\) over variables \(x_i \in \{-1, 1\}\), where the sign of \(x_i\) indicates the side on which vertex \(i\) is placed. Unless P = NP, this maximization problem cannot be solved efficiently. However, Goemans and Williamson observed a general three-step procedure for attacking this sort of problem: (1) relax the integer quadratic program into an SDP; (2) solve the SDP (to within an arbitrarily small additive error ε); (3) round the SDP solution to obtain an approximate solution of the original integer program. For max cut, the most natural relaxation is

\[
\max \ \sum_{(i,j) \in E} \frac{1 - \langle v_i, v_j \rangle}{2} \quad \text{subject to} \quad \|v_i\|^2 = 1 \ \text{for all } i,
\]

where the \(v_i\) are vectors in \(\mathbb{R}^{|V|}\) replacing the scalars \(x_i \in \{-1, 1\}\). This is an SDP because the objective function and constraints are all linear functions of vector inner products. To round the solution, Goemans and Williamson choose a uniformly random hyperplane through the origin and partition the vertices according to the side on which their vectors lie. Straightforward analysis shows that this procedure achieves an expected approximation ratio (performance guarantee) of 0.87856 − ε. Assuming the unique games conjecture, it can be shown that this approximation ratio is essentially optimal.

Since the original paper of Goemans and Williamson, SDPs have been applied to develop numerous approximation algorithms. Subsequently, Prasad Raghavendra developed a general framework for constraint satisfaction problems based on the unique games conjecture. SDPs are also used in geometry to determine tensegrity graphs, arise in control theory as LMIs, and appear in inverse elliptic coefficient problems as convex, non-linear, semidefiniteness constraints.

SDPs can be solved approximately by the ellipsoid method. Let L be the affine subspace of matrices in \(\mathbb{S}^n\) satisfying the m equational constraints, so that the SDP can be written as

\[
\max_{X \in L,\ X \succeq 0} \ \langle C, X \rangle.
\]

Let R be an explicitly given upper bound on the maximum Frobenius norm of a feasible solution, and ε > 0 a constant. The ellipsoid method returns one of the following outputs: either a matrix in L that is within Frobenius distance ε of the feasible region and whose objective value is within ε of the optimum, or a certificate that the problem has no ε-deep feasible solutions (i.e., it is approximately infeasible). The run-time is polynomial in the binary encodings of the inputs and in log(R/ε), in the Turing machine model. Note that, in general, R may be doubly exponential in n; in that case, the run-time guarantee of the ellipsoid method is exponential in n. But in most applications, R is not so huge.

Most codes are based on interior point methods (CSDP, MOSEK, SeDuMi, SDPT3, DSDP, SDPA). These are robust and efficient for general linear SDP problems, but they are second-order methods and need to store and factorize a large (and often dense) matrix. First-order methods for conic optimization avoid computing, storing and factorizing a large Hessian matrix and scale to much larger problems than interior point methods, at some cost in accuracy. A first-order method is implemented in the Splitting Cone Solver (SCS). Other algorithms use low-rank information and reformulate the SDP as a nonlinear programming problem (SDPLR, ManiSDP). Hazan[13] has developed an approximate algorithm for solving SDPs with the additional constraint that the trace of the variable matrix must be 1.
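The three-step procedure is short enough to run end to end on a toy instance. The sketch below (CVXPY plus NumPy again; the 5-cycle graph and the single random hyperplane are our illustrative choices, not taken from the original paper) solves the relaxation above, factors the solution matrix into unit vectors, and rounds with a random hyperplane.

```python
# Goemans-Williamson on a 5-cycle (max cut = 4): relax, solve, round.
import numpy as np
import cvxpy as cp

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

# Steps 1+2: solve the SDP relaxation. X plays the role of the Gram matrix
# of the unit vectors v_i, so X[i, i] = 1 and X must be PSD.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
prob = cp.Problem(objective, constraints)
prob.solve()

# Recover vectors v_i with <v_i, v_j> = X_ij from an eigendecomposition.
w, Q = np.linalg.eigh(X.value)
V = Q * np.sqrt(np.clip(w, 0, None))          # row i is the vector v_i

# Step 3: round with a uniformly random hyperplane through the origin.
rng = np.random.default_rng(0)
side = np.sign(V @ rng.standard_normal(n))
cut = sum(1 for i, j in edges if side[i] != side[j])

print("SDP upper bound:", prob.value)          # about 4.52 for the 5-cycle
print("rounded cut size:", cut)
```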