Convex optimization

In general, there are three options regarding the existence of a solution:[7]: chpt.4 the problem may be infeasible (the feasible set is empty); the optimal value may be unbounded below; or the optimal value may be finite, in which case it may or may not be attained at an optimal point.

A convex optimization problem is in standard form if it is written as

    minimize_x   f(x)
    subject to   g_i(x) ≤ 0,  i = 1, …, m
                 h_i(x) = 0,  i = 1, …, p

where:[7]: chpt.4 x ∈ R^n is the optimization variable, the objective function f : R^n → R is convex, the inequality constraint functions g_i : R^n → R are convex, and the equality constraint functions h_i : R^n → R are affine. The feasible set of the problem consists of all points x satisfying these constraints.[7]: chpt.2 It is convex, being the intersection of the (convex) domain of f, the (convex) sublevel sets {x : g_i(x) ≤ 0}, and the (affine, hence convex) sets {x : h_i(x) = 0}.

Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function f can be re-formulated equivalently as the problem of minimizing the convex function −f, so concave maximization over a convex set is also a convex optimization problem.

In the standard form it is possible to assume, without loss of generality, that the objective function is linear. This is because any program with a general objective can be transformed into a program with a linear objective by adding a single variable t and a single constraint, as follows:[9]: 1.4

    minimize_{x, t}   t
    subject to        f(x) − t ≤ 0
                      g_i(x) ≤ 0,  i = 1, …, m
                      h_i(x) = 0,  i = 1, …, p

Every convex program can be presented in a conic form, which means minimizing a linear objective over the intersection of an affine plane and a convex cone:[9]: 5.1

    minimize_x   c^T x
    subject to   x ∈ (L + b) ∩ K

where K is a closed pointed convex cone, L is a linear subspace of R^n, and b is a vector in R^n. A linear program in standard form is the special case in which K is the nonnegative orthant of R^n.

The linear equality constraints can be eliminated: writing them jointly as Ax = b, the feasible points can be parameterized as x = Fz + x0, where x0 is any particular solution and the columns of F span the null space of A, which yields an equivalent problem in z with no equality constraints. This means that, in principle, one can restrict attention to convex optimization problems without equality constraints. In practice, however, it is often preferred to retain the equality constraints, since they might make some algorithms more efficient, and also make the problem easier to understand and analyze.

The following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations:[7]: chpt.4 [10] least squares, linear programming, convex quadratic programming, quadratic minimization with convex quadratic constraints, second-order cone programming, semidefinite programming, conic optimization, geometric programming, and entropy maximization, among other special cases.

The following are useful properties of convex optimization problems:[11][7]: chpt.4 every local minimum is a global minimum; the optimal set is convex; and if the objective function is strictly convex, then the problem has at most one optimal point. These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.

As the equality constraints are all linear, they can be eliminated with linear algebra and integrated into the objective, thus converting an equality-constrained problem into an unconstrained one. In the class of unconstrained (or equality-constrained) problems, the simplest ones are those in which the objective is quadratic. For these problems, the KKT conditions (which are necessary for optimality) are all linear, so they can be solved analytically.[7]: chpt.11 For unconstrained (or equality-constrained) problems with a general convex objective that is twice-differentiable, Newton's method can be used.[7]: chpt.11 Newton's method can be combined with line search for an appropriate step size, and it can be mathematically proven to converge quickly.

Problems that also contain inequality constraints are commonly solved by reducing them to unconstrained (or equality-constrained) problems: a barrier function, enforcing the inequality constraints, is added to the objective function.[7]: chpt.11 Such interior-point methods have to be initialized by finding a feasible interior point using so-called phase I methods, which either find a feasible point or show that none exists. Phase I methods generally consist of reducing the search in question to a simpler convex optimization problem.[citation needed] Optimality for a convex minimization problem given in standard form by a cost function and inequality constraints can also be characterized through Lagrange multipliers and the associated KKT conditions.
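To make the barrier approach above concrete, the following is a minimal Python sketch, not an implementation from the cited references, of a logarithmic-barrier method with damped Newton steps and backtracking line search, applied to a small convex quadratic program with linear inequality constraints. The function name barrier_method, the tolerances, the barrier update factor mu, and the toy problem data are all illustrative assumptions, and a strictly feasible starting point is assumed to be supplied (phase I is skipped).

```python
import numpy as np

def barrier_method(P, q, A, b, x0, t0=1.0, mu=10.0, tol=1e-8, max_newton=50):
    """Minimize 0.5*x'Px + q'x subject to A x <= b (P symmetric PSD),
    using a logarithmic barrier with damped Newton steps.
    x0 must be strictly feasible (A x0 < b)."""
    m = A.shape[0]
    x, t = x0.astype(float), t0
    while m / t > tol:                        # stop when the barrier is tight enough
        for _ in range(max_newton):           # centering step: Newton's method
            s = b - A @ x                      # slacks, must stay positive
            grad = t * (P @ x + q) + A.T @ (1.0 / s)
            hess = t * P + A.T @ np.diag(1.0 / s**2) @ A
            dx = np.linalg.solve(hess, -grad)
            lam2 = float(-grad @ dx)           # squared Newton decrement
            if lam2 / 2.0 < 1e-10:
                break
            # shrink the step until the new point is strictly feasible
            step = 1.0
            while np.any(b - A @ (x + step * dx) <= 0):
                step *= 0.5
            # backtracking (Armijo) line search on the barrier objective
            f0 = t * (0.5 * x @ P @ x + q @ x) - np.sum(np.log(s))
            while True:
                xn = x + step * dx
                fn = t * (0.5 * xn @ P @ xn + q @ xn) - np.sum(np.log(b - A @ xn))
                if fn <= f0 - 0.25 * step * lam2:
                    break
                step *= 0.5
            x = x + step * dx
        t *= mu                                # tighten the barrier
    return x

# Toy problem: minimize ||x||^2 subject to x1 + x2 >= 1, x1 <= 2, x2 <= 2.
P = np.eye(2); q = np.zeros(2)
A = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-1.0, 2.0, 2.0])
x_star = barrier_method(P, q, A, b, x0=np.array([0.9, 0.9]))
print(x_star)   # approximately [0.5, 0.5]
```

The outer loop tightens the barrier by the factor mu, and the stopping rule based on m/t reflects the standard bound on the suboptimality of points along the central path.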
There is a large software ecosystem for convex optimization. Solvers implement the algorithms themselves and are usually written in C. They require users to specify optimization problems in very specific formats which may not be natural from a modeling perspective. Modeling tools are separate pieces of software that let the user specify an optimization problem in higher-level syntax; they manage all transformations between the user's high-level model and the solver's input/output format. Widely used tools include a mix of modeling tools (such as CVXPY and Convex.jl) and solvers (such as CVXOPT and MOSEK); a minimal modeling sketch in CVXPY is given at the end of this section.

Convex optimization can be used to model problems in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,[7]: 17  data analysis and modeling, finance, statistics (optimal experimental design),[21] and structural optimization, where the approximation concept has proven to be efficient.
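As a concrete illustration of the modeling-tool workflow described above, the following sketch states and solves a small constrained least-squares problem in CVXPY's high-level syntax; CVXPY transforms the model into a standard conic form and hands it to whichever compatible solver is installed. The problem data are synthetic and purely illustrative.

```python
import numpy as np
import cvxpy as cp

# Synthetic data for a small nonnegative least-squares problem on the simplex.
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective
constraints = [x >= 0, cp.sum(x) == 1]               # convex (affine) constraints

prob = cp.Problem(objective, constraints)
prob.solve()          # solver choice depends on what is installed

print(prob.status)    # e.g. "optimal"
print(prob.value)     # optimal objective value
print(x.value)        # optimal point
```

Because the solver choice is decoupled from the model, the same problem statement can be re-solved with a different installed solver without rewriting it, which is the main appeal of modeling layers over direct solver interfaces.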
Figure: graph of a strictly concave quadratic function with a unique maximum. Optimization computes maxima and minima.