Barrier function

In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem.[1][2] Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. A barrier function is also called an interior penalty function, as it is a penalty function that forces the solution to remain within the interior of the feasible region. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods.

Consider the following constrained optimization problem:

    \min_x f(x) \quad \text{subject to} \quad x \geq b,

where b is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as

    \min_x f(x) + c(x), \qquad c(x) = \begin{cases} \infty & x < b, \\ 0 & x \geq b. \end{cases}

This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c, and therefore the objective function f(x) + c(x), is discontinuous, preventing the use of calculus to solve it.

A barrier function, now, is a continuous approximation g to c that tends to infinity as x approaches b from above. Using such a function, a new optimization problem is formulated, viz.

    \min_x f(x) + \mu\, g(x, b),

where \mu > 0 is a free parameter. This problem is not equivalent to the original, but as \mu approaches zero, it becomes an ever-better approximation.

For the logarithmic barrier function, g(x, b) is defined as -\log(x - b) for x > b and \infty otherwise. This essentially relies on the fact that \log(t) tends to -\infty as t tends to 0. This introduces a gradient to the function being optimized which favors less extreme values of x (in this case, values larger than b), while having relatively low impact on the function away from these extremes. Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized.
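The following is a minimal sketch of this idea in Python. The objective f(x) = x^2, the constraint constant b = 1, the Newton-based inner solver, and the schedule by which \mu is shrunk are illustrative assumptions, not part of the article. The sketch minimizes the smooth barrier subproblem f(x) - \mu \log(x - b) for a decreasing sequence of \mu and shows the minimizer approaching the true constrained solution x = b.

```python
import math

# Illustrative logarithmic-barrier (interior penalty) method for the
# one-dimensional problem: minimize f(x) subject to x >= b.
# The barrier subproblem is  f(x) - mu * log(x - b),  solved for shrinking mu.

def f(x):            # example objective: unconstrained minimum at x = 0
    return x * x

def df(x):           # f'(x)
    return 2.0 * x

def d2f(x):          # f''(x)
    return 2.0

b = 1.0              # constraint x >= b; the constrained minimum sits at x = b

def newton_step(x, mu):
    # One Newton step on the smooth barrier subproblem (x stays strictly > b).
    grad = df(x) - mu / (x - b)
    hess = d2f(x) + mu / (x - b) ** 2
    return x - grad / hess

x = b + 1.0          # strictly feasible starting point
mu = 1.0
while mu > 1e-8:
    for _ in range(50):                 # solve the subproblem for this mu
        x_new = newton_step(x, mu)
        if x_new <= b:                  # damp the step if it would leave
            x_new = (x + b) / 2.0       # the interior of the feasible region
        if abs(x_new - x) < 1e-12:
            x = x_new
            break
        x = x_new
    print(f"mu = {mu:.1e}  ->  x = {x:.6f}")
    mu *= 0.1        # shrink mu: the subproblem minimizer approaches x = b

print("approximate constrained minimizer:", x)
```

Each subproblem is started from the solution of the previous one, so only a few Newton steps are needed per value of \mu; this warm-starting pattern is the basic mechanism behind path-following interior point methods.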