Affine scaling

E. R. Barnes at IBM,[2] a team led by R. J. Vanderbei at AT&T,[3] and several others replaced the projective transformations that Karmarkar used by affine ones. After a few years it was realized that the "new" affine scaling algorithms were in fact reinventions of the decades-old results of Dikin.[1][4] Of the re-discoverers, only Barnes and Vanderbei et al. managed to produce an analysis of affine scaling's convergence properties.

The affine scaling method is an interior point method, meaning that it forms a trajectory of points strictly inside the feasible region of a linear program (as opposed to the simplex algorithm, which walks the corners of the feasible region). The scaling ensures that the algorithm can continue to take large steps even when the point under consideration is close to the feasible region's boundary.[5]: 337 Formally, the iterative method at the heart of affine scaling takes as inputs A, b, c, an initial guess x0 > 0 that is strictly feasible (i.e., Ax0 = b), a tolerance ε and a stepsize β.
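As a rough illustration of one common statement of the iteration, the sketch below rescales by D_k = diag(x_k), estimates the dual variables, forms the reduced costs, and steps in the rescaled negative reduced-cost direction. It assumes the standard-form problem min cᵀx subject to Ax = b, x ≥ 0; the function name, default parameters, and the small example problem are illustrative, not taken from the cited sources.

```python
import numpy as np

def affine_scaling(A, b, c, x0, beta=0.5, eps=1e-8, max_iter=1000):
    """Illustrative sketch of the affine scaling iteration for
    min c^T x  subject to  Ax = b, x >= 0.

    x0 must be strictly feasible (A @ x0 == b and x0 > 0); beta in (0, 1)
    is the step size and eps the stopping tolerance.
    """
    x = np.array(x0, dtype=float)
    assert np.allclose(A @ x, b) and np.all(x > 0), "x0 must be strictly feasible"
    for _ in range(max_iter):
        D2 = np.diag(x ** 2)                              # D_k^2 with D_k = diag(x_k)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)     # dual estimate w_k
        r = c - A.T @ w                                   # reduced costs r_k
        if np.all(r >= 0) and x @ r < eps:                # x^T r estimates the duality gap
            return x                                      # near-optimal, stop
        Dr = x * r                                        # D_k r_k
        if np.all(Dr <= 0):
            raise ValueError("problem appears unbounded")
        x = x - beta * (x * Dr) / np.linalg.norm(Dr)      # x_{k+1} = x_k - beta * D_k^2 r_k / ||D_k r_k||
    return x

# Tiny illustrative example: min -2*x1 - x2  s.t.  x1 + x2 + s = 1, x >= 0
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-2.0, -1.0, 0.0])
print(affine_scaling(A, b, c, x0=[0.3, 0.3, 0.4]))        # converges toward x1 = 1
```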