Symmetric rank-one

The Symmetric Rank 1 (SR1) method is a quasi-Newton method to update the second derivative (Hessian) based on the derivatives (gradients) calculated at two points. It is a generalization of the secant method to multidimensional problems. This update maintains the symmetry of the matrix but does not guarantee that the update be positive definite.

The sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions, in theory; in practice, the approximate Hessians generated by the SR1 method show faster progress towards the true Hessian than do popular alternatives (BFGS or DFP), in preliminary numerical experiments.[1][2] The SR1 method has computational advantages for sparse or partially separable problems.[3]

A twice continuously differentiable function $x \mapsto f(x)$ with gradient $\nabla f$ and Hessian matrix $B$ has an expansion as a Taylor series at $x_0$, which can be truncated to

$f(x_0 + \Delta x) \approx f(x_0) + \nabla f(x_0)^T \Delta x + \frac{1}{2} \Delta x^T B \, \Delta x;$

its gradient has a Taylor-series approximation also,

$\nabla f(x_0 + \Delta x) \approx \nabla f(x_0) + B \, \Delta x,$

which is used to update $B$. This secant equation need not have a unique solution $B$. The SR1 formula computes (via an update of rank 1) the symmetric solution that is closest to the current approximate value $B_k$:

$B_{k+1} = B_k + \frac{(y_k - B_k \Delta x_k)(y_k - B_k \Delta x_k)^T}{(y_k - B_k \Delta x_k)^T \Delta x_k},$

where

$y_k = \nabla f(x_k + \Delta x_k) - \nabla f(x_k).$

The corresponding update to the approximate inverse Hessian $H_k = B_k^{-1}$ is

$H_{k+1} = H_k + \frac{(\Delta x_k - H_k y_k)(\Delta x_k - H_k y_k)^T}{(\Delta x_k - H_k y_k)^T y_k}.$

One might wonder why positive definiteness is not preserved; after all, a rank-1 update of the form $B_{k+1} = B_k + v v^T$ is positive definite whenever $B_k$ is. The explanation is that the update might instead be of the form $B_{k+1} = B_k - v v^T$, because the denominator can be negative, and in that case there are no guarantees about positive definiteness.

The SR1 formula has been rediscovered a number of times. Since the denominator can vanish, some authors have suggested that the update be applied only if

$\left| \Delta x_k^T (y_k - B_k \Delta x_k) \right| \geq r \, \| \Delta x_k \| \, \| y_k - B_k \Delta x_k \|,$

where $r \in (0, 1)$ is a small number, e.g. $r = 10^{-8}$.[4]

Limited memory

The SR1 update maintains a dense matrix, which can be prohibitive for large problems. Similar to the L-BFGS method, a limited-memory SR1 (L-SR1) algorithm exists.[5] Instead of storing the full Hessian approximation, an L-SR1 method stores only the $m$ most recent pairs $\{(\Delta x_i, y_i)\}$, where $m$ is an integer much smaller than the problem size ($m \ll n$). The limited-memory matrix is based on a compact matrix representation

$B_k = B_0 + J_k M_k^{-1} J_k^T, \qquad J_k = Y_k - B_0 S_k, \qquad M_k = D_k + L_k + L_k^T - S_k^T B_0 S_k,$

where $S_k$ and $Y_k$ are the matrices whose columns are the stored pairs $\Delta x_i$ and $y_i$, and $D_k$ and $L_k$ are the diagonal and strictly lower-triangular parts of $S_k^T Y_k$. Since the update can be indefinite, the L-SR1 algorithm is suitable for a trust-region strategy. Because of the limited-memory matrix, the trust-region L-SR1 algorithm scales linearly with the problem size, just like L-BFGS.
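
The update formulas and the safeguard above are short enough to state directly in code. The following is a minimal NumPy sketch, not a reference implementation: the names sr1_update and sr1_inverse_update, the threshold default, and the quadratic test problem are illustrative choices. It also demonstrates the classical finite-termination property: on a quadratic, where gradient differences satisfy $y = A \, \Delta x$ exactly, $n$ generic steps recover the true Hessian.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    # One SR1 update of the Hessian approximation B from a step s and
    # gradient difference y. The update is skipped when the denominator
    # nearly vanishes, following the safeguard described above.
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # skip the update
    return B + np.outer(v, v) / denom

def sr1_inverse_update(H, s, y, r=1e-8):
    # The dual SR1 update for the inverse-Hessian approximation H.
    w = s - H @ y
    denom = w @ y
    if abs(denom) < r * np.linalg.norm(y) * np.linalg.norm(w):
        return H
    return H + np.outer(w, w) / denom

# On a quadratic f(x) = 1/2 x^T A x, gradient differences are exact (y = A s),
# so n linearly independent steps recover the true Hessian.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = A + A.T            # symmetric, possibly indefinite: SR1 can still track it
B = np.eye(n)
for _ in range(n):
    s = rng.standard_normal(n)
    B = sr1_update(B, s, A @ s)
print(np.allclose(B, A))   # expected: True for generic steps
```

Note that, unlike BFGS, no curvature condition $y^T s > 0$ is enforced; the safeguard only guards against a vanishing denominator, which is why the resulting approximation can be indefinite.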
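
The compact representation can be sketched in the same way. The snippet below is again an illustration under assumptions (the helper name sr1_compact, the choice $B_0 = I$, and the random test data are not from the L-SR1 literature): it builds $B_k$ from the stored factors and checks it against $k$ successive rank-one updates.

```python
import numpy as np

def sr1_compact(B0, S, Y):
    # Compact form B_k = B0 + J M^{-1} J^T with J = Y - B0 S and
    # M = D + L + L^T - S^T B0 S, where D and L are the diagonal and
    # strictly lower-triangular parts of S^T Y.
    J = Y - B0 @ S
    SY = S.T @ Y
    L = np.tril(SY, -1)
    M = np.diag(np.diag(SY)) + L + L.T - S.T @ B0 @ S
    return B0 + J @ np.linalg.solve(M, J.T)

# Verify against the recursive updates on random (generic) data.
rng = np.random.default_rng(1)
n, k = 8, 3
B0 = np.eye(n)
S = rng.standard_normal((n, k))
Y = rng.standard_normal((n, k))

B = B0.copy()
for i in range(k):
    v = Y[:, i] - B @ S[:, i]
    B = B + np.outer(v, v) / (v @ S[:, i])  # plain SR1, assuming no skips

print(np.allclose(B, sr1_compact(B0, S, Y)))  # True when all updates are well defined
```

Only the $n \times m$ factors and the small $m \times m$ middle matrix need to be stored, so matrix-vector products with $B_k$, and hence the trust-region subproblem solves built on them, cost time linear in the problem size.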