Davidon–Fletcher–Powell formula

The Davidon–Fletcher–Powell formula (or DFP; named after William C. Davidon, Roger Fletcher, and Michael J. D. Powell) finds the solution to the secant equation that is closest to the current estimate and satisfies the curvature condition. It was the first quasi-Newton method to generalize the secant method to a multidimensional problem. The update maintains the symmetry and positive definiteness of the Hessian approximation.

Given a function $f(x)$, its gradient ($\nabla f$), and a positive-definite Hessian matrix $B$, the Taylor series is

$$f(x_k + s_k) = f(x_k) + \nabla f(x_k)^T s_k + \frac{1}{2} s_k^T B \, s_k + \dots,$$

and the Taylor series of the gradient itself (the secant equation),

$$\nabla f(x_k + s_k) = \nabla f(x_k) + B \, s_k + \dots,$$

is used to update $B$.

The DFP formula finds a solution that is symmetric, positive-definite, and closest to the current approximate value of $B_k$:

$$B_{k+1} = \left(I - \gamma_k y_k s_k^T\right) B_k \left(I - \gamma_k s_k y_k^T\right) + \gamma_k y_k y_k^T,$$

where

$$y_k = \nabla f(x_k + s_k) - \nabla f(x_k), \qquad \gamma_k = \frac{1}{y_k^T s_k},$$

and $B_k$ is a symmetric and positive-definite matrix. The corresponding update to the inverse Hessian approximation $H_k = B_k^{-1}$ is

$$H_{k+1} = H_k - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \frac{s_k s_k^T}{y_k^T s_k}.$$

$B$ is assumed to be positive-definite, and the vectors $s_k$ and $y_k$ must satisfy the curvature condition

$$s_k^T y_k = s_k^T B \, s_k > 0.$$

The DFP formula is quite effective, but it was soon superseded by the Broyden–Fletcher–Goldfarb–Shanno formula, which is its dual (interchanging the roles of $y$ and $s$).[1]

By unwinding the matrix recurrence for $B_k$, the DFP formula can be expressed as a compact matrix representation. Specifically, defining

$$S_k = \begin{bmatrix} s_0 & s_1 & \ldots & s_{k-1} \end{bmatrix}, \qquad Y_k = \begin{bmatrix} y_0 & y_1 & \ldots & y_{k-1} \end{bmatrix},$$

and upper triangular and diagonal matrices

$$\big(R_k\big)_{ij} := \big(R_k^{\text{SY}}\big)_{ij} = s_{i-1}^T y_{j-1}, \quad \big(R_k^{\text{YS}}\big)_{ij} = y_{i-1}^T s_{j-1}, \quad \big(D_k\big)_{ii} := \big(D_k^{\text{SY}}\big)_{ii} = s_{i-1}^T y_{i-1} \qquad \text{for } 1 \leq i \leq j \leq k,$$

the DFP matrix can be written in the compact form $B_k = B_0 + J_k N_k^{-1} J_k^T$, in which the block matrix $J_k$ collects the columns of $Y_k$ and $B_0 S_k$ and

$$N_k = \begin{bmatrix} 0_{k\times k} & R_k^{\text{YS}} \\ \big(R_k^{\text{YS}}\big)^T & R_k + R_k^T - \big(D_k + S_k^T B_0 S_k\big) \end{bmatrix}.$$

The inverse compact representation can be found by applying the Sherman-Morrison-Woodbury inverse to $B_k$. The compact representation is particularly useful for limited-memory and constrained problems.
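As a concrete illustration of the two update formulas above, the following minimal NumPy sketch (not part of the original article; the helper names dfp_update and dfp_inverse_update and the small quadratic test problem are chosen here purely for illustration) performs a single DFP step and checks the secant equation, its inverse counterpart, and positive definiteness.

    import numpy as np

    def dfp_update(B, s, y):
        # One DFP update of the Hessian approximation B from the step s and
        # gradient difference y; requires the curvature condition y^T s > 0.
        gamma = 1.0 / (y @ s)
        I = np.eye(len(s))
        return (I - gamma * np.outer(y, s)) @ B @ (I - gamma * np.outer(s, y)) \
            + gamma * np.outer(y, y)

    def dfp_inverse_update(H, s, y):
        # Corresponding DFP update of the inverse Hessian approximation H.
        return H - (H @ np.outer(y, y) @ H) / (y @ H @ y) + np.outer(s, s) / (y @ s)

    # Quadratic test function f(x) = 1/2 x^T A x - b^T x, so grad f(x) = A x - b.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    grad = lambda x: A @ x - b

    x0 = np.zeros(2)
    x1 = np.array([0.3, 0.5])                   # an arbitrary illustrative step
    s, y = x1 - x0, grad(x1) - grad(x0)

    B1 = dfp_update(np.eye(2), s, y)
    H1 = dfp_inverse_update(np.eye(2), s, y)

    print(np.allclose(B1 @ s, y))               # secant equation: B_{k+1} s_k = y_k
    print(np.allclose(H1 @ y, s))               # inverse form: H_{k+1} y_k = s_k
    print(np.all(np.linalg.eigvalsh(B1) > 0))   # positive definiteness is preserved

Because $y_k = A s_k$ for a quadratic with positive-definite $A$, the curvature condition $y_k^T s_k > 0$ holds automatically here, which is what guarantees that the updated matrices stay positive-definite.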
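The compact representation can be checked numerically in the same spirit. The sketch below is likewise illustrative rather than canonical: it assembles $J_k$ from $Y_k$ and $B_0 S_k$ and builds the middle matrix in one standard convention (the dual of the Byrd–Nocedal–Schnabel compact BFGS form), which may differ by a sign and block convention from the $N_k$ quoted above; the helper dfp_compact and the random test data are assumptions of this sketch.

    import numpy as np

    def dfp_update(B, s, y):
        # One step of the DFP recurrence (same update as in the previous sketch).
        gamma = 1.0 / (y @ s)
        I = np.eye(len(s))
        return (I - gamma * np.outer(y, s)) @ B @ (I - gamma * np.outer(s, y)) \
            + gamma * np.outer(y, y)

    def dfp_compact(B0, S, Y):
        # Unrolled DFP matrix B_k = B0 + J_k N_k^{-1} J_k^T with J_k = [Y_k, B0 S_k].
        # Middle-matrix convention used here (dual of the BFGS compact form):
        # N_k = -[[0, T], [T^T, D_k + S_k^T B0 S_k]], where T = triu(Y_k^T S_k).
        k = S.shape[1]
        T = np.triu(Y.T @ S)                          # upper triangular R_k^{YS}
        E = np.diag(np.diag(S.T @ Y)) + S.T @ B0 @ S  # D_k + S_k^T B0 S_k
        J = np.hstack([Y, B0 @ S])
        N = -np.block([[np.zeros((k, k)), T], [T.T, E]])
        return B0 + J @ np.linalg.solve(N, J.T)

    rng = np.random.default_rng(0)
    n, k = 6, 4
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                 # SPD model Hessian of a quadratic
    B0 = np.eye(n)

    S = rng.standard_normal((n, k))             # steps s_0, ..., s_{k-1}
    Y = A @ S                                   # curvature pairs y_i = A s_i

    B = B0.copy()
    for i in range(k):
        B = dfp_update(B, S[:, i], Y[:, i])     # recursive DFP updates

    print(np.allclose(B, dfp_compact(B0, S, Y)))  # unrolled form matches the recursion

Unrolling the recurrence this way is what makes limited-memory variants practical: only a few recent columns of $S_k$ and $Y_k$ need to be stored, and $B_k$ is applied through solves with a small $2k \times 2k$ middle matrix.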
See also

Newton's method
Newton's method in optimization
Broyden–Fletcher–Goldfarb–Shanno (BFGS) method
Limited-memory BFGS method
Symmetric rank-one formula
Nelder–Mead method
Compact quasi-Newton representation