In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.[1] Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.

Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints.[3] The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems.

The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.

The algorithm begins at an initial estimate $\mathbf{x}_0$ for the optimal value and proceeds iteratively to get a better estimate at each stage. The search direction $\mathbf{p}_k$ at stage $k$ is given by the solution of the analogue of the Newton equation

$$B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),$$

where $B_k$ is an approximation to the Hessian matrix at $\mathbf{x}_k$, updated iteratively at each stage, and $\nabla f(\mathbf{x}_k)$ is the gradient of the function evaluated at $\mathbf{x}_k$. A line search in the direction $\mathbf{p}_k$ is then used to find the next point $\mathbf{x}_{k+1}$ by minimizing $f(\mathbf{x}_k + \gamma \mathbf{p}_k)$ over the scalar $\gamma > 0$.

The quasi-Newton condition imposed on the update of $B_k$ is the secant equation

$$B_{k+1} \mathbf{s}_k = \mathbf{y}_k,$$

where $\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$ and $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$. The curvature condition $\mathbf{s}_k^\top \mathbf{y}_k > 0$ should be satisfied for $B_{k+1}$ to be positive definite, which can be verified by pre-multiplying the secant equation with $\mathbf{s}_k^\top$. If the function is not strongly convex, then the condition has to be enforced explicitly, e.g. by finding a point $\mathbf{x}_{k+1}$ satisfying the Wolfe conditions, which entail the curvature condition, using line search.

Instead of requiring the full Hessian matrix to be computed at the point $\mathbf{x}_{k+1}$, the approximate Hessian at stage $k$ is updated by the addition of two matrices:

$$B_{k+1} = B_k + U_k + V_k.$$

Both $U_k$ and $V_k$ are symmetric rank-one matrices, but their sum is a rank-two update matrix. In order to maintain the symmetry and positive definiteness of $B_{k+1}$, these matrices are chosen as scaled outer products of $\mathbf{y}_k$ and $B_k \mathbf{s}_k$; imposing the secant condition then yields the BFGS update

$$B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^\top B_k}{\mathbf{s}_k^\top B_k \mathbf{s}_k}.$$

The first step of the algorithm (solving for the search direction $\mathbf{p}_k$) is carried out using the inverse of the matrix $B_k$, which can be obtained efficiently by applying the Sherman–Morrison formula to the update above, giving

$$B_{k+1}^{-1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) B_k^{-1} \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^\top}{\mathbf{y}_k^\top \mathbf{s}_k}.$$

This can be computed efficiently without temporary matrices, recognizing that $\mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k$ and $\mathbf{s}_k^\top \mathbf{y}_k$ are scalars, using an expansion such as

$$B_{k+1}^{-1} = B_k^{-1} + \frac{(\mathbf{s}_k^\top \mathbf{y}_k + \mathbf{y}_k^\top B_k^{-1} \mathbf{y}_k)\, \mathbf{s}_k \mathbf{s}_k^\top}{(\mathbf{s}_k^\top \mathbf{y}_k)^2} - \frac{B_k^{-1} \mathbf{y}_k \mathbf{s}_k^\top + \mathbf{s}_k \mathbf{y}_k^\top B_k^{-1}}{\mathbf{s}_k^\top \mathbf{y}_k}.$$

Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: $H_k := B_k^{-1}$. From an initial guess $\mathbf{x}_0$ and an initial approximate inverse Hessian $H_0$, the following steps are repeated as $\mathbf{x}_k$ converges to the solution:

1. Obtain the search direction $\mathbf{p}_k = -H_k \nabla f(\mathbf{x}_k)$.
2. Perform a line search to find an acceptable stepsize $\alpha_k$ in the direction $\mathbf{p}_k$, e.g. one satisfying the Wolfe conditions.
3. Set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$ and update $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
4. Set $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
5. Update $H_{k+1}$ from $H_k$, $\mathbf{s}_k$ and $\mathbf{y}_k$ using the inverse update formula above.

If $H_0 = I$, the first step is equivalent to a gradient descent step, and subsequent steps are increasingly refined by the approximation. Convergence can be monitored through the norm of the gradient, stopping once $\lVert \nabla f(\mathbf{x}_k) \rVert \le \epsilon$ for some tolerance $\epsilon > 0$.

In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix [citation needed]. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.[10]

The BFGS update formula relies heavily on the curvature $\mathbf{s}_k^\top \mathbf{y}_k$ being strictly positive and bounded away from zero. However, some real-life applications (like Sequential Quadratic Programming methods) routinely produce negative or nearly zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. It is also possible to produce spurious values due to noise in the target. In such cases, one of the so-called damped BFGS updates can be used (see [11]), which modify $\mathbf{s}_k$ and/or $\mathbf{y}_k$ in order to obtain a more robust update.
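To make the iteration listed above concrete, here is a minimal NumPy sketch of the inverse-Hessian form of BFGS. It is illustrative rather than a reference implementation: the names `bfgs`, `rosen`, and `rosen_grad` are made up for the example, the line search is a simple Armijo backtracking rather than a full Wolfe search, and the curvature condition is handled by simply skipping the update whenever $\mathbf{s}_k^\top \mathbf{y}_k$ is not sufficiently positive (the situation the damped updates above are designed to handle more gracefully).

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=500):
    """Minimal BFGS sketch maintaining the inverse-Hessian approximation H_k."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    I = np.eye(n)
    H = I.copy()                 # H_0 = I: the first step is a plain gradient step
    g = grad(x)

    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # stop once the gradient is small
            break
        p = -H @ g                            # step 1: p_k = -H_k grad f(x_k)

        # Step 2: backtracking line search enforcing only the Armijo condition;
        # a full Wolfe search would also enforce the curvature condition.
        alpha, c1, shrink = 1.0, 1e-4, 0.5
        fx, slope = f(x), g @ p
        while f(x + alpha * p) > fx + c1 * alpha * slope and alpha > 1e-12:
            alpha *= shrink

        s = alpha * p                         # step 3: s_k and x_{k+1}
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                         # step 4: y_k

        sy = s @ y
        if sy > 1e-12:                        # update only when curvature is positive
            rho = 1.0 / sy
            # Step 5: Sherman–Morrison form of the inverse update,
            # H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)

        x, g = x_new, g_new

    return x

# Example: minimize the Rosenbrock function, whose minimum is at (1, 1).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(bfgs(rosen, rosen_grad, [-1.2, 1.0]))   # approaches [1, 1]
```

In practice one would normally call a library routine such as SciPy's `scipy.optimize.minimize(..., method='BFGS')` rather than hand-rolling the update; the sketch is only meant to mirror the steps listed above.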
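As a companion sketch of the damped updates mentioned in the last paragraph, the following implements a Powell-style damping commonly used in the SQP literature; it is an illustrative choice and not necessarily the exact scheme of reference [11]. The idea is to replace $\mathbf{y}_k$ by an interpolated vector $\mathbf{r}_k = \theta \mathbf{y}_k + (1 - \theta) B_k \mathbf{s}_k$ so that the measured curvature $\mathbf{s}_k^\top \mathbf{r}_k$ stays safely positive, and then apply the ordinary update with $\mathbf{r}_k$ in place of $\mathbf{y}_k$. The function name and the threshold value 0.2 are conventional but illustrative.

```python
import numpy as np

def damped_bfgs_update(B, s, y, threshold=0.2):
    """Powell-style damped BFGS update of the Hessian approximation B.

    Replaces y by r = theta*y + (1 - theta)*B s, with theta chosen so that
    s^T r >= threshold * s^T B s, then applies the ordinary BFGS update.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y

    # Choose the damping factor theta in (0, 1].
    if sy >= threshold * sBs:
        theta = 1.0                            # enough curvature: plain BFGS update
    else:
        theta = (1.0 - threshold) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs         # damped secant vector, s^T r > 0

    # Standard rank-two BFGS update with r in place of y.
    return B + np.outer(r, r) / (s @ r) - np.outer(Bs, Bs) / sBs
```

With $\theta = 1$ this reduces to the undamped update; otherwise the interpolation guarantees $\mathbf{s}_k^\top \mathbf{r}_k = 0.2\, \mathbf{s}_k^\top B_k \mathbf{s}_k > 0$, so the updated matrix remains positive definite even when the raw curvature $\mathbf{s}_k^\top \mathbf{y}_k$ is negative or nearly zero.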