bfgs_method

odl.solvers.smooth.newton.bfgs_method(f, x, line_search=1.0, maxiter=1000, tol=1e-15, num_store=None, hessinv_estimate=None, callback=None)

Quasi-Newton BFGS method to minimize a differentiable function.

Parameters:
f : Functional

Functional with f.gradient.

x : f.domain element

Starting point of the iteration.

line_search : float or LineSearch, optional

Strategy to choose the step length. If a float is given, it is used as a fixed step length.

maxiter : int, optional

Maximum number of iterations.

tol : float, optional

Tolerance used to terminate the iteration.

num_store : int, optional

Maximum number of correction factors to store. If None, the regular BFGS method is used. If an integer is given, the limited-memory BFGS (L-BFGS) method is used instead.

hessinv_estimate : Operator, optional

Initial estimate of the inverse of the Hessian operator. Needs to be an operator from f.domain to f.domain. Default: identity operator on f.domain.

callback : callable, optional

Object executing code per iteration, e.g. plotting each iterate.
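A minimal usage sketch (assuming odl is installed; the quadratic test functional, the shift b, and the solver parameters below are illustrative choices, not prescribed by this page):

import odl

space = odl.rn(3)
b = space.element([1.0, 2.0, 3.0])

# f(x) = ||x - b||_2^2 is smooth, with known minimizer x = b
f = odl.solvers.L2NormSquared(space).translated(b)

x = space.zero()  # starting point; the solver updates it in place
line_search = odl.solvers.BacktrackingLineSearch(f)

odl.solvers.bfgs_method(f, x, line_search=line_search,
                        maxiter=50, tol=1e-8)
# After the call, x should be close to b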

Notes

Can use either the regular BFGS method or the limited-memory BFGS (L-BFGS) method, depending on num_store; see the example below.
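For instance, reusing f, x, and line_search from the sketch above, passing an integer num_store (the value 10 here is purely illustrative) selects the limited-memory variant:

odl.solvers.bfgs_method(f, x, line_search=line_search,
                        maxiter=50, num_store=10)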

This is a general and optimized implementation of a quasi-Newton method with BFGS update for solving the unconstrained optimization problem

\min f(x)

for a differentiable function f: \mathcal{X}\to \mathbb{R} on a Hilbert space \mathcal{X}. It does so by finding a zero of the gradient

\nabla f: \mathcal{X} \to \mathcal{X}.

A quasi-Newton method is an approximate Newton method in which the Hessian is approximated and gradually updated in each step. This implementation uses the BFGS update scheme, a rank-two correction under which the estimate of the inverse Hessian is updated in each iteration rather than recomputed from scratch.
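For reference, the textbook form of the BFGS update of the inverse Hessian estimate H_k (the standard formula in \mathbb{R}^n notation, not quoted from this page) is

H_{k+1} = \left(I - \rho_k s_k y_k^T\right) H_k \left(I - \rho_k y_k s_k^T\right) + \rho_k s_k s_k^T,
\qquad \rho_k = \frac{1}{y_k^T s_k},

where s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k); in the Hilbert space setting the outer products become the corresponding rank-one operators.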

The algorithm is described in [GNS2009], Section 12.3 and in the BFGS Wikipedia article.

References

[GNS2009] Griva, I., Nash, S. G., and Sofer, A. Linear and Nonlinear Optimization. SIAM, 2009.