NumericalGradient¶

class odl.solvers.functional.derivatives.NumericalGradient(*args, **kwargs)[source]¶

Bases: odl.operator.operator.Operator

The gradient of a Functional computed by finite differences.

See also

NumericalDerivative
    Compute the directional derivative.
Attributes

adjoint
    Adjoint of this operator (abstract).
domain
    Set of objects on which this operator can be evaluated.
inverse
    Return the operator inverse.
is_functional
    True if this operator's range is a Field.
is_linear
    True if this operator is linear.
range
    Set in which the result of an evaluation of this operator lies.
Methods

_call(self, x)
    Return self(x).
derivative(self, point)
    Return the derivative in point.
norm(self[, estimate])
    Return the operator norm of this operator.
__init__(self, functional, method='forward', step=None)[source]¶

Initialize a new instance.

Parameters

functional : Functional
    The functional whose gradient should be computed. Its domain must be a TensorSpace.
method : {'backward', 'forward', 'central'}, optional
    The method to use to compute the gradient.
step : float, optional
    The step length used in the derivative computation. Default: selects the step according to the dtype of the space.
Notes

If the functional is f and the step size is h, the gradient is computed as follows.

method='backward':
    (\nabla f(x))_i = \frac{f(x) - f(x - h e_i)}{h}
method='forward':
    (\nabla f(x))_i = \frac{f(x + h e_i) - f(x)}{h}
method='central':
    (\nabla f(x))_i = \frac{f(x + (h/2) e_i) - f(x - (h/2) e_i)}{h}
The number of function evaluations is functional.domain.size + 1 if 'backward' or 'forward' is used, and 2 * functional.domain.size if 'central' is used. On large domains this will be computationally infeasible.
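The evaluation counts can be checked with a small call-counting wrapper. This is a plain-NumPy sketch, not ODL itself; reusing the single f(x) evaluation for the one-sided schemes is what makes their cost size + 1 rather than 2 * size:

```python
import numpy as np

def grad(f, x, method='forward', h=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    # One-sided schemes evaluate f(x) once and reuse it for every axis.
    fx = f(x) if method in ('forward', 'backward') else None
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        if method == 'forward':
            g[i] = (f(x + h * e) - fx) / h
        elif method == 'backward':
            g[i] = (fx - f(x - h * e)) / h
        else:  # 'central': two fresh evaluations per axis
            g[i] = (f(x + h / 2 * e) - f(x - h / 2 * e)) / h
    return g

calls = []
def f(x):
    calls.append(1)          # count every evaluation
    return np.sum(x ** 2)

grad(f, [1.0, 1.0, 1.0], method='forward')
print(len(calls))            # size + 1 = 4
calls.clear()
grad(f, [1.0, 1.0, 1.0], method='central')
print(len(calls))            # 2 * size = 6
```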
Examples

>>> space = odl.rn(3)
>>> func = odl.solvers.L2NormSquared(space)
>>> grad = NumericalGradient(func)
>>> grad([1, 1, 1])
rn(3).element([ 2.,  2.,  2.])
The gradient gives the correct value with a sufficiently small step size:

>>> grad([1, 1, 1]) == func.gradient([1, 1, 1])
True
If the step is too large, the result is not correct:

>>> grad = NumericalGradient(func, step=0.5)
>>> grad([1, 1, 1])
rn(3).element([ 2.5,  2.5,  2.5])
But it can be improved by using the more accurate method='central':

>>> grad = NumericalGradient(func, method='central', step=0.5)
>>> grad([1, 1, 1])
rn(3).element([ 2.,  2.,  2.])
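The error pattern in these examples follows directly from the formulas. For f(x) = ||x||^2, the forward difference is ((x_i + h)^2 - x_i^2) / h = 2 x_i + h, so its error equals the step h (0.5 above), while the central difference is exact for quadratics. A quick one-dimensional check in plain Python (not ODL):

```python
f = lambda t: t ** 2   # one coordinate of the squared norm
x, h = 1.0, 0.5

forward = (f(x + h) - f(x)) / h              # 2*x + h = 2.5
central = (f(x + h / 2) - f(x - h / 2)) / h  # exact for quadratics: 2.0
print(forward, central)                      # 2.5 2.0
```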