Gradient¶

class odl.discr.diff_ops.Gradient(*args, **kwargs)[source]¶

Bases: odl.operator.tensor_ops.PointwiseTensorFieldOperator
Spatial gradient operator for DiscretizedSpace spaces.

Calls the helper function finite_diff to calculate each component of the resulting product space element. For the adjoint of the Gradient operator, zero padding is assumed to match the negative Divergence operator.
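A minimal sketch of the per-axis computation, assuming finite_diff accepts the arguments shown here (axis, dx, method, and the padding options documented below): with forward differences and zero padding, the missing neighbor past the boundary is taken as pad_const.

>>> import numpy as np
>>> from odl.discr.diff_ops import finite_diff
>>> f = np.array([0., 1., 2., 3., 4.])
>>> df = finite_diff(f, axis=0, dx=1.0, method='forward',
...                  pad_mode='constant', pad_const=0)
>>> # Expected: [1., 1., 1., 1., -4.]; the last entry is (0 - 4) / 1,
>>> # with the neighbor beyond the boundary padded by 0.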
Attributes
adjoint
    Adjoint of this operator.
base_space
    Base space X of this operator's domain and range.
domain
    Set of objects on which this operator can be evaluated.
inverse
    Return the operator inverse.
is_functional
    True if this operator's range is a Field.
is_linear
    True if this operator is linear.
range
    Set in which the result of an evaluation of this operator lies.
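For instance, a quick sketch of a few of these attributes, using the same construction as the Examples below:

>>> import odl
>>> grad = odl.Gradient(odl.uniform_discr([0, 0], [1, 1], (10, 20)))
>>> grad.is_linear
True
>>> grad.range == odl.ProductSpace(grad.domain, 2)
True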
Methods

_call(self, x[, out])
    Calculate the spatial gradient of x.
derivative(self[, point])
    Return the derivative operator.
norm(self[, estimate])
    Return the operator norm of this operator.
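Since the operator is linear, derivative is expected to return the operator itself, and norm(estimate=True) gives a numerical estimate. A sketch, reusing the grad instance from the snippet above:

>>> deriv = grad.derivative()  # for a linear operator: the operator itself
>>> norm_est = grad.norm(estimate=True)  # numerical operator-norm estimate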
__init__(self, domain=None, range=None, method='forward', pad_mode='constant', pad_const=0)[source]¶

Initialize a new instance.
Zero padding is assumed for the adjoint of the Gradient operator to match the negative Divergence operator.

Parameters
domain : DiscretizedSpace, optional
    Space of elements which the operator acts on. This is required if range is not given.
range : power space of DiscretizedSpace, optional
    Space of elements to which the operator maps. This is required if domain is not given.
method : {'forward', 'backward', 'central'}, optional
    Finite difference method to be used.
pad_mode : string, optional
    The padding mode to use outside the domain.

    'constant': Fill with pad_const.

    'symmetric': Reflect at the boundaries, not doubling the outmost values.

    'periodic': Fill in values from the other side, keeping the order.

    'order0': Extend constantly with the outmost values (ensures continuity).

    'order1': Extend with constant slope (ensures continuity of the first derivative). This requires at least 2 values along each axis where padding is applied.

    'order2': Extend with second order accuracy (ensures continuity of the second derivative). This requires at least 3 values along each axis.
pad_const : float, optional
    For pad_mode == 'constant', f assumes pad_const for indices outside the domain of f.
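A hedged sketch of how pad_mode changes the boundary behavior, reusing the data from the Examples below (and assuming element entries can be indexed like NumPy arrays):

>>> import odl
>>> discr = odl.uniform_discr([0, 0], [2, 5], (2, 5))
>>> f = discr.element([[0., 1., 2., 3., 4.],
...                    [0., 2., 4., 6., 8.]])
>>> g_const = odl.Gradient(discr, pad_mode='constant')(f)  # pad with 0
>>> g_order0 = odl.Gradient(discr, pad_mode='order0')(f)   # repeat edge value
>>> b_const = g_const[1][0, -1]   # (0 - 4) / 1 = -4: padded neighbor is 0
>>> b_order0 = g_order0[1][0, -1]  # (4 - 4) / 1 = 0: edge value repeated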
Examples
Creating a Gradient operator:
>>> dom = odl.uniform_discr([0, 0], [1, 1], (10, 20))
>>> ran = odl.ProductSpace(dom, dom.ndim)  # 2-dimensional
>>> grad_op = Gradient(dom)
>>> grad_op.range == ran
True
>>> grad_op2 = Gradient(range=ran)
>>> grad_op2.domain == dom
True
>>> grad_op3 = Gradient(domain=dom, range=ran)
>>> grad_op3.domain == dom
True
>>> grad_op3.range == ran
True
Calling the operator:
>>> data = np.array([[ 0., 1., 2., 3., 4.],
...                  [ 0., 2., 4., 6., 8.]])
>>> discr = odl.uniform_discr([0, 0], [2, 5], data.shape)
>>> f = discr.element(data)
>>> grad = Gradient(discr)
>>> grad_f = grad(f)
>>> grad_f[0]
uniform_discr([ 0., 0.], [ 2., 5.], (2, 5)).element(
    [[ 0., 1., 2., 3., 4.],
     [ 0., -2., -4., -6., -8.]]
)
>>> grad_f[1]
uniform_discr([ 0., 0.], [ 2., 5.], (2, 5)).element(
    [[ 1., 1., 1., 1., -4.],
     [ 2., 2., 2., 2., -8.]]
)

The boundary entries come from the zero padding: the last forward difference along each axis is (0 - f[last]) / dx.
Verify adjoint:
>>> g = grad.range.element((data, data ** 2))
>>> adj_g = grad.adjoint(g)
>>> adj_g
uniform_discr([ 0., 0.], [ 2., 5.], (2, 5)).element(
    [[ 0., -2., -5., -8., -11.],
     [ 0., -5., -14., -23., -32.]]
)
>>> g.inner(grad_f) / f.inner(adj_g)
1.0
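Because of the zero-padding convention, the adjoint should also agree with the negative Divergence operator. A hedged sketch, assuming odl.Divergence accepts these arguments and that backward differences form the adjoint of forward ones:

>>> div = odl.Divergence(domain=grad.range, method='backward',
...                      pad_mode='constant')
>>> residual = grad.adjoint(g) + div(g)  # expected to be (numerically) zero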