Divergence

class odl.discr.diff_ops.Divergence(*args, **kwargs)

Bases: odl.operator.tensor_ops.PointwiseTensorFieldOperator

Divergence operator for DiscretizedSpace spaces.

Calls the helper function finite_diff for each component of the input product space vector. For the adjoint of the Divergence operator to match the negative Gradient operator, implicit zero padding is assumed.
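
For illustration only (not part of the reference; arr, space and div below are ad hoc names), the operator's action can be reproduced by summing finite_diff over the components, one call per axis:

>>> import numpy as np
>>> import odl
>>> from odl.discr.diff_ops import Divergence, finite_diff
>>> arr = np.array([[0., 1., 2.],
...                 [3., 4., 5.]])
>>> space = odl.uniform_discr([0, 0], [2, 3], (2, 3))  # cell size 1 per axis
>>> div = Divergence(odl.ProductSpace(space, 2))
>>> # Sum of forward differences of each component along its own axis,
>>> # with implicit zero padding outside the domain:
>>> by_hand = (finite_diff(arr, axis=0, method='forward',
...                        pad_mode='constant', pad_const=0)
...            + finite_diff(arr, axis=1, method='forward',
...                          pad_mode='constant', pad_const=0))
>>> np.allclose(div([arr, arr]).asarray(), by_hand)
True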

Attributes
adjoint

Adjoint of this operator.

base_space

Base space X of this operator’s domain and range.

domain

Set of objects on which this operator can be evaluated.

inverse

Return the operator inverse.

is_functional

True if this operator’s range is a Field.

is_linear

True if this operator is linear.

range

Set in which the result of an evaluation of this operator lies.
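
A quick illustration of these attributes (ad hoc names, not part of the reference):

>>> import odl
>>> from odl.discr.diff_ops import Divergence
>>> space = odl.uniform_discr([0, 0], [1, 1], (4, 4))
>>> div = Divergence(domain=odl.ProductSpace(space, 2))
>>> div.is_linear
True
>>> div.is_functional   # the range is a DiscretizedSpace, not a Field
False
>>> div.base_space == space
True
>>> div.adjoint.domain == div.range
True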

Methods

_call(self, x[, out])

Calculate the divergence of x.

derivative(self[, point])

Return the derivative operator.

norm(self[, estimate])

Return the operator norm of this operator.
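
A brief sketch of how these methods are typically used (ad hoc names; the norm estimate is only bound to a variable since its value depends on the discretization):

>>> import odl
>>> from odl.discr.diff_ops import Divergence
>>> space = odl.uniform_discr([0, 0], [1, 1], (4, 4))
>>> div = Divergence(domain=odl.ProductSpace(space, 2))
>>> deriv = div.derivative()            # linear operator: the derivative is the operator itself
>>> norm_est = div.norm(estimate=True)  # iterative estimate of the operator norm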

__init__(self, domain=None, range=None, method='forward', pad_mode='constant', pad_const=0)

Initialize a new instance.

Zero padding is assumed for the adjoint of the Divergence operator to match the negative Gradient operator.

Parameters
domain : power space of DiscretizedSpace, optional

Space of elements which the operator acts on. This is required if range is not given.

range : DiscretizedSpace, optional

Space of elements to which the operator maps. This is required if domain is not given.

method : {'forward', 'backward', 'central'}, optional

Finite difference method to be used.

pad_mode : string, optional

The padding mode to use outside the domain (see the illustration after this parameter list).

'constant': Fill with pad_const.

'symmetric': Reflect at the boundaries, not doubling the outmost values.

'periodic': Fill in values from the other side, keeping the order.

'order0': Extend constantly with the outmost values (ensures continuity).

'order1': Extend with constant slope (ensures continuity of the first derivative). This requires at least 2 values along each axis.

'order2': Extend with second order accuracy (ensures continuity of the second derivative). This requires at least 3 values along each axis.

pad_const : float, optional

For pad_mode == 'constant', f assumes pad_const for indices outside the domain of f.
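
The padding mode only changes how the differences at the outermost cells are formed. A small illustration (ad hoc names; 2 x 2 cells of size 1, forward differences):

>>> import odl
>>> from odl.discr.diff_ops import Divergence
>>> space = odl.uniform_discr([0, 0], [2, 2], (2, 2))
>>> dom = odl.ProductSpace(space, 2)
>>> f = dom.element([[[1., 2.], [3., 4.]],
...                  [[1., 2.], [3., 4.]]])
>>> div_zero = Divergence(dom, pad_mode='constant', pad_const=0)
>>> div_flat = Divergence(dom, pad_mode='order0')
>>> # Bottom-right cell: zero padding gives (0 - 4) + (0 - 4) = -8,
>>> # while 'order0' padding gives (4 - 4) + (4 - 4) = 0.
>>> float(div_zero(f).asarray()[-1, -1]), float(div_flat(f).asarray()[-1, -1])
(-8.0, 0.0)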

Examples

Initialize a Divergence operator:

>>> import numpy as np
>>> import odl
>>> from odl.discr.diff_ops import Divergence
>>> ran = odl.uniform_discr([0, 0], [3, 5], (3, 5))
>>> dom = odl.ProductSpace(ran, ran.ndim)  # 2-dimensional
>>> div = Divergence(dom)
>>> div.range == ran
True
>>> div2 = Divergence(range=ran)
>>> div2.domain == dom
True
>>> div3 = Divergence(domain=dom, range=ran)
>>> div3.domain == dom
True
>>> div3.range == ran
True

Call the operator:

>>> data = np.array([[0., 1., 2., 3., 4.],
...                  [1., 2., 3., 4., 5.],
...                  [2., 3., 4., 5., 6.]])
>>> f = div.domain.element([data, data])
>>> div_f = div(f)
>>> print(div_f)
[[  2.,   2.,   2.,   2.,  -3.],
 [  2.,   2.,   2.,   2.,  -4.],
 [ -1.,  -2.,  -3.,  -4., -12.]]

Verify adjoint:

>>> g = div.range.element(data ** 2)
>>> adj_div_g = div.adjoint(g)
>>> g.inner(div_f) / f.inner(adj_div_g)
1.0
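
Since zero padding is assumed, the adjoint agrees with the negative Gradient operator with backward differences. A hedged numerical check, continuing the example above (the tolerance is arbitrary):

>>> grad = odl.Gradient(div.range, method='backward',
...                     pad_mode='constant', pad_const=0)
>>> (div.adjoint(g) + grad(g)).norm() < 1e-10
True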