Gradient
- class odl.discr.diff_ops.Gradient(*args, **kwargs)

Bases: `PointwiseTensorFieldOperator`

Spatial gradient operator for `DiscretizedSpace` spaces.

Calls the helper function `finite_diff` to calculate each component of the resulting product space element. For the adjoint of the `Gradient` operator, zero padding is assumed to match the negative `Divergence` operator.

Attributes:
- `adjoint`: Adjoint of this operator.
- `base_space`: Base space `X` of this operator's domain and range.
- `domain`: Set of objects on which this operator can be evaluated.
- `inverse`: Return the operator inverse.
- `is_functional`: `True` if this operator's range is a `Field`.
- `is_linear`: `True` if this operator is linear.
- `range`: Set in which the result of an evaluation of this operator lies.
Methods

- `__call__(x[, out, **kwargs])`: Return `self(x[, out, **kwargs])`.
- `derivative([point])`: Return the derivative operator.
- `norm([estimate])`: Return the operator norm of this operator.
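The component-wise action described above can be sketched with plain NumPy. The helper `forward_diff` below is hypothetical, written for illustration only (it mimics `method='forward'` with `pad_mode='constant'` and unit grid steps; it is not ODL's actual `finite_diff`):

```python
import numpy as np

def forward_diff(f, axis, dx=1.0, pad_const=0.0):
    """Forward difference along ``axis`` with constant padding.

    Hypothetical sketch of method='forward', pad_mode='constant';
    not ODL's ``finite_diff``.
    """
    # Append one padded slice past the last index, then difference.
    pad_slice = np.full_like(np.take(f, [0], axis=axis), pad_const)
    padded = np.concatenate([f, pad_slice], axis=axis)
    return np.diff(padded, axis=axis) / dx

data = np.array([[0., 1., 2., 3., 4.],
                 [0., 2., 4., 6., 8.]])
print(forward_diff(data, axis=0))
# [[ 0.  1.  2.  3.  4.]
#  [ 0. -2. -4. -6. -8.]]
print(forward_diff(data, axis=1))
# [[ 1.  1.  1.  1. -4.]
#  [ 2.  2.  2.  2. -8.]]
```

Each axis yields one component of the product space element; note how the zero padding produces the negated boundary values in the last row/column.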
- `__init__(domain=None, range=None, method='forward', pad_mode='constant', pad_const=0)`
Initialize a new instance.
Zero padding is assumed for the adjoint of the `Gradient` operator to match the negative `Divergence` operator.

Parameters:

- `domain` (`DiscretizedSpace`, optional): Space of elements which the operator acts on. This is required if `range` is not given.
- `range` (power space of `DiscretizedSpace`, optional): Space of elements to which the operator maps. This is required if `domain` is not given.
- `method` ({'forward', 'backward', 'central'}, optional): Finite difference method to be used.
- `pad_mode` (string, optional): The padding mode to use outside the domain.
  - `'constant'`: Fill with `pad_const`.
  - `'symmetric'`: Reflect at the boundaries, not doubling the outermost values.
  - `'periodic'`: Fill in values from the other side, keeping the order.
  - `'order0'`: Extend constantly with the outermost values (ensures continuity).
  - `'order1'`: Extend with constant slope (ensures continuity of the first derivative). This requires at least 2 values along each axis where padding is applied.
  - `'order2'`: Extend with second order accuracy (ensures continuity of the second derivative). This requires at least 3 values along each axis.
- `pad_const` (float, optional): For `pad_mode == 'constant'`, `f` assumes the value `pad_const` for indices outside the domain of `f`.
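The first four padding modes have rough `np.pad` analogues, shown below for illustration only (ODL applies the padding implicitly inside `finite_diff` rather than materializing a padded array):

```python
import numpy as np

f = np.array([1., 2., 3., 4.])

# Rough np.pad analogues of the padding modes (illustration only):
print(np.pad(f, 1, mode='constant', constant_values=0))  # 'constant' -> [0. 1. 2. 3. 4. 0.]
print(np.pad(f, 1, mode='reflect'))  # 'symmetric'       -> [2. 1. 2. 3. 4. 3.]
print(np.pad(f, 1, mode='wrap'))     # 'periodic'        -> [4. 1. 2. 3. 4. 1.]
print(np.pad(f, 1, mode='edge'))     # 'order0'          -> [1. 1. 2. 3. 4. 4.]
```

`'order1'` and `'order2'` have no direct `np.pad` mode; they extrapolate linearly and quadratically from the boundary values.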
Examples
Creating a Gradient operator:
```python
>>> dom = odl.uniform_discr([0, 0], [1, 1], (10, 20))
>>> ran = odl.ProductSpace(dom, dom.ndim)  # 2-dimensional
>>> grad_op = Gradient(dom)
>>> grad_op.range == ran
True
>>> grad_op2 = Gradient(range=ran)
>>> grad_op2.domain == dom
True
>>> grad_op3 = Gradient(domain=dom, range=ran)
>>> grad_op3.domain == dom
True
>>> grad_op3.range == ran
True
```
Calling the operator:
```python
>>> data = np.array([[ 0., 1., 2., 3., 4.],
...                  [ 0., 2., 4., 6., 8.]])
>>> discr = odl.uniform_discr([0, 0], [2, 5], data.shape)
>>> f = discr.element(data)
>>> grad = Gradient(discr)
>>> grad_f = grad(f)
>>> grad_f[0]
uniform_discr([ 0., 0.], [ 2., 5.], (2, 5)).element(
    [[ 0., 1., 2., 3., 4.],
     [ 0., -2., -4., -6., -8.]]
)
>>> grad_f[1]
uniform_discr([ 0., 0.], [ 2., 5.], (2, 5)).element(
    [[ 1., 1., 1., 1., -4.],
     [ 2., 2., 2., 2., -8.]]
)
```
Verify adjoint:
```python
>>> g = grad.range.element((data, data ** 2))
>>> adj_g = grad.adjoint(g)
>>> adj_g
uniform_discr([ 0., 0.], [ 2., 5.], (2, 5)).element(
    [[ 0., -2., -5., -8., -11.],
     [ 0., -5., -14., -23., -32.]]
)
>>> g.inner(grad_f) / f.inner(adj_g)
1.0
```
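The same adjoint identity can be reproduced with plain NumPy for unit grid steps: the adjoint of the zero-padded forward-difference gradient is the negative backward-difference divergence. The helpers `fwd_diff` and `bwd_diff` below are hypothetical, written for this sketch only (not ODL API):

```python
import numpy as np

def fwd_diff(u, axis):
    # Forward difference, zero padding past the last index.
    pad = np.zeros_like(np.take(u, [0], axis=axis))
    return np.diff(np.concatenate([u, pad], axis=axis), axis=axis)

def bwd_diff(u, axis):
    # Backward difference, zero padding before the first index.
    pad = np.zeros_like(np.take(u, [0], axis=axis))
    return np.diff(np.concatenate([pad, u], axis=axis), axis=axis)

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 5))
g0, g1 = rng.standard_normal((2, 4, 5))

grad_f = np.stack([fwd_diff(f, 0), fwd_diff(f, 1)])
adj_g = -(bwd_diff(g0, 0) + bwd_diff(g1, 1))  # adjoint = -divergence

lhs = np.sum(grad_f * np.stack([g0, g1]))  # <grad f, g>
rhs = np.sum(f * adj_g)                    # <f, grad^* g>
print(np.isclose(lhs, rhs))  # True
```

This is the discrete analogue of the integration-by-parts identity that makes the gradient's adjoint the negative divergence under zero boundary padding.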