kaczmarz

odl.solvers.iterative.iterative.kaczmarz(ops, x, rhs, niter, omega=1, projection=None, random=False, callback=None, callback_loop='outer')[source]

Optimized implementation of Kaczmarz’s method.

Solves the inverse problem given by the set of equations:

A_n(x) = rhs_n

This is also known as the Landweber-Kaczmarz method, since it coincides with the Landweber method when there is only a single operator.
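For example, a minimal usage sketch on a small toy problem (the operators and data below are made up for illustration and are not part of the API):

import odl

space = odl.rn(3)
ops = [odl.ScalingOperator(space, 2.0),  # A_1 = 2*I
       odl.IdentityOperator(space)]      # A_2 = I
x_true = space.element([1.0, 2.0, 3.0])
rhs = [op(x_true) for op in ops]         # consistent right-hand sides

x = space.zero()                         # starting point, updated in place
odl.solvers.kaczmarz(ops, x, rhs, niter=20, omega=0.4)
# x is now approximately equal to x_true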

Parameters
ops : sequence of Operator's

Operators in the inverse problem. ops[i].derivative(x).adjoint must be well-defined for x in the operator domain and for all i.

x : ops[i].domain element

Element to which the result is written. Its initial value is used as starting point of the iteration, and its values are updated in each iteration step.

rhs : sequence of ops[i].range elements

Right-hand side of the equation defining the inverse problem.

niter : int

Number of iterations.

omega : positive float or sequence of positive floats, optional

Relaxation parameter in the iteration. If a single float is given, the same parameter is used for all operators; otherwise, one parameter per operator is used.

projection : callable, optional

Function that can be used to modify the iterates in each iteration, for example enforcing positivity. The function should take one argument and modify it in place; see the sketch after this parameter list.

random : bool, optional

If True, the order of the operators is randomized in each iteration.

callback : callable, optional

Object executing code per iteration, e.g. plotting each iterate.

callback_loop : {'inner', 'outer'}, optional

Whether the callback should be called in the inner or outer loop.
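The optional arguments can be combined as in the following sketch, which continues the toy problem above (the in-place ufunc call assumes a tensor-space element; CallbackPrintIteration is part of odl.solvers):

def positivity(x):
    # Modify the iterate in place, as required of `projection`.
    x.ufuncs.maximum(0, out=x)

odl.solvers.kaczmarz(
    ops, x, rhs, niter=20,
    omega=[0.4, 1.0],              # one relaxation parameter per operator
    projection=positivity,
    random=True,                   # randomize the sweep order
    callback=odl.solvers.CallbackPrintIteration(),
    callback_loop='outer')         # invoke the callback once per sweep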

See also

landweber

Notes

This method calculates an approximate least-squares solution of the inverse problem of the first kind

\mathcal{A}_i (x) = y_i \quad 1 \leq i \leq n,

for given y_i \in \mathcal{Y}_i, i.e. an approximate solution x^* to

\min_{x\in \mathcal{X}}
\sum_{i=1}^n \| \mathcal{A}_i(x) - y_i \|_{\mathcal{Y}_i}^2

for (Fréchet-) differentiable operators \mathcal{A}_i: \mathcal{X} \to \mathcal{Y}_i between Hilbert spaces \mathcal{X} and \mathcal{Y}_i. The method starts from an initial guess x_0 and uses the iteration

x_{k+1} = x_k - \omega_{[k]} \ \partial \mathcal{A}_{[k]}(x_k)^*
                         (\mathcal{A}_{[k]}(x_k) - y_{[k]}),

where \partial \mathcal{A}_{[k]}(x_k) is the Fréchet derivative of \mathcal{A}_{[k]} at x_k, \omega_{[k]} is a relaxation parameter, and [k] := k \text{ mod } n.
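For linear operators on \mathbb{R}^n this update is plain matrix algebra. The following self-contained NumPy sketch (independent of ODL, with made-up data) spells out the iteration:

import numpy as np

rng = np.random.default_rng(0)
A = [rng.standard_normal((4, 3)) for _ in range(2)]  # blocks A_1, A_2
x_true = rng.standard_normal(3)
y = [A_i @ x_true for A_i in A]                      # consistent data

x = np.zeros(3)
n = len(A)
for k in range(200):
    i = k % n                                        # [k] = k mod n
    omega = 1.0 / np.linalg.norm(A[i], 2) ** 2       # inside (0, 2/||A_i||^2)
    x -= omega * A[i].T @ (A[i] @ x - y[i])          # Kaczmarz update

print(np.linalg.norm(x - x_true))                    # small after 100 sweeps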

For linear problems, the choice 0 < \omega_i < 2/\lVert \mathcal{A}_{i}\rVert^2 guarantees convergence, where \lVert \mathcal{A}_{i}\rVert stands for the operator norm of \mathcal{A}_{i}.
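The operator norms in this bound are rarely known in closed form; they can be estimated with odl.power_method_opnorm. A short sketch, reusing the toy operators above:

# Estimate ||A_i|| and choose omega_i = 1/||A_i||^2, safely inside the bound.
norms = [odl.power_method_opnorm(op) for op in ops]
omega = [1.0 / nrm ** 2 for nrm in norms]
odl.solvers.kaczmarz(ops, x, rhs, niter=20, omega=omega)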

This implementation keeps memory copies to a minimum by using re-usable temporaries and in-place evaluation.

The method is also described in the Wikipedia article on the Kaczmarz method (https://en.wikipedia.org/wiki/Kaczmarz_method) and in Natterer, F., Mathematical Methods in Image Reconstruction, Section 5.3.2.