conjugate_gradient_normal

odl.solvers.iterative.iterative.conjugate_gradient_normal(op, x, rhs, niter=1, callback=None)[source]

Optimized implementation of CG for the normal equation.

This method solves the inverse problem (of the first kind)

A(x) == rhs

with a linear Operator A by instead applying CG to the normal equation

A.adjoint(A(x)) == A.adjoint(rhs)

whose operator A.adjoint * A is self-adjoint and positive semi-definite, so the standard conjugate gradient method is applicable.

It minimizes memory copies by re-using temporaries and evaluating operators in place.

The method (for linear systems) is described in the Wikipedia article on the conjugate gradient method, in the section on the normal equations.

Parameters:
op : Operator

Operator in the inverse problem. If not linear, it must have an implementation of Operator.derivative, which in turn must implement Operator.adjoint, i.e. the call op.derivative(x).adjoint must be valid.

x : op.domain element

Element to which the result is written. Its initial value is used as starting point of the iteration, and its values are updated in each iteration step.

rhs : op.range element

Right-hand side of the equation defining the inverse problem.

niter : int

Number of iterations.

callback : callable, optional

Object executing code per iteration, e.g. plotting each iterate.

See also

conjugate_gradient

Optimized solver for self-adjoint operators

odl.solvers.smooth.nonlinear_cg.conjugate_gradient_nonlinear

Equivalent solver for the nonlinear case
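To illustrate the iteration, here is a minimal NumPy sketch of CG applied to the normal equations A.T A x = A.T rhs (the CGLS/CGNR scheme for dense matrices). This is an assumption-laden illustration, not the ODL implementation, which works on general Operator objects with in-place arithmetic; the helper name cg_normal is hypothetical.

```python
import numpy as np

def cg_normal(A, x, rhs, niter):
    """Sketch of CG on the normal equations A.T @ A @ x = A.T @ rhs.

    Illustrative only: ODL's conjugate_gradient_normal operates on
    Operator objects and updates x in place with re-used temporaries.
    """
    d = rhs - A @ x        # residual in the range of A
    p = A.T @ d            # residual of the normal equation
    s = p.copy()           # search direction
    sqnorm_p = p @ p
    for _ in range(niter):
        q = A @ s
        denom = q @ q
        if denom == 0:     # exact convergence (or A @ s == 0)
            break
        alpha = sqnorm_p / denom
        x = x + alpha * s          # update iterate
        d = d - alpha * q          # update range residual
        p = A.T @ d                # new normal-equation residual
        sqnorm_p_new = p @ p
        beta = sqnorm_p_new / sqnorm_p
        sqnorm_p = sqnorm_p_new
        s = p + beta * s           # new conjugate direction
    return x
```

For an exactly determined or overdetermined system, the iterate converges toward the least-squares solution, i.e. the solution of the normal equation.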