conjugate_gradient_normal
odl.solvers.iterative.iterative.conjugate_gradient_normal(op, x, rhs, niter=1, callback=None)
Optimized implementation of CG for the normal equation.
This method solves the inverse problem (of the first kind)

A(x) == rhs

with a linear Operator A by looking at the normal equation

A.adjoint(A(x)) == A.adjoint(rhs)
It keeps the number of memory copies to a minimum by using re-usable temporaries and in-place evaluation.
The method is described (for linear systems) in the Wikipedia article on the conjugate gradient method applied to the normal equations.
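To illustrate what solving the normal equation by conjugate gradients amounts to, here is a plain-NumPy sketch of the classical CGNR recursion (CG applied to A.adjoint(A(x)) == A.adjoint(rhs)). It is illustrative only and not the ODL implementation; the function name cg_normal_sketch and the dense-matrix setting are assumptions made for this example.

```python
import numpy as np


def cg_normal_sketch(A, x, b, niter):
    """Illustrative CGNR: CG applied to A.T @ A @ x == A.T @ b.

    Not the ODL implementation; a dense-matrix sketch only.
    """
    s = A.T @ (b - A @ x)        # residual of the normal equation
    d = s.copy()                 # initial search direction
    sqnorm_s_old = s @ s
    for _ in range(niter):
        Ad = A @ d
        denom = Ad @ Ad          # equals d^T (A^T A) d
        if denom == 0.0:         # already converged (up to round-off)
            break
        alpha = sqnorm_s_old / denom
        x += alpha * d           # update the iterate in place
        s -= alpha * (A.T @ Ad)  # update the normal-equation residual
        sqnorm_s_new = s @ s
        d = s + (sqnorm_s_new / sqnorm_s_old) * d
        sqnorm_s_old = sqnorm_s_new
    return x
```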
- Parameters:
  - op : Operator
    Operator in the inverse problem. If not linear, it must have an implementation of Operator.derivative, which in turn must implement Operator.adjoint, i.e. the call op.derivative(x).adjoint must be valid.
  - x : op.domain element
    Element to which the result is written. Its initial value is used as the starting point of the iteration, and its values are updated in each iteration step.
  - rhs : op.range element
    Right-hand side of the equation defining the inverse problem.
  - niter : int
    Number of iterations.
  - callback : callable, optional
    Object executing code per iteration, e.g. plotting each iterate.
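A minimal usage sketch on a small linear system follows. The concrete matrix and right-hand side are made up for illustration, and the example assumes odl.MatrixOperator and odl.solvers.CallbackPrintIteration are available in the installed ODL version; the solver overwrites x in place.

```python
import numpy as np
import odl

# Small invertible system A x = rhs with known solution x = [1, 1, 1]
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
op = odl.MatrixOperator(A)            # linear Operator, provides an adjoint

rhs = op.range.element([3.0, 2.0, 2.0])
x = op.domain.zero()                  # starting point; updated in place

odl.solvers.conjugate_gradient_normal(
    op, x, rhs, niter=10,
    callback=odl.solvers.CallbackPrintIteration())

print(x)  # expected to be close to [1.0, 1.0, 1.0]
```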
See also
- conjugate_gradient : Optimized solver for symmetric matrices
- odl.solvers.smooth.nonlinear_cg.conjugate_gradient_nonlinear : Equivalent solver for the nonlinear case