adupdates

odl.solvers.nonsmooth.alternating_dual_updates.adupdates(x, g, L, stepsize, inner_stepsizes, niter, random=False, callback=None, callback_loop='outer')

Alternating Dual updates method.

The Alternating Dual (AD) updates method of McGaffin and Fessler [MF2015] is designed to solve an optimization problem of the form

min_x [ sum_i g_i(L_i x) ]

where g_i are proper, convex and lower semicontinuous functions and L_i are linear Operator's.

Parameters:

g : sequence of Functional's

All functionals need to provide a Functional.convex_conj with a Functional.proximal factory.

L : sequence of Operator's

The length of L must equal the length of g.

x : LinearSpaceElement

Initial point, updated in-place.

stepsize : positive float

The stepsize for the outer (proximal point) iteration. The theory guarantees convergence for any positive real number, but the performance may depend on a good choice of stepsize.

inner_stepsizes : sequence of stepsizes

Parameters determining the stepsizes for the inner iterations. They must be matched with the norms of L, and convergence is guaranteed if the inner_stepsizes are small enough. See the Notes section for details.

niter : int

Number of (outer) iterations.

random : bool, optional

If True, the order of the dual updates is chosen randomly; otherwise the order given by the sequences g, L and inner_stepsizes is used.

callback : callable, optional

Function called with the current iterate after each iteration.

callback_loop : {'inner', 'outer'}, optional

If 'inner', the callback is called after each inner iteration, i.e., after each dual update. If 'outer', it is called after each outer iteration, i.e., after each primal update.

Notes

The algorithm as implemented here is described in the article [MF2015], where it is applied to a tomography problem. It solves the problem

\min_x \sum_{i=1}^m g_i(L_i x),

where g_i are proper, convex and lower semicontinuous functions and L_i are linear, continuous operators for i = 1, \ldots, m. The solution is approached by the outer (proximal point) iteration

x_{n+1} = \mathrm{arg\,min}_x \sum_{i=1}^m g_i(L_i x)
    + \frac{\mu}{2} \|x - x_n\|^2

with some stepsize parameter \mu > 0 according to the proximal point algorithm. In the inner iteration, dual variables are introduced for each of the components of the sum. The Lagrangian of the problem is given by

S_n(x; v_1, \ldots, v_m) = \sum_{i=1}^m (\langle v_i, L_i x \rangle
    - g_i^*(v_i)) + \frac{\mu}{2} \|x - x_n\|^2.

Given the dual variables, the new primal variable x_{n+1} can be calculated by directly minimizing S_n with respect to x. This corresponds to the formula

x_{n+1} = x_n - \frac{1}{\mu} \sum_{i=1}^m L_i^* v_i.
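That this formula minimizes S_n over x can be checked numerically, since the gradient \sum_{i=1}^m L_i^* v_i + \mu (x_{n+1} - x_n) must vanish at the minimizer. A short NumPy sketch (the matrices and dual variables below are arbitrary illustrations, not part of the ODL API):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0                                              # outer stepsize mu > 0
Ls = [rng.standard_normal((3, 4)) for _ in range(2)]  # linear operators L_i
vs = [rng.standard_normal(3) for _ in range(2)]       # dual variables v_i
x_n = rng.standard_normal(4)

# Primal update: x_{n+1} = x_n - (1/mu) * sum_i L_i^* v_i
x_next = x_n - sum(L.T @ v for L, v in zip(Ls, vs)) / mu

# Gradient of S_n w.r.t. x, evaluated at x_{n+1}; it must vanish:
# sum_i L_i^* v_i + mu * (x_{n+1} - x_n) = 0
grad = sum(L.T @ v for L, v in zip(Ls, vs)) + mu * (x_next - x_n)
print(np.allclose(grad, 0))  # True
```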

The dual updates are executed according to the following rule:

v_i^+ = \mathrm{Prox}^{\mu M_i^{-1}}_{g_i^*}
    (v_i + \mu M_i^{-1} L_i x_{n+1}),

where x_{n+1} is given by the formula above and M_i is a diagonal matrix with positive diagonal entries such that M_i - L_i L_i^* is positive semidefinite. The entries of inner_stepsizes, multiplied by stepsize, are used as stepsizes for the Functional.proximal factories of the Functional.convex_conj of the g's. The inner_stepsizes represent the elements of M_i^{-1} in one of the following ways:

  • Setting inner_stepsizes[i] to a positive float \gamma corresponds to the choice M_i^{-1} = \gamma \mathrm{Id}.

  • Assume that g_i is a SeparableSum; then setting inner_stepsizes[i] to a list (\gamma_1, \ldots, \gamma_k) of positive floats corresponds to the choice of a block-diagonal matrix M_i^{-1}, where the j-th block corresponds to the j-th space component and equals \gamma_j \mathrm{Id}.

  • Assume that g_i is an L1Norm or an L2NormSquared; then setting inner_stepsizes[i] to a g_i.domain.element z corresponds to the choice M_i^{-1} = \mathrm{diag}(z).
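The full iteration can be illustrated with a self-contained NumPy sketch. This is not the ODL implementation: it hedges to the special case g_i(y) = \frac{1}{2}\|y - b_i\|^2, whose convex conjugate has the closed-form proximal \mathrm{Prox}^{\tau}_{g_i^*}(w) = (w - \tau b_i)/(1 + \tau), and to scalar inner stepsizes (the first bullet above). The function name and its arguments are illustrative only:

```python
import numpy as np

def adupdates_quadratic(x, Ls, bs, mu, gammas, niter):
    """Toy AD updates for min_x sum_i 0.5 * ||L_i x - b_i||^2.

    Scalar inner stepsizes gammas[i] model M_i^{-1} = gamma_i * Id;
    convergence needs 1/gamma_i >= ||L_i||^2, so that M_i - L_i L_i^T
    is positive semidefinite.
    """
    duals = [np.zeros(L.shape[0]) for L in Ls]
    for _ in range(niter):
        # Outer (primal) update: x_{n+1} = x_n - (1/mu) sum_i L_i^T v_i.
        x = x - sum(L.T @ v for L, v in zip(Ls, duals)) / mu
        # Inner (dual) updates, each using the current primal iterate.
        for i, (L, b, gamma) in enumerate(zip(Ls, bs, gammas)):
            tau = mu * gamma
            w = duals[i] + tau * (L @ x)
            # Closed-form prox of the conjugate of 0.5 * ||. - b||^2.
            v_new = (w - tau * b) / (1.0 + tau)
            # Keep x equal to the minimizer of S_n for the updated duals.
            x = x - L.T @ (v_new - duals[i]) / mu
            duals[i] = v_new
    return x

# Example: two identity operators; the minimizer is the average of b_1, b_2.
# gamma = 0.5 satisfies 1/gamma >= ||I||^2 = 1.
I2 = np.eye(2)
x = adupdates_quadratic(np.zeros(2), [I2, I2],
                        [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                        mu=1.0, gammas=[0.5, 0.5], niter=100)
print(np.round(x, 4))  # close to [0.5, 0.5]
```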

References

[MF2015] McGaffin, M G, and Fessler, J A. Alternating dual updates algorithm for X-ray CT reconstruction on the GPU. IEEE Transactions on Computational Imaging, 1.3 (2015), pp. 186--199.