Publications
A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity
Y. Malitsky and M.K. Tam
Abstract: In this work, we propose a simple modification of the forward-backward splitting method for finding a zero in the sum of two monotone operators. Our method converges under the same assumptions as Tseng's forward-backward-forward method, namely, it does not require cocoercivity of the single-valued operator. Moreover, each iteration only requires one forward evaluation rather than two as is the case for Tseng's method. Variants of the method incorporating a linesearch, relaxation and inertia, or a structured three-operator inclusion are also discussed. [arXiv]

Shadow Douglas–Rachford Splitting for Monotone Inclusions
E.R. Csetnek, Y. Malitsky and M.K. Tam, Applied Mathematics and Optimization (to appear).
Abstract: In this work, we propose a new algorithm for finding a zero of the sum of two monotone operators where one is assumed to be single-valued and Lipschitz continuous. This algorithm naturally arises from a non-standard discretization of a continuous dynamical system associated with the Douglas–Rachford splitting algorithm. More precisely, it is obtained by performing an explicit, rather than implicit, discretization with respect to one of the operators involved. Each iteration of the proposed algorithm requires the evaluation of one forward and one backward operator. [arXiv] [Link]

Golden ratio algorithms for variational inequalities
Y. Malitsky, Mathematical Programming (to appear).
Abstract: The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed using two previous iterates as an approximation of the local Lipschitz constant without running a linesearch. Thus, each iteration of the method requires only one evaluation of a monotone operator F and a proximal mapping g.
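A minimal fixed-stepsize sketch of the golden ratio update just described may help fix ideas (the paper's contribution is the adaptive, linesearch-free stepsize; the operator, stepsize, and toy problem below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # the golden ratio

def graal(F, prox_g, x0, lam, iters=1000):
    """Fixed-stepsize golden ratio algorithm (sketch) for the variational
    inequality with operator F and proximal term g.  Each iteration uses
    exactly one evaluation of F and one prox of g."""
    x, x_bar = x0.astype(float), x0.astype(float)
    for _ in range(iters):
        x_bar = ((phi - 1.0) * x + x_bar) / phi   # convex combination of past iterates
        x = prox_g(x_bar - lam * F(x))            # one forward step, one prox step
    return x

# Illustrative monotone (non-symmetric) example: F(x) = Ax, g = 0, solution x* = 0.
A = np.array([[0.1, 1.0], [-1.0, 0.1]])
x_star = graal(lambda x: A @ x, lambda y: y, np.array([1.0, -2.0]), lam=0.3)
```

Here the stepsize lam is chosen by hand below phi/(2L) for the Lipschitz constant L of F; the adaptive scheme of the paper removes exactly this requirement.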
The operator F need not be Lipschitz-continuous, which also makes the algorithm interesting in the area of composite minimization where one cannot use the descent lemma. The method exhibits an ergodic O(1/k) convergence rate and an R-linear rate if F, g satisfy the error bound condition. We discuss possible applications of the method to fixed point problems as well as its different generalizations. [arXiv] [Link]

Convergence Analysis of the Relaxed Douglas-Rachford Algorithm
D. R. Luke and A.-L. Martins
Abstract: Motivated by nonconvex, inconsistent feasibility problems in imaging, the relaxed alternating averaged reflections algorithm, or relaxed Douglas-Rachford algorithm (DRλ), was first proposed over a decade ago. Convergence results for this algorithm are limited either to convex feasibility or to consistent nonconvex feasibility with strong assumptions on the regularity of the underlying sets. Using an analytical framework depending only on metric subregularity and pointwise almost averagedness, we analyze the convergence behavior of DRλ for feasibility problems that are both nonconvex and inconsistent. We introduce a new type of regularity of sets, called super-regular at a distance, to establish sufficient conditions for local linear convergence of the corresponding sequence. These results subsume and extend existing results for this algorithm. [arXiv] [Sage worksheets]

Optimization on Spheres: Models and Proximal Algorithms with Computational Performance Comparisons
D. R. Luke, S. Sabach and M. Teboulle, SIAM J. on Mathematical Data Science (to appear).
Abstract: We present a unified treatment of the abstract problem of finding the best approximation between a cone and spheres in the image of affine transformations. Prominent instances of this problem are phase retrieval and source localization.
The common geometry binding these problems permits a generic application of algorithmic ideas and abstract convergence results for nonconvex optimization. We organize variational models for this problem into three different classes and derive the main algorithmic approaches within these classes (13 in all). We identify the central ideas underlying these methods and provide thorough numerical benchmarks comparing their performance on synthetic and laboratory data. The software and data of our experiments are all publicly accessible. We also introduce one new algorithm, a cyclic relaxed Douglas-Rachford algorithm, which outperforms all other algorithms by every measure: speed, stability and accuracy. The analysis of this algorithm remains open. [preprint]

Tangent and Normal Cones for Low-Rank Matrices
S. Hosseini, D. R. Luke and A. Uschmajew, in Nonsmooth Optimization and Its Applications, S. Hosseini, B. S. Mordukhovich and A. Uschmajew (Eds.), Birkhäuser (2019).
Abstract: In [D. R. Luke, J. Math. Imaging Vision, 47(3):231-238, 2013] the structure of the Mordukhovich normal cone to varieties of low-rank matrices at rank-deficient points has been determined. A simplified proof of that result is presented here. As a corollary we obtain the corresponding Clarke normal cone. The results are put into the context of first-order optimality conditions for low-rank matrix optimization problems. [preprint] [book chapter]

Random Function Iterations for Consistent Stochastic Feasibility
Neal Hermer, D. Russell Luke and Anja Sturm, Numerical Functional Analysis and Optimization (accepted 2018).
Abstract: We study the convergence of iterated random functions for stochastic feasibility in the consistent case (in the sense of Butnariu and Flåm [ButnariuFlam1995]) in several different settings, under decreasingly restrictive regularity assumptions on the fixed point mappings. The iterations are Markov chains and, for the purposes of this study, convergence is understood in very restrictive terms. We show that sufficient conditions for geometric (linear) convergence in expectation of stochastic projection algorithms presented in Nedić [Nedic2011] are in fact necessary for geometric (linear) convergence in expectation, more generally, of iterated random functions. [arXiv] [article]

Necessary conditions for linear convergence of iterated expansive, set-valued mappings
D. Russell Luke, Marc Teboulle and Nguyen H. Thao, Mathematical Programming A (accepted 2018).
Abstract: We present necessary conditions for monotonicity of fixed point iterations of mappings that may violate the usual nonexpansive property. Notions of linear-type monotonicity of fixed point sequences – weaker than Fejér monotonicity – are shown to imply metric subregularity. This, together with the almost averaging property recently introduced by Luke, Tam and Thao [25], guarantees linear convergence of the sequence to a fixed point. We specialize these results to the alternating projections iteration, where the metric subregularity property takes on a distinct geometric characterization of sets at points of intersection called subtransversality. Subtransversality is shown to be necessary for linear convergence of alternating projections for consistent feasibility. [arXiv] [article]

Characterizations of Super-regularity and its Variants
Aris Daniilidis, D. Russell Luke and Matthew K. Tam, in Splitting Algorithms, Modern Operator Theory, and Applications, Heinz H. Bauschke, Regina Burachik and D. Russell Luke (Eds.), Springer, 2019.
Abstract: Convergence of projection-based methods for nonconvex set feasibility problems has been established for sets with ever weaker regularity assumptions. What has not kept pace with these developments is analogous results for convergence of optimization problems with correspondingly weak assumptions on the value functions. Indeed, one of the earliest classes of nonconvex sets for which convergence results were obtainable, the class of so-called super-regular sets introduced by Lewis, Luke and Malick (2009), has no functional counterpart. In this work, we amend this gap in the theory by establishing the equivalence between a property slightly stronger than super-regularity, which we call Clarke super-regularity, and subsmoothness of sets as introduced by Aussel, Daniilidis and Thibault (2004). The bridge to functions shows that the approximately convex functions studied by Ngai, Luc and Théra (2000) are those which have Clarke super-regular epigraphs. Further classes of regularity of functions based on the corresponding regularity of their epigraphs are also discussed. [arXiv]

Quantitative convergence analysis of iterated expansive, set-valued mappings
D. R. Luke, M.K. Tam and N.H. Thao, Mathematics of Operations Research 43(4):1143--1176 (2018).
Abstract: We develop a framework for quantitative convergence analysis of Picard iterations of expansive set-valued fixed point mappings. There are two key components of the analysis. The first is a natural generalization of single-valued averaged mappings to expansive, set-valued mappings that characterizes a type of strong calmness of the fixed point mapping. The second component of this analysis is an extension of the well-established notion of metric subregularity – or inverse calmness – of the mapping at fixed points. Convergence of expansive fixed point iterations is proved using these two properties, and quantitative estimates are a natural byproduct of the framework.
To demonstrate the application of the theory, we prove for the first time a number of results showing local linear convergence of nonconvex cyclic projections for inconsistent (and consistent) feasibility problems, local linear convergence of the forward-backward algorithm for structured optimization without convexity, strong or otherwise, and local linear convergence of the Douglas–Rachford algorithm for structured nonconvex minimization. This theory includes earlier approaches for known results, convex and nonconvex, as special cases. [article]

Relaxed Cyclic Douglas-Rachford Algorithms for Nonconvex Optimization
D. R. Luke, A. Martins and M.K. Tam, ICML 2018 Workshop: Modern Trends in Nonconvex Optimization for Machine Learning (July 2018).
Abstract: We present preliminary analysis and numerical results for cyclic compositions of the relaxed Douglas-Rachford algorithm for nonconvex feasibility problems. The cyclic variants are the most robust against local minima, the fastest and the most accurate compared to other leading algorithms for three common classes of nonconvex optimization investigated here. [icml]

Algorithms Based on Unions of Nonexpansive Maps
M.K. Tam, Optimization Letters (2018).
Abstract: In this note, we consider a framework for the analysis of iterative algorithms which can be described in terms of a structured set-valued operator. More precisely, at each point in the ambient space, we assume that the value of the operator can be expressed as a finite union of values of single-valued paracontracting operators. Our main result, which shows that the associated fixed point iteration is locally convergent around strong fixed points, generalises a theorem due to Bauschke and Noll (2014). [article] [arXiv]

Block-coordinate primal-dual method for nonsmooth minimization over linear constraints
D. R. Luke and Y. Malitsky, in Distributed and Large-Scale Optimization, edited by P. Giselsson and A. Rantzer, Springer Verlag, 2018.
Abstract: We consider the problem of minimizing a convex, separable, nonsmooth function subject to linear constraints. The numerical method we propose is a block-coordinate extension of the Chambolle-Pock primal-dual algorithm. We prove convergence of the method without resorting to assumptions such as smoothness or strong convexity of the objective, a full-rank condition on the matrix, strong duality, or even consistency of the linear system. Freedom from imposing the latter assumption permits convergence guarantees for misspecified or noisy systems. [book] [arXiv]

Implicit Error Bounds for Picard Iterations on Hilbert Spaces
D. R. Luke, M.K. Tam and N.H. Thao, Vietnam J. Mathematics 46(2), 243-258 (2018). DOI: 10.1007/s10013-018-0279-x.
Abstract: We investigate the role of error bounds, or metric subregularity, in the convergence of Picard iterations of nonexpansive maps in Hilbert spaces. Our main results show, on one hand, that the existence of an error bound is sufficient for strong convergence and, on the other hand, that an error bound exists on bounded sets for nonexpansive mappings possessing a fixed point whenever the space is finite dimensional. In a Hilbert space setting, we show that a monotonicity property of the distances of the Picard iterations is all that is needed to guarantee the existence of an error bound. The same monotonicity assumption turns out also to guarantee that the distance of Picard iterates to the fixed point set converges to zero. Our results provide a quantitative characterization of strong convergence as well as new criteria for when strong, as opposed to just weak, convergence holds. [article]

A feasibility approach for constructing combinatorial designs of circulant type
F.J. Aragón Artacho, R. Campoy, I. Kotsireas and M. K. Tam, Journal of Combinatorial Optimization, 35(4):1061--1085 (2018).
Abstract: In this work, we propose an optimization approach for constructing various classes of circulant combinatorial designs that can be defined in terms of autocorrelations. The problem is formulated as a so-called feasibility problem having three sets, to which the Douglas-Rachford projection algorithm is applied. The approach is illustrated on three different classes of circulant combinatorial designs: circulant weighing matrices, D-optimal matrices, and Hadamard matrices with two circulant cores. Furthermore, we explicitly construct two new circulant weighing matrices, a CW(126,64) and a CW(198,100), whose existence was previously marked as unresolved in the most recent version of Strassler's table. [article] [arXiv]

A First-Order Primal-Dual Algorithm with Linesearch
Yura Malitsky and Tom Pock, SIAM Journal on Optimization, 28(1):411--432 (2018). DOI: 10.1137/16M1092015. (arXiv:1608.08883)
Abstract: The paper proposes a linesearch for a primal-dual method. Each iteration of the linesearch requires an update of only the dual (or primal) variable. For many problems, in particular for regularized least squares, the linesearch does not require any additional matrix-vector multiplications. We prove convergence of the proposed method under standard assumptions. We also show an ergodic $O(1/N)$ rate of convergence for our method. In the case when one or both of the prox-functions are strongly convex, we modify our basic method to get a better convergence rate. Finally, we propose a linesearch for a saddle-point problem with an additional smooth term. Several numerical experiments confirm the efficiency of our proposed methods. [article] [arXiv]

Symbolic computation with monotone operators
F. Lauster, D. R. Luke and M.K. Tam, Set-Valued and Variational Analysis (2018) 26:353--368. DOI: 10.1007/s11228-017-0418-7. (arXiv:1703.05946)
Abstract: We consider a class of monotone operators which are appropriate for symbolic representation and manipulation within a computer algebra system. Various structural properties of the class (e.g., closure under taking inverses, resolvents) are investigated, as well as the role played by maximal monotonicity within the class. In particular, we show that there is a natural correspondence between our class of monotone operators and the subdifferentials of convex functions belonging to a class of convex functions deemed suitable for symbolic computation of Fenchel conjugates, which were previously studied by Bauschke & von Mohrenschildt and by Borwein & Hamilton. A number of illustrative examples utilizing the introduced class of operators are provided, including computation of proximity operators, recovery of a convex penalty function associated with the hard thresholding operator, and computation of superexpectations, superdistributions and superquantiles with specialization to risk measures. [arXiv] [article] [Code]

Proximal extrapolated gradient methods for variational inequalities
Yura Malitsky, Optimization Methods and Software 33(1):140--164 (2018). DOI: 10.1080/10556788.2017.1300899. (arXiv:1601.04001)
Abstract: The paper concerns novel first-order methods for monotone variational inequalities. They use a very simple linesearch procedure that takes into account local information about the operator. The methods do not require Lipschitz continuity of the operator, and the linesearch procedure uses only values of the operator. Moreover, when the operator is affine our linesearch becomes very simple, namely, it needs only simple vector–vector operations. For all our methods, we establish an ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization.
Preliminary results from numerical experiments are quite promising. [arXiv] [article]

A Globally Linearly Convergent Method for Pointwise Quadratically Supportable Convex-Concave Saddle Point Problems
D. R. Luke and R. Shefi, J. Math. Anal. Appl. 457(2):1568--1590 (2018).
Abstract: We study the Proximal Alternating Predictor-Corrector (PAPC) algorithm introduced recently by Drori, Sabach and Teboulle to solve nonsmooth structured convex-concave saddle point problems consisting of the sum of a smooth convex function, a finite collection of nonsmooth convex functions and bilinear terms. We introduce the notion of pointwise quadratic supportability, which is a relaxation of a standard strong convexity assumption and allows us to show that the primal sequence is R-linearly convergent to an optimal solution and the primal-dual sequence is globally Q-linearly convergent. We illustrate the proposed method on total variation denoising problems and on locally adaptive estimation in signal/image deconvolution and denoising with multiresolution statistical constraints. [arXiv]

Set Regularities and Feasibility Problems
A. Y. Kruger, D. R. Luke and N. H. Thao, Mathematical Programming B (2018) 168:279--311.
Abstract: We synthesize and unify notions of regularity, both of individual sets and of collections of sets, as they appear in the convergence theory of projection methods for consistent feasibility problems. Several new characterizations of regularities are presented which shed light on the relations between seemingly different ideas and point to possible necessary conditions for local linear convergence of fundamental algorithms. [Link] [arXiv]

Tangent and normal cones for low-rank matrices
S. Hosseini, D. R. Luke and A. Uschmajew, Institute for Numerical Simulation Preprint No. 1715, October 2017.
Abstract: In [D. R. Luke, J. Math. Imaging Vision, 47(3):231-238, 2013] the structure of the Mordukhovich normal cone to varieties of low-rank matrices at rank-deficient points has been determined. A simplified proof of that result is presented here. As a corollary we obtain the corresponding Clarke normal cone. The results are put into the context of first-order optimality conditions for low-rank matrix optimization problems. [preprint]

About subtransversality of collections of sets
A. Y. Kruger, D. R. Luke and N.H. Thao, Set-Valued and Variational Analysis, 25(4):701-729 (2017).
Abstract: We provide dual sufficient conditions for subtransversality of collections of sets in an Asplund space setting. For the convex case, we formulate a necessary and sufficient dual criterion of subtransversality in general Banach spaces. Our more general results suggest an intermediate notion of subtransversality, which we call weak intrinsic subtransversality, which lies between intrinsic transversality and subtransversality in Asplund spaces. [article] [arXiv]

A Simple Globally Convergent Algorithm for the Nonsmooth Nonconvex Single Source Localization Problem
D. R. Luke, S. Sabach and M. Teboulle, J. Global Optimization (2017) 69:889--909. https://doi.org/10.1007/s10898-017-0545-6
Abstract: We study the single source localization problem, which consists of minimizing the squared sum of the errors, also known as the maximum likelihood formulation of the problem. The resulting optimization model is not only nonconvex but also nonsmooth. We first derive a novel equivalent reformulation as a smooth constrained nonconvex minimization problem. The resulting reformulation allows for deriving a delightfully simple algorithm that does not rely on smoothing or convex relaxations. The proposed algorithm is proven to generate bounded iterates which globally converge to critical points of the original objective function of the source localization problem. Numerical examples are presented to demonstrate the performance of our algorithm. [article]

Phase Retrieval, What's New?
D. R. Luke, SIAG/OPT Views and News, 25(1):1--5 (2017).
Abstract: Ask an engineer to solve a problem and she will come back in a day or so with something that seems to work well enough most of the time. Ask a mathematician to solve the same problem and he will return many months later with an exact but unimplementable solution to a different problem. I'm sure most readers of this newsletter have heard some variation of that joke. But a true story lies somewhere in there, a story that is writ large with the phase retrieval problem. [Link]

Regularity Properties of Non-Negative Sparsity Sets
M. K. Tam, Journal of Mathematical Analysis and Applications, 447(2):758-777 (2017).
Abstract: This paper investigates regularity properties of two non-negative sparsity sets: non-negative sparse vectors, and low-rank positive semi-definite matrices. Novel formulae for their Mordukhovich normal cones are given and used to formulate sufficient conditions for non-convex notions of regularity to hold. Our results provide a useful tool for justifying the application of projection methods to certain rank constrained feasibility problems. [Link] [arXiv]

Lagrange multipliers, (exact) regularization and error bounds for monotone variational inequalities
C. Charitha, J. Dutta and D. R. Luke, Mathematical Programming A, 161(1):519-549 (2017).
Abstract: We examine two central regularization strategies for monotone variational inequalities, the first a direct regularization of the operative monotone mapping, and the second via regularization of the associated dual gap function. A key link in the relationship between the solution sets to these various regularized problems is the idea of exact regularization, which, in turn, is fundamentally associated with the existence of Lagrange multipliers for the regularized variational inequality. A regularization is said to be exact if a solution to the regularized problem is a solution to the unregularized problem for all parameters beyond a certain value. The Lagrange multipliers corresponding to a particular regularization of a variational inequality, on the other hand, are defined via the dual gap function. Our analysis suggests various conceptual, iteratively regularized numerical schemes, for which we provide error bounds, and hence stopping criteria, under the additional assumption that the solution set to the unregularized problem is what we call weakly sharp of order greater than one. [Link] [arXiv]

Regularity of collections of sets and convergence of inexact alternating projections
A. Y. Kruger and N. H. Thao, J. Convex Anal., 23 (2016), no. 3, 823–847.
Abstract: In this paper the authors compare the uniform regularity property of collections of sets introduced by A. S. Lewis, D. R. Luke and J. Malick in [Found. Comput. Math. 9 (2009), no. 4, 485–513] with the relaxed regularity properties given by H. H. Bauschke, Luke, H. M. Phan and X. Wang (BLPW) in [Set-Valued Var. Anal. 21 (2013), no. 3, 431–473 and 475–501], by D. Drusvyatskiy, A. D. Ioffe and Lewis (DIL) in [Found. Comput. Math. 15 (2015), no. 6, 1637–1651], and by D. Noll and A. Rondepierre (NR) in [Found. Comput. Math. 16 (2016), no. 2, 425–455]. Equivalent characterizations of these regularity properties in terms of metric, dual normal cone, and angle are provided. Two examples are provided to show that BLPW-DIL-restricted regularity and DIL-restricted regularity are different. The authors also study inexact alternating projections for two possibly nonconvex sets. Under the assumptions of DIL-restricted regularity and of uniform regularity together with the super-regularity of one set, respectively, they establish R-linear convergence rates. [Link] [arXiv]

Local Linear Convergence of the ADMM/Douglas-Rachford Algorithms without Strong Convexity and Application to Statistical Imaging
T. Aspelmeier, C. Charitha and D. R. Luke, SIAM J. Imaging Sci., 9(2):842-868 (2016).
Abstract: We consider the problem of minimizing the sum of a convex function and a convex function composed with an injective linear mapping. For such problems, subject to a coercivity condition at fixed points of the corresponding Picard iteration, iterates of the alternating directions method of multipliers converge locally linearly to points from which the solution to the original problem can be computed. Our proof strategy uses duality and strong metric subregularity of the Douglas-Rachford fixed point mapping. Our analysis does not require strong convexity and yields error bounds to the set of model solutions. We show in particular that convex piecewise linear-quadratic functions naturally satisfy the requirements of the theory, guaranteeing eventual linear convergence of both the Douglas--Rachford algorithm and the alternating directions method of multipliers for this class of objectives under mild assumptions on the set of fixed points. We demonstrate this result on quantitative image deconvolution and denoising with multiresolution statistical constraints. [Link] [Preprint]

Proximal Heterogeneous Block Implicit-Explicit Method and Application to Blind Ptychographic Diffraction Imaging
R. Hesse, D. R. Luke, S. Sabach and M. K. Tam, SIAM J. Imaging Sci., 8(1):426-457 (2015).
Abstract: We propose a general alternating minimization algorithm for nonconvex optimization problems with separable structure and nonconvex coupling between blocks of variables. To fix our ideas, we apply the methodology to the problem of blind ptychographic imaging. Compared to other schemes in the literature, our approach differs in two ways: (i) it is posed within a clear mathematical framework with practically verifiable assumptions, and (ii) under the given assumptions, it is provably convergent to critical points.
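The alternating-minimization idea behind such schemes can be illustrated with a generic PALM-style proximal sketch on a scalar toy problem (the coupling H, the quadratic regularizers, and all constants below are illustrative assumptions, not the blind-ptychography model of the paper):

```python
import numpy as np

def prox_sq(v, t, lam):
    """Prox of f(x) = (lam/2) x^2 with stepsize t (closed form)."""
    return v / (1.0 + t * lam)

def alternating_min(x, y, lam=0.1, iters=5000):
    """PALM-style proximal alternating minimization (sketch) for
    min (lam/2)x^2 + (lam/2)y^2 + H(x, y) with nonconvex coupling
    H(x, y) = 0.5*(x*y - 1)^2.  Each block takes a prox-gradient step
    with a stepsize tied to that block's Lipschitz constant."""
    for _ in range(iters):
        # x-block: grad of H in x is y*(x*y - 1), Lipschitz constant y^2
        t = 1.0 / (y * y + 1.0)
        x = prox_sq(x - t * y * (x * y - 1.0), t, lam)
        # y-block: grad of H in y is x*(x*y - 1), Lipschitz constant x^2
        t = 1.0 / (x * x + 1.0)
        y = prox_sq(y - t * x * (x * y - 1.0), t, lam)
    return x, y

x, y = alternating_min(1.0, 1.0)
```

The monotone decrease of the objective under such block stepsizes is what drives the convergence-to-critical-points guarantees discussed in the abstract.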
A numerical comparison of our proposed algorithm with the current state of the art on simulated and experimental data validates our approach and points toward directions for further improvement. [Link] [arXiv]

Quantitative characterizations of regularity properties of collections of sets
A. Y. Kruger and N. H. Thao, J. Optim. Theory Appl., 164 (2015), no. 1, 41-67.
Abstract: Several primal and dual quantitative characterizations of regularity properties of collections of sets in normed linear spaces are discussed. Relationships between regularity properties of collections of sets and those of set-valued mappings are provided. [Link]

An Induction Theorem and Nonlinear Regularity Models
P. Q. Khanh, A. Y. Kruger and N. H. Thao, SIAM J. Optim. 25 (2015), no. 4, 2561–2588.
Abstract: A general nonlinear regularity model for a set-valued mapping $F:X\times\mathbb{R}_+\rightrightarrows Y$, where $X$ and $Y$ are metric spaces, is studied using special iteration procedures, going back to Banach, Schauder, Lyusternik, and Graves. Namely, we revise the induction theorem from Khanh [J. Math. Anal. Appl., 118 (1986), pp. 519--534] and employ it to obtain basic estimates for exploring regularity/openness properties. We also show that it can serve as a substitute for the Ekeland variational principle when establishing other regularity criteria. Then, we apply the induction theorem and the mentioned estimates to establish criteria for both global and local versions of regularity/openness properties for our model and demonstrate how the definitions and criteria translate into the conventional setting of a set-valued mapping $F:X\rightrightarrows Y$. An application to second-order necessary optimality conditions for a nonsmooth set-valued optimization problem with mixed constraints is provided. [Link]

Duality and Convex Programming
J. M. Borwein and D. R. Luke, pp. 229--270 in Handbook of Mathematical Methods in Imaging, 2nd Edition, Otmar Scherzer (Ed.), Springer Verlag (2015).
Abstract: This chapter surveys key concepts in convex duality theory and their application to the analysis and numerical solution of problem archetypes in imaging. [Link] [Preprint]

Activity Identification and Local Linear Convergence of Douglas-Rachford/ADMM under Partial Smoothness
J. Liang, J. Fadili, G. Peyre and D. R. Luke, Conference on Scale Space and Variational Methods in Computer Vision (2015).
Abstract: Convex optimization has become ubiquitous in most quantitative disciplines of science, including variational image processing. Proximal splitting algorithms are becoming popular to solve such structured convex optimization problems. Within this class of algorithms, Douglas-Rachford (DR) and ADMM are designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local convergence behaviour of DR (resp. ADMM) when the involved functions (resp. their Legendre-Fenchel conjugates) are moreover partly smooth. More precisely, when both of the two functions (resp. their conjugates) are partly smooth relative to their respective manifolds, we show that DR (resp. ADMM) identifies these manifolds in finite time. Moreover, when these manifolds are affine or linear, we prove that DR/ADMM is locally linearly convergent with a rate in terms of the cosine of the Friedrichs angle between the tangent spaces of the identified manifolds. This is illustrated by several concrete examples and supported by numerical experiments. [arXiv Preprint]

Alternating Projections and Douglas-Rachford for Sparse Affine Feasibility
R. Hesse, D. R. Luke and P. Neumann, IEEE Transactions on Signal Processing, 62(18):4868-4881 (2014).
Abstract: The problem of finding a vector with the fewest nonzero elements that satisfies an underdetermined system of linear equations is an NP-complete problem that is typically solved numerically via convex heuristics or nicely-behaved nonconvex relaxations. In this work we consider elementary methods based on projections for solving a sparse feasibility problem without employing convex heuristics. It has been shown recently that, locally, the fundamental method of alternating projections must converge linearly to a solution of the sparse feasibility problem with an affine constraint. In this paper we apply different analytical tools that allow us to show global linear convergence of alternating projections under familiar constraint qualifications. These analytical tools can also be applied to other algorithms. This is demonstrated with the prominent Douglas-Rachford algorithm, where we establish local linear convergence of this method applied to the sparse affine feasibility problem. [Link] [Preprint]

Restricted normal cones and sparsity optimization with affine constraints
H. H. Bauschke, D. R. Luke, H. M. Phan and X. Wang, Foundations of Computational Mathematics 14(1):63--83 (2014).
Abstract: The problem of finding a vector with the fewest nonzero elements that satisfies an underdetermined system of linear equations is an NP-complete problem that is typically solved numerically via convex heuristics or nicely-behaved nonconvex relaxations. In this paper we consider the elementary method of alternating projections (MAP) for solving the sparsity optimization problem without employing convex heuristics. In a parallel paper we recently introduced the restricted normal cone, which generalizes the classical Mordukhovich normal cone and reconciles some fundamental gaps in the theory of sufficient conditions for local linear convergence of the MAP algorithm.
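Both projectors for the sparse affine feasibility problem treated in the two entries above have simple closed forms, so a minimal alternating-projections sketch is easy to write down (the system A, b and the sparsity level below are illustrative assumptions; recovery of the sparsest solution is only guaranteed under the constraint qualifications discussed in the papers):

```python
import numpy as np

def proj_sparse(x, s):
    """Projection onto the (nonconvex) set of s-sparse vectors:
    keep the s largest-magnitude entries, zero out the rest."""
    y = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    y[keep] = x[keep]
    return y

def proj_affine(x, A, b, A_pinv):
    """Projection onto the affine set {x : Ax = b} (A full row rank)."""
    return x - A_pinv @ (A @ x - b)

# Illustrative underdetermined system with a 1-sparse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_true = np.zeros(6)
x_true[2] = 1.0
b = A @ x_true
A_pinv = np.linalg.pinv(A)

x = A_pinv @ b  # least-norm feasible starting point
for _ in range(200):
    x = proj_affine(proj_sparse(x, 1), A, b, A_pinv)  # one MAP sweep
```

By construction the final iterate is exactly feasible for the affine constraint; whether its shadow on the sparsity set coincides with it is precisely the question the convergence theory addresses.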
We use the restricted normal cone together with the notion of superregularity, which is naturally satisfied for the affine sparse optimization problem, to obtain local linear convergence results with estimates for the radius of convergence of the MAP algorithm applied to sparsity optimization with an affine constraint. [Link] [arXiv] Reconstruction of wave front and object for inline holography from a set of detection planes J. Hagemann, A.-L. Robisch, D. R. Luke, C. Homann, T. Hohage, P. Cloetens, H. Suhonen and T. Salditt, Optics Express 22(2014): 11552-11569. Abstract: We illustrate the errors inherent in the conventional empty beam correction of full field X-ray propagation imaging, i.e. the division of intensities in the detection plane measured with an object in the beam by the intensity pattern measured without the object (the empty beam intensity pattern). The error of this conventional approximation is controlled by the ratio of the source size to the smallest feature in the object, as is shown by numerical simulation. In a second step, we investigate how to overcome the flawed empty beam division by simultaneous reconstruction of the probing wavefront (probe) and of the object, based on measurements in several detection planes (multi-projection approach). The algorithmic scheme is demonstrated numerically and experimentally, using the defocus wavefront of the hard X-ray nanoprobe setup at the European Synchrotron Radiation Facility. [Link] Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems R. Hesse, and D. R. Luke, SIAM J. Optim., 23(4):2397-2419, 2013. Abstract: We consider projection algorithms for solving (nonconvex) feasibility problems in Euclidean spaces. Of special interest are the method of alternating projections (AP) and the Douglas-Rachford algorithm (DR).
In the case of convex feasibility, firm nonexpansiveness of projection mappings is a global property that yields global convergence of AP and, for consistent problems, of DR. A notion of local subfirm nonexpansiveness with respect to the intersection is introduced for consistent feasibility problems. This, together with a coercivity condition that relates to the regularity of the collection of sets at points in the intersection, yields local linear convergence of AP for a wide class of nonconvex problems and even local linear convergence of nonconvex instances of the DR algorithm. [Link] [arXiv] Restricted normal cones and the method of alternating projections: applications H. H. Bauschke, D. R. Luke, H. M. Phan, and X. Wang, J. Set-Valued and Variational Analysis 21:475-501 (2013) Abstract: The method of alternating projections (MAP) is a common method for solving feasibility problems. While traditionally employed for subspaces or convex sets, little was known about the behavior of the MAP in the nonconvex case until 2009, when Lewis, Luke, and Malick derived local linear convergence results provided that a condition involving normal cones holds and at least one of the sets is superregular (a property less restrictive than convexity). However, their results failed to capture very simple classical convex instances such as two lines in a three-dimensional space. In this paper, we extend and develop the Lewis-Luke-Malick framework so that not only any two linear subspaces but also any two closed convex sets whose relative interiors meet are covered. We also allow for sets that are more structured such as unions of convex sets. The key tool required is the restricted normal cone, which is a generalization of the classical Mordukhovich normal cone. Numerous examples are provided to illustrate the theory. [Link] [Preprint] Restricted normal cones and the method of alternating projections: theory H. H. Bauschke, D. R. Luke, H. M. Phan, and X. Wang, J.
Set-Valued and Variational Analysis 21:431-473 (2013) Abstract: In this paper, we introduce and develop the theory of restricted normal cones which generalize the classical Mordukhovich normal cone. We thoroughly study these objects from the viewpoint of constraint qualifications and regularity. Numerous examples are provided to illustrate the theory. This work provides the theoretical underpinning for a subsequent article in which these tools are applied to obtain a convergence analysis of the method of alternating projections for nonconvex sets. [Link] [Preprint]
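Several of the papers above analyze the method of alternating projections (MAP) and the Douglas-Rachford (DR) iteration for feasibility problems. As a minimal hypothetical illustration (not an example taken from any of the papers), the two iterations for two sets A and B with projectors P_A and P_B can be sketched for two lines in the plane, whose intersection is the origin:

```python
import numpy as np

# Hypothetical convex feasibility problem in R^2:
# A = the diagonal line {(t, t)}, B = the horizontal axis {(t, 0)}.
# A and B intersect only at the origin.

def P_A(x):
    """Orthogonal projection onto the line y = x."""
    t = (x[0] + x[1]) / 2.0
    return np.array([t, t])

def P_B(x):
    """Orthogonal projection onto the line y = 0."""
    return np.array([x[0], 0.0])

def alternating_projections(x, iters=60):
    """MAP: x_{k+1} = P_A(P_B(x_k))."""
    for _ in range(iters):
        x = P_A(P_B(x))
    return x

def douglas_rachford(x, iters=60):
    """DR (averaged reflections): x_{k+1} = x_k + P_A(2 P_B(x_k) - x_k) - P_B(x_k).
    The 'shadow' sequence P_B(x_k) converges to a point of the intersection."""
    for _ in range(iters):
        pb = P_B(x)
        x = x + P_A(2.0 * pb - x) - pb
    return P_B(x)

x0 = np.array([3.0, 1.0])
print(alternating_projections(x0))  # close to the intersection point (0, 0)
print(douglas_rachford(x0))         # close to the intersection point (0, 0)
```

For this convex example both iterations converge linearly, with a rate governed by the angle between the two lines; the nonconvex and inconsistent regimes studied in the papers above are precisely where such global guarantees break down.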

Institut für Numerische und
Angewandte Mathematik
University of Göttingen
Lotzestr. 16-18
37083 Göttingen, Germany

Directions