Total of 20 results

General Convergence Rates Follow From Specialized Rates Assuming Growth Bounds
May 15 2019
Often in the analysis of first-order methods, assuming the existence of a quadratic growth bound (a generalization of strong convexity) facilitates much stronger convergence analysis. Hence the analysis is done twice, once for the general case and once ...

Stochastic model-based minimization under high-order growth
Jul 01 2018
Given a nonsmooth, nonconvex minimization problem, we consider algorithms that iteratively sample and minimize stochastic convex models of the objective function. Assuming that the one-sided approximation quality and the variation of the models is controlled ...

A geometric integration approach to smooth optimisation: Foundations of the discrete gradient method
May 16 2018 (revised Nov 07 2018)
Discrete gradient methods are geometric integration techniques that can preserve the dissipative structure of gradient flows. Due to the monotonic decay of the function values, they are well suited for general convex and nonconvex optimisation problems. ...

Stochastic subgradient method converges on tame functions
Apr 20 2018 (revised May 26 2018)
This work considers the question: what convergence guarantees does the stochastic subgradient method have in the absence of smoothness and convexity? We prove that the stochastic subgradient method, on any semialgebraic locally Lipschitz function, produces ...

Stochastic model-based minimization of weakly convex functions
Mar 17 2018 (revised Aug 26 2018)
We consider a family of algorithms that successively sample and minimize simple stochastic models of the objective function. We show that under reasonable conditions on approximation quality and regularity of the models, any such algorithm drives a natural ...

Subgradient methods for sharp weakly convex functions
Mar 06 2018
Subgradient methods converge linearly on a convex function that grows sharply away from its solution set. In this work, we show that the same is true for sharp functions that are only weakly convex, provided that the subgradient methods are initialized ...
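As a minimal illustration of the sharp-growth phenomenon in the abstract above (a toy of our own, not the paper's exact scheme): a Polyak-step subgradient method, assuming the minimal value f_min is known, reaches the solution set of the sharp convex function f(x) = ||x||_1 in a handful of steps.

```python
import numpy as np

def f(x):
    # Sharp convex toy objective: f grows at least linearly away from its minimizer 0.
    return np.sum(np.abs(x))

def subgrad(x):
    # A subgradient of the l1-norm (sign pattern; 0 on zero coordinates is valid).
    return np.sign(x)

def polyak_subgradient(x0, f_min=0.0, iters=50, tol=1e-12):
    # Polyak step: alpha_k = (f(x_k) - f_min) / ||g_k||^2, assuming f_min is known.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        gap = f(x) - f_min
        if gap <= tol:
            break
        g = subgrad(x)
        x = x - (gap / np.dot(g, g)) * g
    return x

x = polyak_subgradient([3.0, -1.0])
```

On this instance the iterates reach the minimizer exactly after two steps; the sharpness constant controls the linear rate in general.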

Concise Complexity Analyses for Trust-Region Methods
Feb 21 2018
Concise complexity analyses are presented for simple trust-region algorithms for solving unconstrained optimization problems. In contrast to a traditional trust-region algorithm, the algorithms considered in this paper require certain control over the ...
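For orientation, a basic trust-region loop with a Cauchy-point subproblem solve (an illustrative textbook variant sketched here, not the specific algorithms analyzed in the paper; the test problem and constants are assumptions) looks like:

```python
import numpy as np

def trust_region_cauchy(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                        eta=0.1, iters=100, tol=1e-8):
    # Basic trust-region loop using the Cauchy point: the minimizer of the
    # quadratic model along the steepest-descent direction, clipped to the radius.
    x, delta = np.asarray(x0, float), delta0
    for _ in range(iters):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        gBg = g @ B @ g
        # Cauchy step length along -g, capped by the trust-region radius.
        t = delta / gnorm if gBg <= 0 else min(gnorm**2 / gBg, delta / gnorm)
        p = -t * g
        pred = -(g @ p + 0.5 * p @ B @ p)    # predicted model decrease (> 0)
        rho = (f(x) - f(x + p)) / pred       # actual-to-predicted reduction ratio
        if rho < 0.25:
            delta *= 0.25                    # shrink radius on poor agreement
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_max)  # expand on boundary steps
        if rho > eta:
            x = x + p                        # accept the step
    return x

# Convex quadratic test problem with known minimizer (1, -2).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
xstar = np.array([1.0, -2.0])
f = lambda x: 0.5 * (x - xstar) @ A @ (x - xstar)
grad = lambda x: A @ (x - xstar)
hess = lambda x: A
x = trust_region_cauchy(f, grad, hess, np.zeros(2))
```

Since the model here is the exact quadratic, every step is accepted and the iterates converge to the minimizer; the complexity analyses in the paper bound how many such iterations are needed in the worst case.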

Complexity of finding near-stationary points of convex functions stochastically
Feb 21 2018
In a recent paper, we showed that the stochastic subgradient method applied to a weakly convex problem drives the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. In this supplementary note, we present a stochastic subgradient method ...

Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions
Feb 08 2018 (revised Feb 19 2018)
We prove that the proximal stochastic subgradient method, applied to a weakly convex problem, drives the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. As a consequence, we resolve an open question on the convergence rate of the proximal ...
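The iteration analyzed in this abstract is simple to state. Here is a hedged sketch of a projected stochastic subgradient method on a toy one-dimensional convex instance; the instance, step sizes, and projection set are illustrative assumptions of ours, while the paper's guarantee concerns the Moreau envelope of weakly convex objectives.

```python
import numpy as np

def projected_stochastic_subgradient(oracle, x0, radius=10.0, iters=50_000):
    # x_{k+1} = proj_X(x_k - alpha_k * g_k), with X an interval and
    # alpha_k ~ 1/sqrt(k+1); g_k is a stochastic subgradient at x_k.
    x = float(x0)
    avg, n = 0.0, 0
    for k in range(iters):
        g = oracle(x)
        x = x - g / np.sqrt(k + 1.0)
        x = np.clip(x, -radius, radius)      # projection onto [-radius, radius]
        if k >= iters // 2:                  # average the tail iterates
            avg, n = avg + x, n + 1
    return avg / n

# Toy instance: minimize E_a |a (x - 1.5)| over x, with a ~ Uniform(0.5, 1.5);
# the minimizer is x = 1.5.
rng = np.random.default_rng(0)
def oracle(x):
    a = rng.uniform(0.5, 1.5)
    return a * np.sign(a * (x - 1.5))        # stochastic subgradient sample

x = projected_stochastic_subgradient(oracle, x0=0.0)
```

The tail-averaged iterate lands close to the minimizer; the paper's contribution is quantifying near-stationarity for such methods when the objective is only weakly convex.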

Modern Regularization Methods for Inverse Problems
Jan 30 2018
Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and make the approximation of ill-posed (pseudo-)inverses feasible. In the last two decades interest has shifted from linear towards ...
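For contrast with the modern nonlinear methods this survey covers, the classical linear (Tikhonov) regularization baseline can be sketched in a few lines; the example matrix and regularization weight below are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, lam):
    # Tikhonov-regularized least squares: argmin_x ||Ax - b||^2 + lam * ||x||^2,
    # solved via the regularized normal equations (A^T A + lam * I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned forward operator; b is consistent with x = (1, 1).
# A small lam stabilizes the solve while barely perturbing the solution.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x_reg = tikhonov(A, b, lam=1e-6)
```

The regularization parameter trades data fidelity against the prior ||x||^2; choosing it is itself a central topic in the inverse-problems literature.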

Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity
Dec 12 2017 (revised Feb 26 2018)
We extend the classic convergence rate theory for subgradient methods to apply to non-Lipschitz functions. For the deterministic projected subgradient method, we present a global $O(1/\sqrt{T})$ convergence rate for any convex function which is locally ...

Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Jul 12 2017 (revised Sep 18 2018)
In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions, a wide class which includes the additive and convex composite classes. At a high level, ...

A Numerical Approach to Optimal Coherent Quantum LQG Controller Design Using Gradient Descent
Sep 24 2016
This paper is concerned with coherent quantum linear quadratic Gaussian (CQLQG) control. The problem is to find a stabilizing measurement-free quantum controller for a quantum plant so as to minimize a mean square cost for the fully quantum closed-loop ...

Energy-conserving time-discretisation of abstract dynamic problems with applications in continuum mechanics of solids
May 31 2016
An abstract second-order evolution equation or inclusion is discretised in time in such a way that the energy is conserved, at least in qualified cases, typically when the governing energy is component-wise quadratic or "slightly-perturbed" quadratic. ...

Stress-driven solution to rate-independent elasto-plasticity with damage at small strains and its computer implementation
Jun 03 2015
Quasistatic rate-independent damage combined with linearized plasticity with hardening at small strains is investigated. A fractional-step time discretisation is devised with the purpose of obtaining a numerically efficient scheme converging possibly ...

Perfect plasticity with damage and healing at small strains, its modelling, analysis, and computer implementation
May 05 2015
Quasistatic Prandtl-Reuss perfect plasticity at small strains is combined with a gradient, reversible (i.e. admitting healing) damage which influences both the elastic moduli and the yield stress. Existence of weak solutions of the resulting system ...

A Gradient Descent Approach to Optimal Coherent Quantum LQG Controller Design
Feb 01 2015
This paper is concerned with the Coherent Quantum Linear Quadratic Gaussian (CQLQG) control problem of finding a stabilizing measurement-free quantum controller for a quantum plant so as to minimize an infinite-horizon mean square performance index for ...

Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience
Aug 20 2014
In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust ...
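A non-adaptive baseline helps fix ideas: the classical first-order augmented Lagrangian iteration with a fixed penalty parameter, on a toy equality-constrained quadratic. The problem, constants, and inner gradient-descent solver are illustrative assumptions of ours, in contrast to the adaptive penalty updates the paper studies.

```python
import numpy as np

def augmented_lagrangian(x0, rho=10.0, outer=20, inner=200, lr=0.02):
    # Solve: min f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
    # Known solution: x* = (0.5, 0.5) with multiplier y* = -1.
    x = np.asarray(x0, float)
    y = 0.0                                    # Lagrange multiplier estimate
    for _ in range(outer):
        for _ in range(inner):                 # inner: gradient descent on L_A(., y)
            c = x.sum() - 1.0
            # grad of f + y*c + (rho/2)*c^2 w.r.t. x:
            grad = 2.0 * x + (y + rho * c) * np.ones(2)
            x = x - lr * grad
        y = y + rho * (x.sum() - 1.0)          # first-order multiplier update
    return x, y

x, y = augmented_lagrangian(np.zeros(2))
```

With a fixed rho the multiplier error contracts by roughly 1/(1 + rho) per outer iteration on this problem; adaptive schemes like those in the paper adjust rho to balance feasibility against optimality.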

Alternating Projections and Douglas-Rachford for Sparse Affine Feasibility
Jul 08 2013 (revised Mar 14 2014)
The problem of finding a vector with the fewest nonzero elements that satisfies an underdetermined system of linear equations is NP-hard and is typically solved numerically via convex heuristics or nicely behaved nonconvex relaxations. ...
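A hedged sketch of the alternating-projections heuristic mentioned in this abstract, alternating between the sparsity set and the affine set; the toy system below is an illustrative assumption of ours, and the paper additionally treats Douglas-Rachford and gives convergence guarantees.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude entries.
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    y[idx] = x[idx]
    return y

def alternating_projections(A, b, k, iters=100):
    # Alternate between the sparsity set and the affine set {x : Ax = b}.
    pinvA = np.linalg.pinv(A)                # equals A^T (A A^T)^{-1} for full row rank A
    x = pinvA @ b                            # start at the min-norm affine point
    for _ in range(iters):
        y = hard_threshold(x, k)             # project onto the sparsity constraint
        x = y + pinvA @ (b - A @ y)          # project back onto the affine set
    return x

# 1-sparse feasibility: x1 + 2*x2 = 2 has the 1-sparse solution (0, 1).
A = np.array([[1.0, 2.0]])
b = np.array([2.0])
x = alternating_projections(A, b, k=1)
```

On this instance each cycle contracts the off-support component by a fixed factor, so the iterates converge linearly to the sparse feasible point; in general, convergence of such nonconvex projection methods is local, which is what the paper makes precise.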