Adaptive Smoothing with Restarting for Stochastic Convex Optimization

Research output: Working paper/Preprint


Abstract

First-order methods are the most popular methods for solving nonsmooth convex optimization problems. A standard approach in the literature is to smooth the objective function up to an a priori precision and then deploy an accelerated first-order algorithm. Recently, there has been resurgent interest in using a restarting framework with first-order algorithms. In this paper, we propose a novel adaptive smoothing rule with restarting that enjoys provable convergence guarantees for both deterministic and stochastic nonsmooth convex minimization. We further extend the result to the quasi-Newton method and apply it within the restarting framework, with provable local convergence for certain classes of functions. Numerically, we benchmark the performance of our proposed algorithms against the state-of-the-art literature across different classes of nonsmooth convex minimization problems, including regression with the ℓ1-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), and nuclear norm minimization.
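To make the smoothing-plus-restarting idea concrete, the sketch below applies it to the Lasso problem mentioned in the abstract. It is only an illustrative assumption of how such a scheme can look, not the paper's adaptive rule: the ℓ1 term is Huber-smoothed, an accelerated (FISTA-style) gradient method runs on the smoothed objective with a function-value momentum restart, and the smoothing parameter is simply halved at every restart. The schedule, the function names, and all parameters are hypothetical choices for illustration.

```python
import numpy as np

def huber_grad(x, mu):
    # Gradient of the Huber smoothing of |x| with parameter mu:
    # x/mu where |x| <= mu, sign(x) otherwise.
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_obj(x, A, b, lam, mu):
    # Huber-smoothed Lasso objective (used only for the restart test).
    r = A @ x - b
    ax = np.abs(x)
    huber = np.where(ax <= mu, x**2 / (2 * mu), ax - mu / 2)
    return 0.5 * r @ r + lam * np.sum(huber)

def smoothed_accel_with_restart(A, b, lam, mu0=1.0, n_restarts=8, inner_iters=200):
    """Accelerated gradient on a Huber-smoothed Lasso objective.
    Illustrative schedule: the smoothing parameter mu is halved at each restart."""
    n = A.shape[1]
    x = np.zeros(n)
    mu = mu0
    for _ in range(n_restarts):
        # Lipschitz constant of the smoothed gradient: ||A||_2^2 + lam/mu.
        L = np.linalg.norm(A, 2) ** 2 + lam / mu
        y, t = x.copy(), 1.0
        prev_val = np.inf
        for _ in range(inner_iters):
            grad = A.T @ (A @ y - b) + lam * huber_grad(y, mu)
            x_new = y - grad / L
            val = smoothed_obj(x_new, A, b, lam, mu)
            if val > prev_val:
                # Function-value restart: reset the momentum.
                y, t = x_new.copy(), 1.0
            else:
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)
                t = t_new
            x, prev_val = x_new, val
        mu *= 0.5  # tighten the smoothing before the next restart
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 120))
    x_true = np.zeros(120)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    x_hat = smoothed_accel_with_restart(A, b, lam=0.1)
    print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```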
Original language: English
Publication status: Submitted - 2025

