A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

Andrea Simonetto, Aryan Mokhtari, Alec Koppel, Geert Leus, Alejandro Ribeiro

Research output: Contribution to journal › Article › Scientific › peer-review

80 Citations (Scopus)


This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the sampling period. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O({h}^{2})$, and in some cases as $O({h}^{4})$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
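The prediction-correction idea described above can be illustrated with a minimal sketch. The toy objective $f(x,t) = \tfrac{1}{2}(x - \sin t)^2$, the sampling period, the step size, and all function names below are illustrative assumptions, not the paper's experimental setup: the prediction step follows the continuous-time optimality-condition dynamics $\dot{x} = -[\nabla_{xx} f]^{-1} \nabla_{tx} f$, and the correction step is a single gradient descent step on the newly sampled objective.

```python
import math

def track(h=0.1, steps=200, gamma=0.8, predict=True):
    """Track the minimizer x*(t) = sin(t) of the hypothetical toy
    objective f(x, t) = 0.5 * (x - sin(t))**2, sampled every h seconds.

    For this f: grad_x f = x - sin(t), hess_xx f = 1, grad_tx f = -cos(t),
    so the prediction step x += -h * hess^{-1} * grad_tx = h * cos(t).
    """
    x, errs = 0.0, []
    for k in range(steps):
        t = k * h
        if predict:
            # prediction: follow the iso-residual (optimality) dynamics
            x += h * math.cos(t)
        t_next = (k + 1) * h
        # correction: one gradient step on f(., t_next) with step size gamma
        x -= gamma * (x - math.sin(t_next))
        errs.append(abs(x - math.sin(t_next)))
    # report the worst tracking error after transients have died out
    return max(errs[steps // 2:])

err_pc = track(predict=True)   # prediction-correction
err_c = track(predict=False)   # correction-only baseline
print(err_pc, err_c)
```

Consistent with the bounds in the abstract, the correction-only error settles at $O(h)$ while the prediction-correction error settles at $O(h^2)$, so `err_pc` comes out roughly an order of magnitude smaller than `err_c` at `h = 0.1`.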

Original language: English
Article number: 7469393
Pages (from-to): 4576-4591
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Issue number: 17
Publication status: Published - 2016


  • non-stationary optimization
  • parametric programming
  • prediction-correction methods
  • time-varying optimization

