Convergence of Expectation-Maximization Algorithm with Mixed-Integer Optimization

Research output: Contribution to journal › Article › Scientific › peer-review


The convergence of expectation-maximization (EM)-based algorithms typically requires continuity of the likelihood function with respect to all the unknown parameters (optimization variables). The requirement is not met when parameters comprise both discrete and continuous variables, making the convergence analysis nontrivial. This paper introduces a set of conditions that ensure the convergence of a specific class of EM algorithms that estimate a mixture of discrete and continuous parameters. Our results offer a new analysis technique for iterative algorithms that solve mixed-integer non-linear optimization problems. As a concrete example, we prove the convergence of an existing EM-based sparse Bayesian learning algorithm that estimates the state of a linear dynamical system with jointly sparse inputs and bursty missing observations. Our results establish that the algorithm converges to the set of stationary points of the maximum likelihood cost with respect to the continuous optimization variables.
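The abstract describes alternating between discrete and continuous optimization variables. As a rough illustration of that mixed-integer structure (this is a generic alternating sketch on a toy sparse-recovery problem, not the paper's EM-based sparse Bayesian learning algorithm; all names, dimensions, and the update rule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed-integer estimation: y = H x + w with x k-sparse.
# The support of x plays the role of the discrete variable; the
# amplitudes on that support are the continuous variables.
n, m, k = 20, 40, 3
H = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[[3, 17, 31]] = [1.5, -2.0, 1.0]
y = H @ x_true + 0.01 * rng.standard_normal(n)

def ls_on_support(support):
    """Continuous step: least-squares amplitudes on a fixed support."""
    amp = np.linalg.lstsq(H[:, support], y, rcond=None)[0]
    x = np.zeros(m)
    x[support] = amp
    return x, np.linalg.norm(y - H @ x)

# Alternate a discrete support update (hard thresholding of a gradient
# step) with the continuous least-squares refit, keeping the best iterate.
step = 1.0 / np.linalg.norm(H, 2) ** 2
x_hat = np.zeros(m)
best_x, best_r = x_hat, np.linalg.norm(y)
for _ in range(50):
    z = x_hat + step * (H.T @ (y - H @ x_hat))  # gradient step
    support = np.argsort(np.abs(z))[-k:]        # discrete update
    x_hat, r = ls_on_support(support)           # continuous update
    if r < best_r:
        best_x, best_r = x_hat, r

print(best_r)  # fit error of the best mixed-integer iterate
```

The continuity assumption the paper discusses fails exactly at the discrete update: the cost is continuous in the amplitudes for a fixed support, but jumps when the support changes, which is why convergence guarantees here are stated with respect to the continuous variables.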
Original language: English
Pages (from-to): 1229-1233
Number of pages: 5
Journal: IEEE Signal Processing Letters
Publication status: Published - 2024

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author relies on Dutch legislation to make this work public.


Keywords

  • Discrete non-linear optimization
  • Global convergence theorem
  • Sparse Bayesian learning
  • Bursty missing data


