Consensus Based Distributed Sparse Bayesian Learning By Fast Marginal Likelihood Maximization

Christoph Manss, Dmitriy Shutin, Geert Leus

Research output: Contribution to journal › Article › Scientific › peer-review

3 Citations (Scopus)
38 Downloads (Pure)

Abstract

For swarm systems, distributed processing is of paramount importance, and Bayesian methods are preferred for their robustness. Existing distributed sparse Bayesian learning (SBL) methods rely on automatic relevance determination (ARD), which involves a computationally complex reweighted ℓ1-norm optimization, or they use loopy belief propagation, which is not guaranteed to converge. Hence, this paper builds on the fast marginal likelihood maximization (FMLM) method to develop a faster distributed SBL version. The proposed method has a low communication overhead and can be distributed by simple consensus methods. Simulations indicate better performance than the distributed ARD version and the same performance as the FMLM.
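To make the two ingredients named in the abstract concrete, below is a minimal sketch (not the authors' implementation) of average consensus and of the FMLM relevance update of Tipping and Faul (2003). The choice of exchanged quantities here, the per-node statistics Phi_k^T Phi_k and Phi_k^T y_k, is an illustrative assumption; the paper's protocol may exchange different quantities.

```python
# Minimal sketch, assuming nodes agree on network-wide sufficient
# statistics via average consensus and then run FMLM locally.
# The exchanged statistics (Phi_k^T Phi_k, Phi_k^T y_k) are assumed
# for illustration only, not taken from the paper.
import numpy as np

def average_consensus(local_values, W, iters=200):
    """Average consensus: each node repeatedly replaces its value with
    a weighted mix of its neighbours' values. For a doubly stochastic
    W on a connected graph, every row converges to the network mean."""
    x = np.stack([np.asarray(v, dtype=float) for v in local_values])
    for _ in range(iters):
        x = np.tensordot(W, x, axes=1)   # one mixing round
    return x[0]                          # all rows are ~ equal by now

def fmlm_alpha(s, q):
    """FMLM re-estimation for a single basis function: keep the basis
    with precision alpha = s^2 / (q^2 - s) if its quality factor
    satisfies q^2 > s; otherwise prune it (alpha -> infinity)."""
    return s**2 / (q**2 - s) if q**2 > s else np.inf

# Example: three fully connected nodes agree on the mean of their
# local Phi_k^T y_k vectors using a doubly stochastic mixing matrix.
rng = np.random.default_rng(0)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
local_b = [rng.standard_normal(4) for _ in range(3)]
b = average_consensus(local_b, W)        # ~ np.mean(local_b, axis=0)
```

Once the averaged statistics are available, each node can run the standard FMLM add/re-estimate/prune loop locally. This is what would keep the communication overhead low in such a scheme: only fixed-size statistics are exchanged, never the raw measurements.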

Original language: English
Article number: 9264682
Pages (from-to): 2119-2123
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 27
DOIs
Publication status: Published - 2020

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care

Unless otherwise indicated in the copyright section, the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.

Keywords

  • Consensus Algorithms
  • Distributed Optimization
  • Sparse Bayesian Learning
