Abstract
For swarm systems, distributed processing is of paramount importance, and Bayesian methods are preferred for their robustness. Existing distributed sparse Bayesian learning (SBL) methods either rely on automatic relevance determination (ARD), which involves a computationally complex reweighted l1-norm optimization, or use loopy belief propagation, which is not guaranteed to converge. Hence, this paper builds on the fast marginal likelihood maximization (FMLM) method to develop a faster distributed SBL version. The proposed method has a low communication overhead and can be distributed by simple consensus methods. The performed simulations indicate better performance than the distributed ARD version and the same performance as the FMLM.
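The "simple consensus methods" mentioned in the abstract typically refers to iterative averaging of local quantities over the network graph. A minimal sketch of Laplacian-based average consensus is shown below; the function name, step size, and example graph are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def average_consensus(values, adjacency, step=0.2, iters=100):
    """Iteratively average local node values over a network graph.

    Each node repeatedly mixes its value with its neighbours' values;
    on a connected graph every entry converges to the global mean.
    The step size is an illustrative choice and must satisfy
    step < 1 / lambda_max(L) for convergence.
    """
    x = np.asarray(values, dtype=float).copy()
    A = np.asarray(adjacency, dtype=float)
    # Graph Laplacian L = D - A, with D the diagonal degree matrix
    L = np.diag(A.sum(axis=1)) - A
    for _ in range(iters):
        # Consensus update: each node moves toward its neighbours
        x = x - step * (L @ x)
    return x

# Example: a path graph of 4 nodes, each holding a local measurement
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
local = [1.0, 2.0, 3.0, 6.0]
result = average_consensus(local, A)
# all entries approach the global mean of the local values (3.0)
```

In a distributed SBL setting, such a routine would let each agent compute network-wide averages of local statistics using only neighbour-to-neighbour communication, which is the source of the low communication overhead claimed in the abstract.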
Original language | English |
---|---|
Article number | 9264682 |
Pages (from-to) | 2119-2123 |
Number of pages | 5 |
Journal | IEEE Signal Processing Letters |
Volume | 27 |
DOIs | |
Publication status | Published - 2020 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository. 'You share, we take care!' – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses the Dutch legislation to make this work public.
Keywords
- Consensus Algorithms
- Distributed Optimization
- Sparse Bayesian Learning