Stochastic graph neural networks

Zhan Gao*, Elvin Isufi, Alejandro Ribeiro

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

4 Citations (Scopus)

Abstract

Graph neural networks (GNNs) model nonlinear representations in graph data, with applications in distributed agent coordination, control, and planning, among others. Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environmental or human factors or external attacks. In these situations, the GNN fails at its distributed task if the topological randomness is not accounted for. To overcome this issue, we put forth the stochastic graph neural network (SGNN) model: a GNN whose distributed graph convolution module accounts for random network changes. Since stochasticity brings in a new learning paradigm, we conduct a statistical analysis of the SGNN output variance to identify the conditions the learned filters should satisfy to achieve robust transference to perturbed scenarios, ultimately revealing the explicit impact of random link losses. We further develop a stochastic gradient descent (SGD) based learning process for the SGNN and derive conditions on the learning rate under which this learning process converges to a stationary point. Numerical results corroborate our theoretical findings and compare the robust transference of the SGNN with that of a conventional GNN that ignores graph perturbations during learning.
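To make the model described in the abstract concrete: the core of the SGNN is a distributed graph convolution in which every diffusion step uses an independently drawn realization of the graph, so random link losses enter both inference and training. The following is a minimal NumPy sketch of such a stochastic graph filter; the function names, the i.i.d. link-drop model with probability p_drop, and the toy graph are our assumptions for illustration, not the paper's exact implementation.

    import numpy as np

    def sample_shift(S, p_drop, rng):
        """Random realization of the shift operator S in which each
        undirected link is dropped independently with probability p_drop."""
        n = S.shape[0]
        mask = rng.random((n, n)) >= p_drop  # keep a link w.p. 1 - p_drop
        mask = np.triu(mask, 1)              # sample each undirected link once
        mask = mask + mask.T                 # mirror to keep S symmetric
        return S * mask

    def stochastic_graph_filter(x, S, h, p_drop, rng):
        """Graph convolution y = sum_k h[k] * S_k ... S_1 x, where every
        shift S_t is an independent random realization of S."""
        y = h[0] * x                         # k = 0 term: no diffusion
        z = x
        for hk in h[1:]:
            z = sample_shift(S, p_drop, rng) @ z  # one stochastic diffusion step
            y = y + hk * z
        return y

    # Example: a 3-tap stochastic filter on a small random graph.
    rng = np.random.default_rng(0)
    n = 20
    A = (rng.random((n, n)) < 0.3).astype(float)
    S = np.triu(A, 1); S = S + S.T           # symmetric adjacency as shift operator
    x = rng.standard_normal(n)
    y = stochastic_graph_filter(x, S, h=[1.0, 0.5, 0.25], p_drop=0.1, rng=rng)

Stacking such filters with pointwise nonlinearities yields an SGNN layer; during SGD-based training, a fresh graph realization is drawn at every step, which is the setting addressed by the paper's variance and convergence analyses.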

Original language: English
Article number: 9466444
Pages (from-to): 4428-4443
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Volume: 69
Publication status: Published - 2021

Keywords

  • Distributed learning
  • Graph filters
  • Graph neural networks
