EdgeNets: Edge Varying Graph Neural Networks

Elvin Isufi, Fernando Gama, Alejandro Ribeiro

Research output: Contribution to journal › Article › Scientific › peer-review



Driven by the outstanding performance of neural networks in the structured Euclidean domain, recent years have seen an interest in developing neural networks for graphs and data supported on graphs. The graph is leveraged at each layer of the neural network as a parameterization to capture detail at the node level with a reduced number of parameters and computational complexity. Following this rationale, this paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet. An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors. By extrapolating this strategy to more iterations between neighboring nodes, the EdgeNet learns edge- and neighbor-dependent weights to capture local detail. This is a general linear and local operation that a node can perform and encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs). In writing different GNN architectures with a common language, EdgeNets highlight specific architecture advantages and limitations, while providing guidelines to improve their capacity without compromising their local implementation. For instance, we show that GCNNs have a parameter sharing structure that induces permutation equivariance.
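The core idea of the abstract — that an EdgeNet lets every node apply a *different* learnable weight to each of its neighbors, accumulated over several local exchanges — can be illustrated with a short sketch. The code below is a minimal NumPy illustration written for this summary, not the authors' implementation: the function name `edge_varying_layer` and the `taps` parameterization are assumptions, and the layer simply masks each learnable matrix to the graph's support (edges plus self-loops) before mixing neighbor features.

```python
import numpy as np

def edge_varying_layer(x, adjacency, taps, sigma=np.tanh):
    """Sketch of one edge-varying GNN layer (illustrative, not the paper's code).

    x         : (N, F) matrix of node features.
    adjacency : (N, N) binary adjacency matrix of the graph.
    taps      : list of K learnable (N, N) matrices; each is masked to the
                graph's sparsity pattern (plus self-loops), so every node
                weighs each of its neighbors with its own parameter.
    """
    n = adjacency.shape[0]
    support = (adjacency + np.eye(n)) > 0        # local support: edges + self-loops
    z = x.copy()                                 # k = 0 term (node's own features)
    h = x
    for phi_k in taps:                           # K local exchanges with neighbors
        h = np.where(support, phi_k, 0.0) @ h    # edge-dependent neighbor mixing
        z = z + h                                # accumulate the k-hop contribution
    return sigma(z)                              # pointwise nonlinearity
```

Tying the parameters recovers the special cases the abstract mentions: constraining every tap to a scalar multiple of a fixed shift matrix, `phi_k = w_k * S`, yields a graph convolutional layer (one shared scalar per hop — the parameter-sharing structure that induces permutation equivariance), while computing the masked entries of `phi_k` from node features yields attention-style weights.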
Original language: English
Article number: 9536420
Number of pages: 18
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication status: E-pub ahead of print - 2021


Keywords

  • Edge varying
  • Graph neural networks
  • Graph signal processing
  • Graph filters
  • Learning on graphs

