by
Alejandro Ribeiro and Fernando Gama
University of Pennsylvania

Neural Networks have achieved resounding success in a variety of learning tasks. Although sometimes overlooked, this success has been neither uniform across learning problems nor achieved by generic architectures. The most remarkable accomplishments involve the processing of time signals and images, and they have been attained by Convolutional Neural Networks (CNNs). This is because convolutions exploit the regular structure of Euclidean space, which enables learning in high-dimensional spaces.

In this tutorial we will develop the novel concept of Graph Neural Networks (GNNs), which aim to extend the success of CNNs to the processing of high-dimensional signals in non-Euclidean domains. They do so by leveraging the possibly irregular signal structures described by graphs. The following topics will be covered:

  1. Graph Convolutions and GNN Architectures. The key concept enabling the definition of GNNs is the graph convolutional filter introduced in the graph signal processing (GSP) literature. GNN architectures compose graph filters with pointwise nonlinearities (a minimal code sketch follows this list). Illustrative examples on authorship attribution and recommendation systems will be covered.
  2. Fundamental Properties of GNNs. Graph filters and GNNs are suitable architectures to process signals on graphs because of their permutation equivariance (see the numerical equivariance check below). GNNs tend to work better than graph filters because they are Lipschitz stable to deformations of the graph that describes their structure. This is a property that regular graph filters cannot have.
  3. Distributed Control of Multiagent Systems. An exciting application domain for GNNs is the distributed control of large-scale multiagent systems. Applications to the control of robot swarms and wireless communication networks will be covered.
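
To make item 1 concrete, here is a minimal sketch of a graph convolution and a single GNN layer in Python/NumPy. It assumes a generic graph shift operator `S` (e.g., an adjacency or Laplacian matrix); the filter taps `h` and the toy cycle graph are hypothetical values chosen for illustration, not material from the tutorial itself.

```python
import numpy as np

# A graph convolutional filter is a polynomial in the graph shift
# operator S (e.g., an adjacency or Laplacian matrix) applied to a
# graph signal x; the coefficients h[k] are the filter taps.
def graph_filter(S, x, h):
    """Compute y = sum_k h[k] * S^k @ x."""
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx              # advance to S^(k+1) x
    return y

# A GNN layer composes the graph filter with a pointwise nonlinearity.
def gnn_layer(S, x, h):
    return np.maximum(graph_filter(S, x, h), 0.0)  # ReLU

# Toy example: a 4-node cycle graph with hypothetical filter taps.
S = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
x = np.array([1., 0., 0., 0.])
h = [0.5, 0.3, 0.2]
print(gnn_layer(S, x, h))
```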

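Similarly, the permutation equivariance claimed in item 2 can be checked numerically. The sketch below, again a hypothetical Python/NumPy illustration rather than the tutorial's own code, verifies that relabeling the nodes of the graph and the signal through a permutation matrix P simply relabels the output.

```python
import numpy as np

def graph_filter(S, x, h):
    """y = sum_k h[k] * S^k @ x: a polynomial graph filter."""
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx
    return y

rng = np.random.default_rng(0)
n = 5
S = rng.random((n, n))
S = (S + S.T) / 2                     # symmetric shift operator
x = rng.random(n)
h = [0.4, 0.3, 0.2, 0.1]              # hypothetical filter taps

# A permutation matrix P relabels the nodes. Equivariance means that
# filtering the relabeled graph and signal yields the relabeled output:
# H(P S P^T)(P x) = P H(S) x. Since the ReLU acts pointwise, the same
# identity holds for the full GNN layer.
P = np.eye(n)[rng.permutation(n)]
relu = lambda v: np.maximum(v, 0.0)

y = relu(graph_filter(S, x, h))
y_perm = relu(graph_filter(P @ S @ P.T, P @ x, h))
print(np.allclose(y_perm, P @ y))     # True: node relabelings commute
```
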
Attendees of this tutorial will be prepared to tackle research on both the practice and the theory of GNNs. Coding examples will be provided throughout.
