Large-scale graph machine learning: tradeoffs, guarantees and dynamics
Luana Ruiz, Assistant Professor, Department of Applied Mathematics and Statistics, Johns Hopkins University
The University of Texas at Austin
Abstract: Graph neural networks (GNNs) are successful at learning representations from most types of network data but suffer from limitations on large graphs, which do not have the Euclidean structure that time and image signals have in the limit. Yet, large graphs can often be identified as similar to one another in the sense that they share structural properties. We focus on graph families identified by a common graph limit -- the graphon. A graphon is a bounded symmetric kernel that can be interpreted both as the limit of a convergent graph sequence and as a generative model for random graphs. Graphs sampled from the same graphon almost surely share structural properties in the limit; therefore, we can expect that processing data on a collection of graphs associated with the same graphon yields similar results. In this talk, I formalize this intuition by analyzing the convergence of GNNs and of the related neural tangent kernels to their respective graphon limits. While convergence is not surprising, it has three interesting implications. First, it enables machine learning on large-scale graphs via transferability, at the cost of a tradeoff with GNN discriminability. Second, it guarantees that we can reach the optimal graphon neural network by solving empirical risk minimization on convergent sequences of graphs. Third, it allows a more efficient study of the learning dynamics of GNNs on large-scale graphs.
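To make the "graphon as a generative model" interpretation concrete, the following is a minimal sketch (not from the talk) of the standard sampling procedure: draw latent node positions uniformly on [0,1], then connect each pair of nodes independently with probability given by the graphon evaluated at their positions. The particular graphon W chosen here is an illustrative assumption.

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng=None):
    """Sample an n-node random graph from a graphon W: [0,1]^2 -> [0,1].

    Nodes receive latent positions u_i ~ Uniform[0,1]; edge (i, j) is
    drawn independently with probability W(u_i, u_j).
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)                   # latent node positions
    P = W(u[:, None], u[None, :])             # edge-probability matrix
    coins = rng.uniform(size=(n, n)) < P      # independent Bernoulli draws
    A = np.triu(coins, k=1)                   # upper triangle only, no self-loops
    return (A | A.T).astype(int)              # symmetrize into an adjacency matrix

# Illustrative graphon: edge probability decays with latent distance
W = lambda u, v: 0.8 * np.exp(-2.0 * np.abs(u - v))
A = sample_graph_from_graphon(W, n=500, rng=0)
```

Graphs sampled this way at increasing n form a sequence converging to W, which is the setting in which the transferability and convergence results of the talk apply.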
Bio: Luana Ruiz is an Assistant Professor with the Department of Applied Mathematics and Statistics at Johns Hopkins University. She received the Ph.D. degree in electrical engineering from the University of Pennsylvania in 2022, and the M.Eng. and B.Eng. double degree in electrical engineering from the École Supérieure d'Electricité, France, and the University of São Paulo, Brazil, in 2017. Luana's work focuses on large-scale graph information processing and graph neural network architectures. She was awarded an Eiffel Excellence scholarship from the French Ministry for Europe and Foreign Affairs between 2013 and 2015; named an iREDEFINE Fellow in 2019, an MIT EECS Rising Star in 2021, a Simons Research Fellow in 2022, and a METEOR Fellow in 2023; and received best student paper awards at the 27th and 29th European Signal Processing Conferences. She currently serves as a member of the IEEE Machine Learning for Signal Processing Technical Committee (MLSP TC).