Abstract
Neural networks excel at processing unstructured data but often fail to
generalise out-of-distribution, whereas classical algorithms guarantee
correctness but lack flexibility. We explore whether pretraining Graph Neural
Networks (GNNs) on classical algorithms can improve their performance on
molecular property prediction tasks from the Open Graph Benchmark: ogbg-molhiv
(HIV inhibition) and ogbg-molclintox (clinical toxicity). GNNs trained on 24
classical algorithms from the CLRS Algorithmic Reasoning Benchmark are used to
initialise and freeze selected layers of a second GNN for molecular prediction.
Compared to a randomly initialised baseline, the pretrained models achieve
consistent wins or ties, with the Segments Intersect algorithm pretraining
yielding a 6% absolute gain on ogbg-molhiv and Dijkstra pretraining achieving a
3% gain on ogbg-molclintox. These results demonstrate that embedding classical
algorithmic priors into GNNs provides useful inductive biases, boosting
performance on complex, real-world graph data.
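The transfer recipe described in the abstract can be sketched in a few lines: train a GNN processor on a CLRS algorithmic task, then copy its weights into the processor of a molecular-prediction GNN and freeze them, so only the encoder and readout are trained on the OGB task. The sketch below is illustrative only, assuming a generic sum-aggregation message-passing processor and hypothetical names (MessagePassingLayer, MoleculeGNN, transfer_and_freeze); it is not the paper's actual architecture or code.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Simple sum-aggregation message-passing layer (stand-in for the
    pretrained GNN processor; the paper's exact architecture may differ)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, h, edge_index):
        src, dst = edge_index  # edge list of shape [2, E]
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages
        return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))

class MoleculeGNN(nn.Module):
    """Encoder -> processor -> readout for graph-level property prediction."""
    def __init__(self, in_dim, hidden_dim, num_layers, num_tasks):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.processor = nn.ModuleList(
            [MessagePassingLayer(hidden_dim) for _ in range(num_layers)]
        )
        self.readout = nn.Linear(hidden_dim, num_tasks)

    def forward(self, x, edge_index, batch):
        h = self.encoder(x)
        for layer in self.processor:
            h = layer(h, edge_index)
        # mean-pool node states per graph, then predict graph-level targets
        num_graphs = int(batch.max()) + 1
        pooled = torch.zeros(num_graphs, h.size(-1)).index_add_(0, batch, h)
        counts = torch.zeros(num_graphs).index_add_(0, batch, torch.ones(len(batch)))
        return self.readout(pooled / counts.unsqueeze(-1))

def transfer_and_freeze(mol_gnn, pretrained_processor_state):
    """Initialise the molecular GNN's processor from a network pretrained on a
    classical algorithm (e.g. Dijkstra) and freeze those layers."""
    mol_gnn.processor.load_state_dict(pretrained_processor_state)
    for p in mol_gnn.processor.parameters():
        p.requires_grad = False
    return mol_gnn
```

Under this setup, only the encoder and readout receive gradient updates during fine-tuning on ogbg-molhiv or ogbg-molclintox, while the frozen processor carries the algorithmic prior.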
Authors (2)
Jason Wu
Petar Veličković
Submitted
October 24, 2025
Key Contributions
This paper demonstrates that pretraining Graph Neural Networks (GNNs) on classical algorithms (e.g., Segments Intersect, Dijkstra) can improve their performance on molecular property prediction tasks. By embedding algorithmic priors, GNNs gain useful inductive biases, leading to consistent wins or ties over randomly initialised baselines and better out-of-distribution generalisation.
Business Value
Accelerates the discovery of new drugs and materials by improving the accuracy and generalization of predictive models used in computational chemistry and related fields.