GraphShaper: Geometry-aware Alignment for Improving Transfer Learning in Text-Attributed Graphs

📄 Abstract

Graph foundation models represent a transformative paradigm for learning transferable representations across diverse graph domains. Recent methods leverage large language models to unify graph and text modalities into a shared representation space using contrastive learning. However, systematic evaluations reveal significant performance degradation at structural boundaries where distinct topological patterns converge, with accuracy losses exceeding 20 percentage points. This issue arises from a key limitation: current methods assume all graph structures can be encoded within a single Euclidean space. In reality, tree structures require hyperbolic geometry to preserve hierarchical branching, while cyclic patterns depend on spherical geometry for closure properties. At structural boundaries, nodes experience conflicting geometric constraints that uniform encoding spaces cannot resolve. This raises a crucial challenge: can alignment frameworks be designed to respect the intrinsic geometric diversity of graph structures? We introduce GraphShaper, a geometry-aware framework that enhances graph encoding through multi-geometric specialization. Our approach employs expert networks tailored to different geometric spaces, dynamically computing fusion weights to adaptively integrate geometric properties based on local structural characteristics. This adaptive fusion preserves structural integrity before alignment with text embeddings. Extensive experiments demonstrate that GraphShaper achieves 9.47% accuracy improvements on citation networks and 7.63% on social networks in zero-shot settings.
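The abstract describes a mixture-of-experts-style design: geometry-specific encoders whose outputs are fused per node with weights predicted from local structural characteristics, followed by contrastive alignment against text embeddings. The sketch below shows what such a pipeline might look like in PyTorch; it is not the authors' implementation. The names `GeometryExpertFusion` and `contrastive_align` are hypothetical, and plain MLPs stand in for true Euclidean, hyperbolic, and spherical expert encoders.

```python
# Hypothetical sketch of multi-geometric expert fusion; not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryExpertFusion(nn.Module):
    """Fuses per-node encodings from several geometry 'experts', with fusion
    weights predicted from local structural descriptors (e.g., degree, clustering)."""
    def __init__(self, in_dim: int, hid_dim: int, n_experts: int = 3, n_struct: int = 2):
        super().__init__()
        # Placeholder experts: one small MLP per geometry (Euclidean / hyperbolic / spherical).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))
            for _ in range(n_experts)
        )
        # Gating network: node features + structural descriptors -> expert weights.
        self.gate = nn.Linear(in_dim + n_struct, n_experts)

    def forward(self, x: torch.Tensor, struct_feats: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; struct_feats: [N, n_struct] local structure descriptors.
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)        # [N, E, hid_dim]
        weights = F.softmax(self.gate(torch.cat([x, struct_feats], -1)), -1)  # [N, E]
        return (weights.unsqueeze(-1) * expert_outs).sum(dim=1)               # [N, hid_dim]

def contrastive_align(graph_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    """InfoNCE-style loss aligning node embeddings with their paired text embeddings."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature
    labels = torch.arange(g.size(0), device=g.device)
    return F.cross_entropy(logits, labels)

# Toy usage: 32 nodes, 64-d features, 2 structural descriptors per node.
model = GeometryExpertFusion(in_dim=64, hid_dim=128)
node_emb = model(torch.randn(32, 64), torch.rand(32, 2))
loss = contrastive_align(node_emb, torch.randn(32, 128))
```

The key design point mirrored here is that fusion happens per node, driven by local structure, before any alignment with the text modality, rather than forcing all nodes through a single shared geometry.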
Authors (9)
Heng Zhang
Tianyi Zhang
Yuling Shi
Xiaodong Gu
Yaomin Shen
Haochen You
+3 more
Submitted
October 14, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper identifies a key limitation in current graph foundation models: the assumption of a single Euclidean encoding space, which degrades performance at structural boundaries where different geometries are needed (hyperbolic for tree-like hierarchies, spherical for cyclic patterns). GraphShaper addresses this with geometry-specific expert networks whose outputs are adaptively fused according to each node's local structure before alignment with text embeddings, yielding 9.47% and 7.63% zero-shot accuracy gains on citation and social networks, respectively.
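The claim that tree-like structure is better served by hyperbolic geometry can be made concrete with a small, illustrative calculation (not taken from the paper): in the Poincaré ball model, distances between points near the boundary grow far faster than their Euclidean gap, matching the exponential growth of nodes with depth in a tree.

```python
# Illustrative only: compare Euclidean vs. Poincaré-ball distance for two points
# near the boundary of the unit disk. The hyperbolic distance is much larger,
# which is what lets trees embed with low distortion in hyperbolic space.
import numpy as np

def poincare_dist(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points strictly inside the unit Poincaré ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

a, b = np.array([0.95, 0.0]), np.array([0.0, 0.95])
print("Euclidean:", np.linalg.norm(a - b))   # ~1.34
print("Hyperbolic:", poincare_dist(a, b))    # ~6.6
```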

Business Value

Enabling more robust and accurate transfer learning on diverse graph data (e.g., social networks, molecular structures) can unlock new insights and applications in various industries.