Abstract
Text-attributed graphs are widely used across domains, offering rich opportunities for zero-shot learning via graph-text alignment. However, existing methods struggle with tasks requiring fine-grained pattern recognition, particularly on heterophilic graphs. Through empirical and theoretical analysis, we identify an over-abstraction problem: current approaches operate at excessively large hyperbolic radii, compressing multi-scale structural information into uniform high-level abstractions. This abstraction-induced information loss obscures critical local patterns essential for accurate predictions. By analyzing embeddings in hyperbolic space, we demonstrate that optimal graph learning requires faithful preservation of fine-grained structural details, better retained by representations positioned closer to the origin. To address this, we propose H4G, a framework that systematically reduces embedding radii using learnable block-diagonal scaling matrices and Möbius matrix multiplication. This approach restores access to fine-grained patterns while maintaining global receptive ability with minimal computational overhead. Experiments show H4G achieves state-of-the-art zero-shot performance with 12.8% improvement on heterophilic graphs and 8.4% on homophilic graphs, confirming that radius reduction enables faithful multi-scale representation for advancing zero-shot graph learning.
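The paper's implementation is not reproduced on this page. The following is a minimal, illustrative sketch of the core idea described in the abstract: applying a learnable block-diagonal matrix to Poincaré-ball embeddings via Möbius matrix-vector multiplication (Ganea et al., 2018) so that points are pulled toward the origin, i.e. to smaller hyperbolic radii. It assumes PyTorch and a Poincaré ball of curvature -c; the class name `BlockDiagonalRadiusScaler`, the contractive initialization `init_scale`, and the curvature choice are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


def mobius_matvec(M, x, c=1.0, eps=1e-7):
    """Möbius matrix-vector multiplication on the Poincaré ball of curvature -c.

    M: (d, d) Euclidean matrix; x: (..., d) points with sqrt(c)*||x|| < 1.
    """
    sqrt_c = c ** 0.5
    x_norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    Mx = x @ M.transpose(-1, -2)
    Mx_norm = Mx.norm(dim=-1, keepdim=True).clamp_min(eps)
    # (1/sqrt(c)) * tanh((||Mx||/||x||) * artanh(sqrt(c)*||x||)) * Mx/||Mx||
    arg = torch.atanh((sqrt_c * x_norm).clamp(max=1 - 1e-5))
    return torch.tanh(Mx_norm / x_norm * arg) * Mx / (Mx_norm * sqrt_c)


class BlockDiagonalRadiusScaler(nn.Module):
    """Hypothetical sketch of H4G-style radius reduction: a learnable
    block-diagonal scaling matrix applied with Möbius matrix multiplication,
    pulling hyperbolic embeddings closer to the origin."""

    def __init__(self, dim, num_blocks, c=1.0, init_scale=0.5):
        super().__init__()
        assert dim % num_blocks == 0
        self.c = c
        block_size = dim // num_blocks
        # One small learnable matrix per block, initialized as a contraction
        # (init_scale < 1) so radii shrink at the start of training.
        self.blocks = nn.Parameter(
            init_scale * torch.eye(block_size).repeat(num_blocks, 1, 1)
        )

    def forward(self, x):
        # Assemble the full (dim x dim) block-diagonal matrix, then apply it
        # with the Möbius operation so outputs stay inside the ball.
        M = torch.block_diag(*self.blocks)
        return mobius_matvec(M, x, c=self.c)


if __name__ == "__main__":
    torch.manual_seed(0)
    scaler = BlockDiagonalRadiusScaler(dim=64, num_blocks=8)
    # Synthetic embeddings near the ball boundary, i.e. at large hyperbolic radius.
    z = 0.9 * torch.nn.functional.normalize(torch.randn(32, 64), dim=-1)
    z_scaled = scaler(z)
    print(z.norm(dim=-1).mean().item(), z_scaled.norm(dim=-1).mean().item())
```

Under these assumptions, the contractive initialization makes the scaled embeddings land at noticeably smaller norms (hence smaller hyperbolic radii), while the per-block matrices remain learnable so the degree of radius reduction can be tuned end to end.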
Authors (9)
Heng Zhang
Tianyi Zhang
Zijun Liu
Yuling Shi
Yaomin Shen
Haochen You
+3 more
Submitted
October 14, 2025
Key Contributions
This paper addresses the 'over-abstraction problem' in hyperbolic graph learning, where large embedding radii lead to information loss and hinder fine-grained pattern recognition, especially on heterophilic graphs. It proposes H4G, a framework that reduces embedding radii using learnable block-diagonal scaling matrices to achieve 'faithful preservation' of structural details, improving zero-shot learning performance.
Business Value
Enhanced zero-shot learning capabilities on graph data can enable rapid adaptation to new tasks and domains without extensive retraining, which is particularly valuable in dynamic environments.