📄 Abstract
Graph foundation models have recently attracted significant attention due to
their strong generalizability. Although existing methods resort to language
models to learn unified semantic representations across domains, they disregard
the unique structural characteristics of graphs from different domains. To
address this problem, we boost graph foundation models from a structural
perspective and propose BooG. The model constructs virtual super nodes to unify
the structural characteristics of graph data from different domains.
Specifically, the super nodes fuse the information of anchor nodes and class
labels, where each anchor node captures the information of a node or a graph
instance to be classified. Instead of using the raw graph structure, we connect
super nodes to all nodes within their neighborhood by virtual edges. This new
structure allows for effective information aggregation while unifying
cross-domain structural characteristics. Additionally, we propose a novel
pre-training objective based on contrastive learning, which learns more
expressive representations for graph data and generalizes effectively to
different domains and downstream tasks. Experimental results on various
datasets and tasks demonstrate the superior performance of BooG. We provide our
code and data here: https://anonymous.4open.science/r/BooG-EE42/.
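The core structural idea from the abstract (a virtual super node connected by virtual edges to every node in an anchor node's neighborhood, in place of the raw graph structure) can be sketched as below. This is a minimal illustration assuming an adjacency-list graph; the function name, BFS neighborhood extraction, and hop parameter are assumptions for exposition, not the paper's actual implementation:

```python
# Illustrative sketch of virtual super-node construction (not BooG's
# actual code): a super node is linked by virtual edges to all nodes
# within `hops` of the anchor node, unifying structure across domains.

def add_super_node(adj, anchor, hops=1):
    """Add a virtual super node wired to the `hops`-neighborhood of `anchor`.

    `adj` is a dict mapping node id -> list of neighbor ids; it is
    modified in place. Returns the id of the new super node.
    """
    # Collect the anchor's neighborhood with a simple BFS up to `hops`.
    frontier, neighborhood = {anchor}, {anchor}
    for _ in range(hops):
        frontier = {v for u in frontier for v in adj.get(u, [])} - neighborhood
        neighborhood |= frontier
    # The super node gets a fresh id and virtual edges to every node in
    # the neighborhood, instead of relying on the raw graph structure.
    super_id = max(adj) + 1
    adj[super_id] = sorted(neighborhood)
    for v in neighborhood:
        adj.setdefault(v, []).append(super_id)
    return super_id

# Toy graph: path 0-1-2 with an extra node 3 attached to 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
s = add_super_node(adj, anchor=1, hops=1)
# The super node is connected to {0, 1, 2}, the 1-hop neighborhood of node 1.
```

In the paper the super node additionally fuses anchor-node and class-label information into its features; this sketch only shows the virtual-edge wiring that unifies structural characteristics across domains.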
Authors (4)
Yao Cheng
Yige Zhao
Jianxiang Yu
Xiang Li
Key Contributions
Proposes BooG, a graph foundation model that boosts generalizability by focusing on structural unification. It introduces virtual super nodes and edges to capture cross-domain structural characteristics, enabling effective information aggregation and improving performance on diverse graph datasets.
Business Value
Enables more robust and adaptable graph-based AI solutions across various industries, leading to better predictions and insights from complex network data.