📄 Abstract
Text-attributed graphs (TAGs) pose unique challenges for representation
learning: models must capture both the semantic richness of
node-associated text and the structural dependencies of the graph. While graph
neural networks (GNNs) excel at modeling topological information, they lack the
capacity to process unstructured text. Conversely, large language models (LLMs)
are proficient in text understanding but are typically unaware of graph
structure. In this work, we propose BiGTex (Bidirectional Graph Text), a novel
architecture that tightly integrates GNNs and LLMs through stacked Graph-Text
Fusion Units. Each unit applies mutual attention between textual and
structural representations, so information flows in both directions:
text influences structure, and structure guides textual interpretation. The
proposed architecture is trained using parameter-efficient fine-tuning (LoRA),
keeping the LLM frozen while adapting to task-specific signals. Extensive
experiments on five benchmark datasets demonstrate that BiGTex achieves
state-of-the-art performance in node classification and generalizes effectively
to link prediction. An ablation study further highlights the importance of soft
prompting and bidirectional attention in the model's success.
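The bidirectional attention inside a Graph-Text Fusion Unit can be illustrated with a minimal single-head cross-attention sketch. This is plain Python with toy values; the actual BiGTex layer dimensions, projections, and parameterization are not given in the abstract, so everything below is an illustrative assumption, not the paper's implementation:

```python
import math

def matmul(A, B):
    # A: m×k, B: k×n -> m×n (naive list-of-lists matrix product)
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention: queries attend to keys/values."""
    d = len(keys[0])
    keys_t = [list(col) for col in zip(*keys)]          # transpose keys
    scores = matmul(queries, keys_t)                    # query-key similarities
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, values)                      # convex mix of values

# Toy states: 2 text-token vectors and 3 graph-node vectors, dimension 4.
text = [[0.1, 0.2, 0.0, 0.3], [0.4, 0.1, 0.2, 0.0]]
graph = [[0.3, 0.0, 0.1, 0.2], [0.0, 0.2, 0.4, 0.1], [0.2, 0.3, 0.0, 0.0]]

# Bidirectional flow: text attends to structure, and structure attends to text.
text_updated = cross_attention(text, graph, graph)
graph_updated = cross_attention(graph, text, text)
print(len(text_updated), len(graph_updated))  # 2 3
```

Stacking several such units lets textual and structural signals refine each other repeatedly, which is the mutual-influence behavior the abstract describes.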
Authors
Azadeh Beiranvand
Seyed Mehdi Vahidipour
Key Contributions
Proposes BiGTex, a novel architecture that tightly integrates GNNs and LLMs using stacked Graph-Text Fusion Units with bidirectional attention. This allows for mutual influence between textual and structural representations, overcoming limitations of using GNNs or LLMs in isolation for Text-Attributed Graphs. It employs parameter-efficient fine-tuning (LoRA) for effective adaptation.
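The LoRA idea referenced above (adapting a frozen LLM through low-rank updates) can be sketched in a few lines. This is a generic rank-1 toy example with made-up matrices, not BiGTex's actual adapter configuration:

```python
def matvec(W, x):
    # W: m×n matrix as list of rows, x: length-n vector -> length-m vector
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Frozen base weight W (3×3 identity here) plus a rank-1 LoRA update:
# effective weight = W + (alpha / r) * B @ A, with only A and B trained.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
A = [[0.5, 0.2, 0.1]]        # r×d_in down-projection (random init in practice)
B = [[0.0], [0.0], [0.0]]    # d_out×r up-projection (zero init: start at W)
alpha, r = 2.0, 1

def lora_forward(x):
    base = matvec(W, x)                 # frozen path
    down = matvec(A, x)                 # project input to rank r
    up = matvec(B, down)                # project back to output dimension
    return [b + (alpha / r) * u for b, u in zip(base, up)]

x = [1.0, 2.0, 3.0]
# With B zero-initialized, the adapted model reproduces the frozen one exactly.
assert lora_forward(x) == matvec(W, x)
```

Because only the small A and B matrices receive gradients, the LLM's weights stay untouched while the model adapts to task-specific signals, which is what makes the fine-tuning parameter-efficient.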
Business Value
Enables more sophisticated analysis of complex data where entities are linked and described by text, leading to better insights in areas like social network analysis, knowledge graph completion, and content recommendation.