
GraphChain: Large Language Models for Large-scale Graph Analysis via Tool Chaining

Abstract

Large Language Models (LLMs) face significant limitations when applied to large-scale graphs, struggling with context constraints and inflexible reasoning. We present GraphChain, a framework that enables LLMs to analyze complex graphs through dynamic sequences of specialized tools, mimicking human exploratory intelligence. Our approach introduces two key innovations: (1) Progressive Graph Distillation, a reinforcement learning mechanism that generates optimized tool sequences balancing task relevance with information compression, and (2) Structure-aware Test-Time Adaptation, which efficiently tailors tool selection strategies to diverse graph topologies using spectral properties and lightweight adapters without costly retraining. Experiments show GraphChain significantly outperforms prior methods, enabling scalable and adaptive LLM-driven graph analysis.
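
The core idea of tool chaining is that the model never holds the raw graph in context; it issues a sequence of tool calls and keeps only their compact outputs. The sketch below illustrates that pattern under stated assumptions: it is not the authors' implementation, the tool registry is illustrative, and `choose_next_tool` is a hypothetical stand-in for the LLM policy.

```python
# Minimal sketch of tool chaining over a graph (assumptions, not the paper's code):
# the controller chains specialized tools and accumulates only compressed summaries,
# so the raw graph never enters the LLM context.
import networkx as nx

TOOLS = {
    "degree_stats": lambda g: {
        "max_degree": max(d for _, d in g.degree()),
        "avg_degree": sum(d for _, d in g.degree()) / g.number_of_nodes(),
    },
    "top_pagerank": lambda g: dict(
        sorted(nx.pagerank(g).items(), key=lambda kv: kv[1], reverse=True)[:5]
    ),
    "num_components": lambda g: {"components": nx.number_connected_components(g)},
}

def choose_next_tool(task, context):
    """Hypothetical placeholder for the learned LLM policy: pick an unused tool."""
    for name in TOOLS:
        if name not in context:
            return name
    return None

def graph_chain(graph, task, max_steps=3):
    """Run a short tool chain; the context holds tool outputs, not the graph."""
    context = {}
    for _ in range(max_steps):
        tool = choose_next_tool(task, context)
        if tool is None:
            break
        context[tool] = TOOLS[tool](graph)
    return context

if __name__ == "__main__":
    g = nx.karate_club_graph()
    print(graph_chain(g, task="find influential nodes"))
```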

Key Contributions

Introduces GraphChain, a framework enabling LLMs to analyze large-scale graphs via tool chaining, overcoming context limitations and reasoning inflexibility. Key innovations include Progressive Graph Distillation (RL for optimized tool sequences) and Structure-aware Test-Time Adaptation (tailoring tools using spectral properties without retraining).
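
To make the Structure-aware Test-Time Adaptation idea concrete, the sketch below shows one plausible reading: summarize a graph's topology with leading eigenvalues of its normalized Laplacian and let a tiny linear adapter re-weight tool-selection scores, leaving the base policy frozen. The adapter class, feature dimension, and score adjustment are illustrative assumptions, not the paper's exact mechanism.

```python
# Hedged sketch of structure-aware adaptation (assumptions, not the paper's method):
# spectral features of the normalized Laplacian condition a lightweight adapter
# that shifts tool-selection scores at test time, without retraining the policy.
import numpy as np
import networkx as nx

def spectral_signature(g, k=8):
    """k smallest normalized-Laplacian eigenvalues as a topology descriptor."""
    lap = nx.normalized_laplacian_matrix(g).toarray()
    eigvals = np.sort(np.linalg.eigvalsh(lap))
    sig = np.zeros(k)
    sig[: min(k, len(eigvals))] = eigvals[:k]
    return sig

class ToolAdapter:
    """Lightweight adapter: one linear map from spectral features to score offsets."""
    def __init__(self, n_tools, k=8, rng=None):
        rng = rng or np.random.default_rng(0)
        # In practice these weights would be tuned cheaply at test time.
        self.W = rng.normal(scale=0.01, size=(n_tools, k))

    def adjust(self, base_scores, signature):
        return base_scores + self.W @ signature

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(100, 0.05, seed=1)
    adapter = ToolAdapter(n_tools=3)
    base = np.array([0.2, 0.5, 0.3])               # scores from a frozen policy
    print(adapter.adjust(base, spectral_signature(g)))  # topology-adjusted scores
```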

Business Value

Enables more powerful and scalable analysis of complex network data (e.g., social networks, knowledge graphs) using LLMs, leading to better insights and decision-making in areas like fraud detection, recommendation systems, and scientific discovery.