Improving LLM Reasoning via Dependency-Aware Query Decomposition and Logic-Parallel Content Expansion

Abstract

The integration of Large Language Models (LLMs) into real-time Web applications, such as AI-powered search and conversational agents, presents a fundamental Web infrastructure challenge: reconciling the demand for high-quality, complex reasoning with the stringent low-latency and high-throughput requirements of interactive services. Current LLM reasoning, hindered by computationally inefficient sequential generation and rigid reasoning strategies, creates a critical bottleneck for Web services. Existing approaches typically optimize LLM reasoning for either efficiency or quality but struggle to achieve both, and thus fail to meet the dual requirements of modern Web platforms. To overcome these limitations, we propose Orion, a novel and efficient reasoning framework that enables dependency-aware query decomposition and logic-parallel content expansion. Concretely, Orion decomposes a single query's reasoning process into two synergistic phases: (1) key point generation, which distills logically structured key points through retrieval-augmented few-shot prompting, and (2) content parallel expansion, which concurrently elaborates on these points based on a dependency graph to ensure logical consistency. Furthermore, Orion introduces a pipeline scheduling mechanism that exploits the complementary computational characteristics of the two phases (generation stresses GPU compute, while expansion stresses GPU memory) across multiple queries, enabling cross-query parallelism and dramatically improving reasoning performance (i.e., both efficiency and quality). Experiments on diverse benchmarks show that Orion not only delivers up to 4.33x higher token generation speed and 3.42x lower answer latency than the baselines but also improves reasoning quality by up to 18.75% by explicitly modeling inter-point dependencies.
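To make the second phase concrete, here is a minimal Python sketch of the content parallel expansion idea: points whose prerequisites have already been expanded run concurrently, so logical order is preserved while independent points are elaborated in parallel. The DEPS graph, point names, and expand_point stub are hypothetical stand-ins, not Orion's actual interface.

```python
import asyncio

# Hypothetical dependency graph over key points: point -> prerequisite points.
# Illustrative only; not the paper's data structures.
DEPS: dict[str, set[str]] = {
    "P1": set(),          # independent key point
    "P2": set(),          # independent key point
    "P3": {"P1"},         # elaboration builds on P1
    "P4": {"P1", "P2"},   # elaboration builds on P1 and P2
}

async def expand_point(point: str, context: dict[str, str]) -> str:
    # Stand-in for an LLM call that elaborates one key point,
    # conditioned on the expansions of its prerequisites.
    await asyncio.sleep(0.1)  # simulate decode latency
    return f"expansion of {point} given {sorted(context)}"

async def expand_all(deps: dict[str, set[str]]) -> dict[str, str]:
    done: dict[str, str] = {}
    pending = dict(deps)
    while pending:
        # Every point whose prerequisites are already expanded is ready;
        # ready points are mutually independent and can run concurrently.
        ready = [p for p, d in pending.items() if d.issubset(done)]
        if not ready:
            raise ValueError("dependency cycle detected")
        texts = await asyncio.gather(
            *(expand_point(p, {d: done[d] for d in deps[p]}) for p in ready)
        )
        for p, text in zip(ready, texts):
            done[p] = text
            del pending[p]
    return done

if __name__ == "__main__":
    print(asyncio.run(expand_all(DEPS)))
```

With this graph, P1 and P2 expand concurrently in the first wave, then P3 and P4 (each conditioned on its prerequisites' text) in the second, rather than all four points being generated sequentially.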
Authors (4): Xianjun Gao, Jianchun Liu, Hongli Xu, Liusheng Huang
Submitted: October 28, 2025
arXiv Category: cs.AI

Key Contributions

This paper proposes Orion, a novel reasoning framework designed to overcome the latency and throughput bottlenecks of LLMs in real-time Web applications. Orion combines dependency-aware query decomposition with logic-parallel content expansion, enabling high-quality, complex reasoning while meeting stringent performance requirements, and its pipeline scheduling overlaps the two phases across queries. This approach addresses the trade-off between reasoning efficiency and quality that limits current LLM applications on the Web.
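As a rough illustration of the cross-query pipelining described in the abstract, the sketch below overlaps a compute-bound key point generation stage with a memory-bound expansion stage via an asyncio producer/consumer queue. The stage stubs, timings, and queue structure are assumptions for illustration, not the paper's implementation.

```python
import asyncio

async def generate_key_points(query: str) -> list[str]:
    # Stand-in for phase 1 (compute-bound key point generation).
    await asyncio.sleep(0.2)
    return [f"{query}/kp{i}" for i in range(3)]

async def expand_points(points: list[str]) -> str:
    # Stand-in for phase 2 (memory-bound content expansion).
    await asyncio.sleep(0.3)
    return " | ".join(f"expanded({p})" for p in points)

async def pipeline(queries: list[str]) -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()

    async def producer() -> None:
        # While query i is being expanded by the consumer, the producer is
        # already generating key points for query i+1, overlapping the
        # compute-bound and memory-bound phases across queries.
        for query in queries:
            await queue.put(await generate_key_points(query))
        await queue.put(None)  # sentinel: no more queries

    async def consumer() -> list[str]:
        answers: list[str] = []
        while (points := await queue.get()) is not None:
            answers.append(await expand_points(points))
        return answers

    _, answers = await asyncio.gather(producer(), consumer())
    return answers

if __name__ == "__main__":
    print(asyncio.run(pipeline(["q1", "q2", "q3"])))
```

With the stub latencies above, three queries finish in about 0.2 + 3 x 0.3 seconds instead of the 3 x (0.2 + 0.3) seconds a strictly sequential schedule would take, which is the kind of utilization gain cross-query pipeline scheduling targets.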

Business Value

Enables the development of more responsive and sophisticated AI-powered web services, improving user experience and enabling new applications in search, chatbots, and real-time analysis.