Slot Filling as a Reasoning Task for SpeechLLMs

Abstract

We propose integrating reasoning into speech large language models (speechLLMs) for the end-to-end slot-filling task. Inspired by recent developments in reasoning LLMs, we use a chain-of-thought framework to decompose slot filling into multiple reasoning steps, create a reasoning dataset, and apply supervised fine-tuning to a speechLLM. We distinguish between regular and reasoning speechLLMs and experiment with different types and sizes of LLMs as their text foundation models. We demonstrate performance improvements from introducing intermediate reasoning steps. However, we show that a reasoning textual LLM developed mainly for the math, logic, and coding domains can be inferior as a foundation model for a reasoning speechLLM. We further show that hybrid speechLLMs, built on a hybrid text foundation LLM and fine-tuned to preserve both direct and reasoning modes of operation, outperform those fine-tuned with only one mode of operation.
Authors (3)
Kadri Hacioglu
Manjunath K E
Andreas Stolcke
Submitted
October 22, 2025
arXiv Category
cs.CL

Key Contributions

Proposes integrating chain-of-thought reasoning into speechLLMs for slot filling, demonstrating performance gains from decomposing the task into intermediate reasoning steps. Also investigates the impact of different text foundation models and introduces hybrid speechLLMs, fine-tuned to preserve both direct and reasoning modes of operation, for better performance.
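To make the decomposition idea concrete, here is a minimal sketch of how a chain-of-thought training target for slot filling might be assembled. The step names, schema, and output format below are illustrative assumptions for exposition, not the authors' actual dataset format.

```python
# Illustrative sketch: composing a chain-of-thought (reasoning) training
# target for slot filling. Hypothetical format -- not the paper's schema.

def build_reasoning_target(transcript: str, slots: dict[str, str]) -> str:
    """Decompose slot filling into explicit intermediate reasoning steps,
    ending with the final slot-value answer."""
    steps = [
        f'Step 1 (understand): The utterance is: "{transcript}"',
        "Step 2 (identify): Candidate slot types are: " + ", ".join(slots),
    ]
    # One extraction step per slot, continuing the step numbering.
    for i, (name, value) in enumerate(slots.items(), start=3):
        steps.append(f'Step {i} (extract): {name} = "{value}"')
    # Final answer line, as a direct-mode model would emit it.
    answer = "; ".join(f"{k}={v}" for k, v in slots.items())
    steps.append(f"Answer: {answer}")
    return "\n".join(steps)

example = build_reasoning_target(
    "book a flight from Boston to Denver on Friday",
    {"from_city": "Boston", "to_city": "Denver", "date": "Friday"},
)
print(example)
```

In supervised fine-tuning, such reasoning-augmented targets would replace (or, for the hybrid mode described above, complement) the direct `Answer: ...` line alone.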

Business Value

Enables more sophisticated and accurate voice assistants and conversational agents capable of complex information extraction and task completion, improving user experience in applications like customer service and personal assistants.