Hallucination Benchmark for Speech Foundation Models

📄 Abstract

Hallucinations in automatic speech recognition (ASR) are fluent, coherent transcriptions produced by neural ASR models that are completely unrelated to the underlying acoustic input (i.e., the speech signal). Like conventional decoding errors, they can compromise the usability of transcriptions for downstream applications, but hallucinations are potentially more harmful because they preserve syntactically and semantically plausible structure. This apparent coherence can mislead subsequent processing stages and introduce serious risks, particularly in critical domains such as healthcare and law. Conventional evaluation centers on error-based metrics such as word error rate (WER) and fails to distinguish phonetic inaccuracies from hallucinations. There is therefore a critical need for new evaluation frameworks that can effectively identify and assess models with a heightened propensity for generating hallucinated content. To this end, we introduce SHALLOW, the first benchmark framework that systematically categorizes and quantifies hallucination phenomena in ASR along four complementary axes: lexical, phonetic, morphological, and semantic. We define targeted metrics within each category to produce interpretable profiles of model behavior. Evaluating various architectures across speech domains, we find that SHALLOW metrics correlate strongly with WER when recognition quality is high (i.e., low WER), but this correlation weakens substantially as WER increases. SHALLOW therefore captures fine-grained error patterns that WER fails to distinguish under degraded and challenging conditions. Our framework supports specific diagnosis of model weaknesses and provides feedback for model improvement beyond what aggregate error rates can offer.
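The abstract's central claim is that WER alone cannot separate phonetic near-misses from hallucinations. A minimal sketch illustrates this (these are toy metrics for illustration, not the SHALLOW metrics themselves; the example sentences are invented): a heavily garbled phonetic transcription and a fluent-but-unrelated hallucination can receive the identical WER, even though a character-level similarity, used here as a crude proxy for acoustic/phonetic proximity, cleanly separates them.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (words or characters)."""
    d = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            # deletion, insertion, substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[len(b)]

def wer(ref, hyp):
    """Word error rate: word-level edit distance over reference length."""
    r = ref.split()
    return edit_distance(r, hyp.split()) / len(r)

def char_similarity(ref, hyp):
    """Crude phonetic-proximity proxy: normalized character similarity."""
    return 1 - edit_distance(ref, hyp) / max(len(ref), len(hyp))

ref = "the patient denies chest pain"
phonetic_err = "uh payshent denize chess pane"    # garbled but acoustically close
hallucination = "meeting scheduled for next week"  # fluent but unrelated

# Every word is wrong in both hypotheses, so WER treats them identically...
assert wer(ref, phonetic_err) == 1.0
assert wer(ref, hallucination) == 1.0
# ...while a finer-grained signal still separates near-miss from hallucination.
assert char_similarity(ref, phonetic_err) > char_similarity(ref, hallucination)
```

The paper's four axes (lexical, phonetic, morphological, semantic) generalize this idea with properly targeted metrics per category; the point here is only why an aggregate error rate saturates and loses this distinction as WER grows.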
Authors (5)
Alkis Koudounas
Moreno La Quatra
Manuel Giollo
Sabato Marco Siniscalchi
Elena Baralis
Submitted
October 18, 2025
arXiv Category
cs.CL

Key Contributions

Highlights the critical problem of hallucinations in ASR systems, which produce fluent but acoustically unrelated transcriptions, and argues that conventional error-based metrics such as WER cannot detect them. The paper proposes SHALLOW, a benchmark that categorizes and quantifies hallucination along lexical, phonetic, morphological, and semantic axes, enabling targeted identification of hallucination-prone models, which is crucial for high-stakes domains such as healthcare and law.

Business Value

Improves the reliability and safety of speech recognition systems, particularly in sensitive sectors like healthcare and law, reducing the risk of misinterpretations and errors that could have severe consequences.