Abstract
Recent advances in spoken language processing have led to substantial
progress in phonetic tasks such as automatic speech recognition (ASR), phone
recognition (PR), grapheme-to-phoneme conversion (G2P), and phoneme-to-grapheme
conversion (P2G). Despite their conceptual similarity, these tasks have largely
been studied in isolation, each relying on task-specific architectures and
datasets. In this paper, we introduce POWSM (Phonetic Open Whisper-style Speech
Model), the first unified framework capable of jointly performing multiple
phone-related tasks. POWSM enables seamless conversion between audio, text
(graphemes), and phones, opening up new possibilities for universal and
low-resource speech processing. Our model outperforms or matches specialized PR
models of similar size (Wav2Vec2Phoneme and ZIPA) while jointly supporting G2P,
P2G, and ASR. Our training data, code, and models are released to foster open
science.
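As a rough illustration of the "Whisper-style" unified setup the abstract describes, a single encoder-decoder model could route between ASR, PR, G2P, and P2G with task tokens in the decoder prompt. The sketch below is an assumption for illustration only: the token names and the build_prompt() helper are hypothetical and do not reflect the released POWSM code or API.

```python
# Hypothetical sketch of task-token routing in a Whisper-style unified
# phonetic model. Token names and build_prompt() are illustrative
# assumptions, not the actual POWSM interface.
from typing import List, Optional

# One assumed task token per supported task.
TASK_TOKENS = {
    "asr": "<|asr|>",   # audio -> graphemes
    "pr":  "<|pr|>",    # audio -> phones
    "g2p": "<|g2p|>",   # graphemes -> phones
    "p2g": "<|p2g|>",   # phones -> graphemes
}

def build_prompt(task: str, text_input: Optional[str] = None) -> List[str]:
    """Assemble a decoder prompt for one task.

    Audio-conditioned tasks (asr, pr) take no text input; text-to-text
    tasks (g2p, p2g) prepend the source sequence before the task token.
    """
    if task not in TASK_TOKENS:
        raise ValueError(f"unknown task: {task}")
    prompt = ["<|startoftranscript|>"]
    if text_input is not None:
        prompt.append(text_input)
    prompt.append(TASK_TOKENS[task])
    return prompt

if __name__ == "__main__":
    # Audio -> phones (phone recognition): decoder primed with the task token only.
    print(build_prompt("pr"))
    # Graphemes -> phones (G2P): the source text is part of the prompt.
    print(build_prompt("g2p", text_input="hello world"))
```

Under this kind of scheme, a single set of weights serves all four tasks, which is what lets the model share phonetic knowledge across them.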
Authors (8)
Chin-Jou Li
Kalvin Chang
Shikhar Bharadwaj
Eunjung Yeo
Kwanghee Choi
Jian Zhu
+2 more
Submitted
October 28, 2025
Key Contributions
Introduces POWSM, the first unified framework for jointly performing multiple phone-related speech tasks (ASR, PR, G2P, P2G). The model enables seamless conversion between audio, text, and phones, supporting universal and low-resource speech processing.
Business Value
Enables more efficient and versatile speech technology development, particularly for under-resourced languages, by reducing the need for separate models for different phonetic tasks.