Abstract
We present a compact, single-model approach to multilingual inflection, the
task of generating inflected word forms from base lemmas to express grammatical
categories. Our model, trained jointly on data from 73 languages, is
lightweight, robust to unseen words, and outperforms monolingual baselines in
most languages. This demonstrates the effectiveness of multilingual modeling
for inflection and highlights its practical benefits: simplifying deployment by
eliminating the need to manage and retrain dozens of separate monolingual
models. In addition to the standard SIGMORPHON shared task benchmarks, we
evaluate our monolingual and multilingual models on 73 Universal Dependencies
(UD) treebanks, extracting lemma-tag-form triples and their frequency counts.
To ensure realistic data splits, we introduce a novel frequency-weighted,
lemma-disjoint train-dev-test resampling procedure. Our work addresses the lack
of an open-source, general-purpose, multilingual morphological inflection
system capable of handling unseen words across a wide range of languages,
including Czech. All code is publicly released at:
https://github.com/tomsouri/multilingual-inflection.
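The frequency-weighted, lemma-disjoint resampling described above could be sketched as follows. This is a minimal illustration, not the authors' released code: the greedy target-balancing strategy, the function name, the 80/10/10 ratios, and the example triples are all assumptions made for the sketch. The key properties it demonstrates are that every lemma's triples land wholly in one split (lemma-disjointness) and that splits are balanced by frequency mass rather than by raw lemma count.

```python
import random
from collections import defaultdict

def lemma_disjoint_split(triples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split (lemma, tag, form, count) triples into train/dev/test
    so that no lemma occurs in more than one split, while each split's
    total frequency mass approximates the requested ratios."""
    lemma_freq = defaultdict(int)   # total corpus frequency per lemma
    by_lemma = defaultdict(list)    # all triples sharing a lemma
    for lemma, tag, form, count in triples:
        lemma_freq[lemma] += count
        by_lemma[lemma].append((lemma, tag, form, count))

    lemmas = list(lemma_freq)
    random.Random(seed).shuffle(lemmas)

    total = sum(lemma_freq.values())
    targets = [r * total for r in ratios]
    splits = [[], [], []]
    masses = [0, 0, 0]
    for lemma in lemmas:
        # Greedily assign the lemma to the split furthest below its target mass.
        i = max(range(3), key=lambda k: targets[k] - masses[k])
        splits[i].extend(by_lemma[lemma])
        masses[i] += lemma_freq[lemma]
    return splits  # train, dev, test

# Toy Czech-flavoured example (counts are invented):
triples = [
    ("jít", "VERB;Pres;1;Sg", "jdu", 120),
    ("jít", "VERB;Past;Masc;Sg", "šel", 95),
    ("hrad", "NOUN;Loc;Sg", "hradě", 40),
    ("žena", "NOUN;Dat;Sg", "ženě", 55),
]
train, dev, test = lemma_disjoint_split(triples)
```

Because assignment is per lemma, a model evaluated on the test split never sees any form of a test lemma during training, which is what makes the evaluation of unseen-word robustness realistic.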
Authors (2)
Tomáš Sourada
Jana Straková
Submitted
October 27, 2025
Text, Speech, and Dialogue. TSD 2025. Lecture Notes in Computer
Science, vol 16030. Springer, Cham, pp 39-50
Key Contributions
Introduces a single, compact model for multilingual inflection across 73 languages that outperforms monolingual baselines and is robust to unseen words. This simplifies deployment by eliminating the need for numerous separate models.
Business Value
Significantly reduces the operational overhead and complexity of deploying NLP solutions for morphologically rich languages, enabling broader language support with less effort.