Abstract
The number of tokens it takes to encode parallel text in different languages
is known to vary. These disparities are called token premiums. High token
premiums lead to lower throughput during training and increase costs at
inference. In this paper, we show that even after controlling for dataset size,
vocabulary size, and data content, monolingual tokenizers exhibit a wide range
of token premiums across languages. To understand the cross-linguistic
differences that cause these token premiums, we train a suite of approximately
7,000 comparable monolingual tokenizers for 97 languages, manipulating
tokenization algorithm, vocabulary size, and dataset size. We measure token
premiums and test for a relationship between factors such as data similarity
(between tokenizer training and evaluation), vocabulary size, and
pre-tokenization. We also investigate the role of language-specific features
such as writing system and word length. We find that similarity between
training and test data does not impact token premiums, but vocabulary size and
pre-tokenization do. While simply increasing vocabulary size does not lead to
reduced token premium effects, we can determine an "optimal" vocabulary size
for each language to achieve significantly reduced token premium effects. We
also train superword tokenizers which allow merges over whitespaces, and we
find that they both reduce token premium effects and improve compression
overall. Thus, intervening on the vocabulary size or the pre-tokenizer
significantly reduces crosslingual token premium effects.
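As a rough illustration of how a token premium might be measured on parallel text, here is a minimal sketch. The tokenizer files, the choice of a reference language, and the sentence lists are assumptions for illustration, not the paper's exact setup or released code.

```python
# Minimal sketch: estimate a token premium on parallel text.
# Assumes per-language tokenizers saved with the Hugging Face `tokenizers`
# library and aligned sentence lists; the paper's exact definition may differ.
from tokenizers import Tokenizer

def token_premium(tokenizer_path: str, reference_path: str,
                  sentences: list[str], reference_sentences: list[str]) -> float:
    """Tokens needed for a language divided by tokens needed for the
    reference language on the same parallel sentences."""
    tok = Tokenizer.from_file(tokenizer_path)
    ref = Tokenizer.from_file(reference_path)
    lang_tokens = sum(len(tok.encode(s).ids) for s in sentences)
    ref_tokens = sum(len(ref.encode(s).ids) for s in reference_sentences)
    return lang_tokens / ref_tokens

# Example usage (paths and data are placeholders):
# premium = token_premium("hi_bpe.json", "en_bpe.json", hi_sents, en_sents)
# A premium above 1.0 means the language needs more tokens than the reference.
```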
Authors (4)
Catherine Arnett
Tyler A. Chang
Stella Biderman
Benjamin K. Bergen
Submitted
October 24, 2025
Key Contributions
This paper explains and mitigates crosslingual tokenizer inequities by analyzing token premiums across 97 languages. It investigates factors such as dataset size, vocabulary size, data content, writing system, and word length, and finds that monolingual tokenizers exhibit significant disparities that affect model efficiency and cost. Intervening on vocabulary size or on the pre-tokenizer (superword tokenization) significantly reduces these effects.
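A minimal sketch of the kind of controlled comparison described above, using the Hugging Face `tokenizers` library; the corpus iterator, the vocabulary-size sweep, and the choice to drop the whitespace pre-tokenizer as a stand-in for superword tokenization are assumptions, not the authors' released code.

```python
# Sketch: train comparable monolingual BPE tokenizers while varying vocabulary
# size and pre-tokenization. Omitting the whitespace pre-tokenizer lets BPE
# merge across spaces, approximating a "superword" tokenizer.
from tokenizers import Tokenizer, models, trainers, pre_tokenizers

def train_bpe(lines, vocab_size: int, superword: bool) -> Tokenizer:
    tok = Tokenizer(models.BPE(unk_token="[UNK]"))
    if not superword:
        # Standard setting: merges cannot cross whitespace boundaries.
        tok.pre_tokenizer = pre_tokenizers.Whitespace()
    trainer = trainers.BpeTrainer(vocab_size=vocab_size,
                                  special_tokens=["[UNK]"])
    tok.train_from_iterator(lines, trainer=trainer)
    return tok

# Example sweep (corpus_lines and held_out are placeholder iterables):
# for v in (8_000, 16_000, 32_000, 64_000):
#     for superword in (False, True):
#         tok = train_bpe(corpus_lines, v, superword)
#         n_tokens = sum(len(tok.encode(s).ids) for s in held_out)
```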
Business Value
Helps optimize NLP models for global markets by reducing computational costs and improving processing speed for diverse languages, making AI more accessible and affordable.