
Back to Bytes: Revisiting Tokenization Through UTF-8

Abstract

We present UTF8Tokenizer, a minimalist byte-level tokenizer that maps text exactly to IDs corresponding to the bytes underlying the text's UTF-8 encoding (e.g., byte 0x09 is token ID 9). Unlike prior byte-level approaches (Xue et al., 2021; Pagnoni et al., 2025), our implementation never introduces out-of-range IDs (i.e., there is no token ID 256) or auxiliary tokens: all special behavior (e.g., padding, boundaries, conversation structure, attention segments, tool calling, "thinking" spans) is encoded using C0 control bytes, just as ASCII was originally designed to embed control information alongside printable text. These design principles yield practical benefits: (1) faster tokenization (14x) and significantly lower host-device transfer (8x less than int64); (2) simple, shareable 256×d embedding tables that can be aligned across models; and (3) a training-time enhancement via bit-biased embeddings, which exposes per-byte bit structure and can be added to the embedding table post-training, removing inference costs. Our HuggingFace-compatible implementation improves language modeling convergence.
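The core idea in the abstract can be sketched in a few lines: token IDs are just the raw UTF-8 bytes (so every ID is in 0–255), and special behavior rides on C0 control bytes rather than extra vocabulary entries. The function names and the choice of STX/ETX as boundary markers below are illustrative assumptions, not the paper's exact API.

```python
# Sketch of byte-level tokenization in the spirit of UTF8Tokenizer:
# IDs are exactly the bytes of the text's UTF-8 encoding, and special
# behavior (boundaries here) is carried by C0 control bytes (0x00-0x1F).

STX = 0x02  # C0 "start of text" -- hypothetical boundary marker
ETX = 0x03  # C0 "end of text"   -- hypothetical boundary marker

def encode(text: str, add_boundaries: bool = False) -> list[int]:
    """Map text to token IDs: the raw UTF-8 bytes, all in range 0-255."""
    ids = list(text.encode("utf-8"))
    if add_boundaries:
        ids = [STX] + ids + [ETX]
    return ids

def decode(ids: list[int]) -> str:
    """Invert encode(), dropping C0 bytes used as control markers
    (tab, newline, and carriage return are kept as real text)."""
    payload = bytes(i for i in ids if i >= 0x20 or i in (0x09, 0x0A, 0x0D))
    return payload.decode("utf-8", errors="replace")
```

Note that `encode("\t")` yields `[9]`, matching the abstract's example that byte 0x09 is token ID 9, and multi-byte characters simply expand to their UTF-8 byte sequence.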
Authors (4)
Amit Moryossef
Clara Meister
Pavel Stepachev
Desmond Elliott
Submitted
October 19, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

Introduces UTF8Tokenizer, a minimalist byte-level tokenizer that uses raw UTF-8 bytes as token IDs and C0 control bytes for special behavior, yielding 14x faster tokenization, 8x lower host-device data transfer, simple shareable 256×d embedding tables, and a training-time enhancement via bit-biased embeddings. The approach avoids out-of-range IDs and auxiliary tokens entirely.
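The bit-biased embedding idea described above can be illustrated with a small sketch: during training, each byte's embedding is augmented with a learned linear projection of its 8-bit pattern; because both terms depend only on the byte ID, the bias can be folded into the 256×d table afterward, so inference pays nothing extra. The shapes and the linear form of the bias are assumptions for illustration.

```python
import numpy as np

d = 16  # embedding dimension (illustrative)
rng = np.random.default_rng(0)

E = rng.normal(size=(256, d))     # learned 256 x d byte-embedding table
W_bits = rng.normal(size=(8, d))  # learned projection of the 8 bit planes

# bits[i] is the 8-bit binary expansion of byte ID i, shape (256, 8).
bits = ((np.arange(256)[:, None] >> np.arange(8)) & 1).astype(np.float64)

def embed_train(ids):
    """Training-time lookup: learned embedding plus per-byte bit bias."""
    return E[ids] + bits[ids] @ W_bits

# Post-training, fold the bias into the table: inference is then a plain
# 256 x d lookup with no additional cost.
E_folded = E + bits @ W_bits

def embed_infer(ids):
    return E_folded[ids]
```

The fold-in works because the bias is a fixed function of the byte ID, so precomputing it per row leaves the lookup result unchanged.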

Business Value

Improves efficiency and reduces computational overhead in LLM pipelines, enabling faster training and inference and potentially lowering hardware requirements.