
Can LLMs subtract numbers?

Abstract

We present a systematic study of subtraction in large language models (LLMs). While prior benchmarks emphasize addition and multiplication, subtraction has received comparatively little attention despite being structurally distinct as a non-commutative operation. We evaluate eight pretrained LLMs spanning four families on addition and subtraction problems. Our experiments reveal that subtraction accuracy lags behind addition by a wide margin. We find that errors for $a-b$ are concentrated in cases where $a < b$, with models frequently omitting the negative sign from the result.
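The paper does not publish its evaluation harness here, but the setup it describes (sampling operand pairs, querying a model, and splitting accuracy by whether the true result is negative) is straightforward to sketch. The snippet below is a minimal illustration under assumed details: `query_model` is a placeholder for whatever LLM client you use, and the operand range, answer parsing, and prompt wording are illustrative choices, not the paper's protocol.

```python
import random

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (swap in your own API client)."""
    raise NotImplementedError

def evaluate_subtraction(n_trials: int = 1000, max_value: int = 10_000, seed: int = 0):
    """Measure subtraction accuracy, bucketed by whether the true result is
    negative, and count sign-omission errors (model returns |a-b| instead of a-b)."""
    rng = random.Random(seed)
    stats = {"a>=b": [0, 0], "a<b": [0, 0]}  # bucket -> [correct, total]
    sign_omissions = 0
    for _ in range(n_trials):
        a, b = rng.randint(0, max_value), rng.randint(0, max_value)
        bucket = "a>=b" if a >= b else "a<b"
        reply = query_model(f"Compute {a} - {b}. Answer with a single integer.")
        try:
            pred = int(reply.strip().split()[-1].rstrip("."))
        except (ValueError, IndexError):
            pred = None  # unparseable answer counts as incorrect
        stats[bucket][1] += 1
        if pred == a - b:
            stats[bucket][0] += 1
        elif a < b and pred == abs(a - b):
            sign_omissions += 1  # correct magnitude, dropped negative sign
    return stats, sign_omissions
```

Splitting the accuracy counts into the two buckets makes the paper's central observation directly measurable: if subtraction errors are concentrated where $a < b$, the `a<b` bucket will show markedly lower accuracy, and `sign_omissions` indicates how much of that gap is explained by dropped minus signs.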

Key Contributions

This paper provides a systematic study of subtraction capabilities in LLMs, revealing a significant performance gap compared to addition. It identifies common error patterns, such as omitting negative signs, and investigates the effectiveness of few-shot learning and instruction tuning in improving subtraction accuracy.
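One of the interventions mentioned above, few-shot prompting, amounts to prepending worked subtraction examples before the query. The sketch below shows one plausible way to build such a prompt; the exemplar pairs, the Q/A format, and the function name are illustrative assumptions rather than the prompt template used in the paper.

```python
def few_shot_subtraction_prompt(a: int, b: int,
                                shots=((852, 914), (47, 203), (5000, 1234))) -> str:
    """Build a k-shot prompt whose exemplars include negative results,
    so the expected answer format (with the minus sign) is demonstrated."""
    lines = [f"Q: What is {x} - {y}?\nA: {x - y}" for x, y in shots]
    lines.append(f"Q: What is {a} - {b}?\nA:")
    return "\n\n".join(lines)

# Example: few_shot_subtraction_prompt(123, 456) yields three solved
# exemplars (two with negative answers) followed by the unsolved query.
```

Including exemplars whose answers are negative is the natural design choice here, since the error analysis attributes much of the subtraction gap to omitted negative signs.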

Business Value

Highlights critical limitations of current LLMs on tasks requiring precise arithmetic, guiding development toward more robust and reliable models for calculation-heavy applications.