This paper provides a systematic study of subtraction capabilities in LLMs, revealing a significant performance gap compared to addition. It identifies common error patterns, such as omitting negative signs, and investigates the effectiveness of few-shot learning and instruction tuning in improving subtraction accuracy.
The work highlights critical limitations in current LLMs for tasks requiring precise arithmetic, guiding development toward more robust and reliable models for applications involving calculations.
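The sign-omission error pattern described above can be detected mechanically when scoring model outputs. A minimal sketch in Python (the function name and error taxonomy are illustrative assumptions, not taken from the paper):

```python
def classify_subtraction_answer(a: int, b: int, model_answer: int) -> str:
    """Classify a model's answer to the problem a - b.

    Distinguishes the sign-omission pattern (correct magnitude,
    missing negative sign) from other errors. Hypothetical taxonomy
    for illustration only.
    """
    correct = a - b
    if model_answer == correct:
        return "correct"
    # The pattern noted in the paper: the model effectively computes
    # |a - b| instead of a - b when the true result is negative.
    if correct < 0 and model_answer == -correct:
        return "omitted_negative_sign"
    return "other_error"

# Example: 3 - 7 = -4; answering 4 exhibits the sign-omission pattern.
print(classify_subtraction_answer(3, 7, 4))   # prints "omitted_negative_sign"
print(classify_subtraction_answer(7, 3, 4))   # prints "correct"
```

Running a classifier like this over a benchmark's outputs makes it straightforward to report how much of the addition-subtraction gap is attributable to dropped negative signs versus other mistakes.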