
LongCodeBench: Evaluating Coding LLMs at 1M Context Windows

📄 Abstract

Context lengths for models have grown rapidly, from thousands to millions of tokens in just a few years. The extreme context sizes of modern long-context models have made it difficult to construct realistic long-context benchmarks -- not only because of the cost of collecting million-token tasks, but also because of the difficulty of identifying realistic scenarios that require such large contexts. We identify code comprehension and repair as a natural testbed and challenge task for long-context models and introduce LongCodeBench (LCB), a benchmark to test LLM coding abilities in long-context scenarios. Our benchmark tests both the comprehension and repair capabilities of LCLMs in realistic and important settings by drawing from real-world GitHub issues and constructing QA (LongCodeQA) and bug fixing (LongSWE-Bench) tasks. We carefully stratify the complexity of our benchmark, enabling us to evaluate models across different scales -- ranging from Qwen2.5 14B Instruct to Google's flagship Gemini model. We find that long context remains a weakness for all models, with performance drops such as from 29% to 3% for Claude 3.5 Sonnet, or from 70.2% to 40% for Qwen2.5. The LCB dataset is publicly available at https://huggingface.co/datasets/Steefano/LCB, and the codebase to reproduce the work in this paper is at https://github.com/Zteefano/long-code-bench.
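
As a rough illustration, the sketch below shows one way to inspect and load the published dataset from the Hugging Face Hub using the `datasets` library. The dataset path comes from the abstract; the configuration names and record fields are assumptions, not documented facts about LCB.

```python
# Sketch: inspecting and loading the LCB dataset from the Hugging Face Hub.
# The dataset path ("Steefano/LCB") is taken from the abstract; the available
# configurations (e.g., LongCodeQA / LongSWE-Bench subsets) are assumptions.
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("Steefano/LCB")  # list whatever subsets the dataset exposes
print("Available configurations:", configs)

ds = load_dataset("Steefano/LCB", configs[0])       # load the first advertised configuration
print(ds)                                           # inspect splits and features before building prompts
```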
Authors (8)
Stefano Rando
Luca Romani
Alessio Sampieri
Luca Franco
John Yang
Yuta Kyuragi
+2 more
Submitted: May 12, 2025
arXiv Category: cs.CL

Key Contributions

Introduces LongCodeBench (LCB), a benchmark designed to evaluate LLM coding abilities in long-context scenarios (up to 1M tokens). LCB includes realistic code comprehension (LongCodeQA) and bug fixing (LongSWE-Bench) tasks derived from real GitHub issues, and its stratified complexity enables evaluation of models ranging from 14B-parameter open models to flagship-scale models such as Gemini. A minimal sketch of the QA evaluation setting follows below.
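
To make the QA setting concrete, here is a minimal, hypothetical sketch of a LongCodeQA-style evaluation loop: it concatenates a long repository context with a question, queries an arbitrary model client, and scores exact-match accuracy. The field names (`context`, `question`, `answer`), the prompt template, and the `query_model` callable are illustrative assumptions, not the paper's actual harness.

```python
# Minimal sketch of a LongCodeQA-style evaluation loop.
# Field names and the exact-match scoring rule are assumptions for illustration;
# `query_model` stands in for any LLM client that accepts a prompt string.
from typing import Callable, Iterable


def evaluate_long_code_qa(examples: Iterable[dict],
                          query_model: Callable[[str], str]) -> float:
    """Return exact-match accuracy over QA examples with long code contexts."""
    correct = 0
    total = 0
    for ex in examples:
        # Concatenate the (potentially ~1M-token) repository context with the question.
        prompt = (
            "You are answering a question about the repository below.\n\n"
            f"{ex['context']}\n\n"
            f"Question: {ex['question']}\nAnswer:"
        )
        prediction = query_model(prompt)
        correct += int(prediction.strip().lower() == ex["answer"].strip().lower())
        total += 1
    return correct / max(total, 1)
```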

Business Value

Helps developers and organizations choose and optimize LLMs for software development tasks, leading to improved code quality, faster debugging, and more efficient development cycles.