
Evaluating Arabic Large Language Models: A Survey of Benchmarks, Methods, and Gaps

Abstract

This survey provides the first systematic review of Arabic LLM benchmarks, analyzing 40+ evaluation benchmarks across NLP tasks, knowledge domains, cultural understanding, and specialized capabilities. We propose a taxonomy organizing benchmarks into four categories: Knowledge, NLP Tasks, Culture and Dialects, and Target-Specific evaluations. Our analysis reveals significant progress in benchmark diversity while identifying critical gaps: limited temporal evaluation, insufficient multi-turn dialogue assessment, and cultural misalignment in translated datasets. We examine three primary approaches to benchmark creation: native collection, translation, and synthetic generation, discussing their trade-offs in authenticity, scale, and cost. This work serves as a comprehensive reference for Arabic NLP researchers, providing insights into benchmark methodologies, reproducibility standards, and evaluation metrics, and offering recommendations for future development.
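To make the proposed organization concrete, here is a minimal Python sketch encoding the survey's four benchmark categories and three creation approaches. The class names (`Category`, `CreationMethod`, `Benchmark`) and record fields are illustrative assumptions, not artifacts from the paper; the trade-off comments reflect the general tendencies the abstract alludes to, with only translation's cultural-misalignment risk stated explicitly in the source.

```python
from dataclasses import dataclass
from enum import Enum

# The four benchmark categories from the survey's proposed taxonomy.
class Category(Enum):
    KNOWLEDGE = "Knowledge"
    NLP_TASKS = "NLP Tasks"
    CULTURE_AND_DIALECTS = "Culture and Dialects"
    TARGET_SPECIFIC = "Target-Specific"

# The three benchmark-creation approaches the survey examines.
class CreationMethod(Enum):
    NATIVE_COLLECTION = "native collection"        # typically most authentic, but costly to scale
    TRANSLATION = "translation"                    # cheap and scalable; risks cultural misalignment
    SYNTHETIC_GENERATION = "synthetic generation"  # scalable; authenticity harder to guarantee

# Hypothetical record type for cataloguing a benchmark under this taxonomy.
@dataclass
class Benchmark:
    name: str
    category: Category
    creation_method: CreationMethod
    multi_turn: bool = False  # multi-turn dialogue coverage is one of the gaps the survey flags

# Example entry (the benchmark name here is a placeholder, not a real dataset):
example = Benchmark("ExampleArabicQA", Category.KNOWLEDGE, CreationMethod.NATIVE_COLLECTION)
```

A structure like this would let a benchmark catalogue be filtered by category or creation method, which is essentially how the survey slices its analysis of the 40+ benchmarks.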

Key Contributions

This survey provides the first systematic review of 40+ Arabic LLM evaluation benchmarks, proposing a four-category taxonomy and analyzing benchmark creation methods. It identifies critical gaps, including limited temporal evaluation, insufficient multi-turn dialogue assessment, and cultural misalignment in translated datasets, and offers recommendations for future development, serving as a reference for Arabic NLP researchers.

Business Value

Facilitates the development and evaluation of more accurate and culturally relevant Arabic NLP systems, opening up new markets and applications for AI in Arabic-speaking regions.