
The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs

Abstract

With the rapid advancement of artificial intelligence, Large Language Models (LLMs) have shown remarkable capabilities in Natural Language Processing (NLP), including content generation, human-computer interaction, machine translation, and code generation. However, their widespread deployment has also raised significant safety concerns. In particular, LLM-generated content can exhibit unsafe behaviors such as toxicity, bias, or misinformation, especially in adversarial contexts, which has attracted increasing attention from both academia and industry. Although numerous studies have attempted to evaluate these risks, a comprehensive and systematic survey on safety evaluation of LLMs is still lacking. This work aims to fill this gap by presenting a structured overview of recent advances in safety evaluation of LLMs. Specifically, we propose a four-dimensional taxonomy: (i) Why to evaluate, which explores the background of safety evaluation of LLMs, how it differs from general LLM evaluation, and the significance of such evaluation; (ii) What to evaluate, which examines and categorizes existing safety evaluation tasks based on key capabilities, including dimensions such as toxicity, robustness, ethics, bias and fairness, truthfulness, and related aspects; (iii) Where to evaluate, which summarizes the evaluation metrics, datasets, and benchmarks currently used in safety evaluations; (iv) How to evaluate, which reviews existing mainstream evaluation methods based on the roles of the evaluators, as well as evaluation frameworks that integrate the entire evaluation pipeline. Finally, we identify the challenges in safety evaluation of LLMs and propose promising research directions to promote further advancement in this field. We emphasize the necessity of prioritizing safety evaluation to ensure the reliable and responsible deployment of LLMs in real-world applications.
Authors (8)
Songyang Liu
Chaozhuo Li
Jiameng Qiu
Xi Zhang
Feiran Huang
Litian Zhang
+2 more
Submitted
June 6, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper provides a comprehensive and systematic survey of safety evaluation for Large Language Models (LLMs), addressing the lack of structured overviews in this critical area. It proposes a four-dimensional taxonomy (why, what, where, and how to evaluate) that categorizes existing safety evaluation efforts and offers a foundational framework for future research and development in LLM safety.

Business Value

Ensuring the safe deployment of LLMs is crucial for businesses seeking to avoid reputational damage, legal liability, and user distrust. This survey provides a structured approach to understanding and mitigating these risks, enabling more responsible AI development and deployment.