
Characterizing Selective Refusal Bias in Large Language Models

Abstract

Safety guardrails in large language models (LLMs) are developed to prevent malicious users from generating toxic content at a large scale. However, these measures can inadvertently introduce or reflect new biases, as LLMs may refuse to generate harmful content targeting some demographic groups and not others. We explore this selective refusal bias in LLM guardrails through the lens of refusal rates of targeted individual and intersectional demographic groups, types of LLM responses, and length of generated refusals. Our results show evidence of selective refusal bias across gender, sexual orientation, nationality, and religion attributes. This leads us to investigate additional safety implications via an indirect attack, where we target previously refused groups. Our findings emphasize the need for more equitable and robust performance in safety guardrails across demographic groups.
Authors (2)
Adel Khorramrouz
Sharon Levy
Submitted
October 31, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper characterizes selective refusal bias in LLM safety guardrails, demonstrating that LLMs may refuse to generate harmful content targeting some demographic groups while complying for others. The study quantifies this bias across gender, sexual orientation, nationality, and religion attributes (a measurement sketch follows below) and highlights the need for more equitable and robust safety measures to prevent unintended discrimination.
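To make the measurement concrete, the sketch below shows one way per-group refusal rates could be tallied. This is an illustrative assumption, not the authors' code: the keyword-based is_refusal heuristic and the (group, response) data format are hypothetical, and the paper's own analysis also categorizes response types and refusal lengths.

```python
from collections import defaultdict

# Hypothetical keyword-based refusal detector; the paper's actual
# classification of response types is more fine-grained.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

def is_refusal(response: str) -> bool:
    """Flag a response as a refusal if it opens with a common refusal phrase."""
    return response.strip().lower().startswith(REFUSAL_MARKERS)

def refusal_rates(samples):
    """Compute per-group refusal rates from (group, response) pairs."""
    totals, refusals = defaultdict(int), defaultdict(int)
    for group, response in samples:
        totals[group] += 1
        refusals[group] += is_refusal(response)
    return {group: refusals[group] / totals[group] for group in totals}

# Example: compare refusal rates across two hypothetical target groups.
samples = [
    ("group_a", "I can't help with that request."),
    ("group_a", "Sure, here is the text you asked for..."),
    ("group_b", "I'm sorry, but I cannot generate that content."),
    ("group_b", "I cannot assist with this."),
]
print(refusal_rates(samples))  # {'group_a': 0.5, 'group_b': 1.0}
```

A gap between groups in such rates is the kind of disparity the paper reports; comparing rates before and after rephrasing prompts to target previously refused groups corresponds to its indirect-attack analysis.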

Business Value

Ensuring fairness and equity in AI safety guardrails is crucial for building trust and avoiding reputational damage and legal liability. Addressing selective refusal bias in LLMs is essential for responsible AI deployment.