
Exposing Blindspots: Cultural Bias Evaluation in Generative Image Models

Abstract

Generative image models produce striking visuals yet often misrepresent culture. Prior work has examined cultural bias mainly in text-to-image (T2I) systems, leaving image-to-image (I2I) editors underexplored. We bridge this gap with a unified evaluation across six countries, an 8-category/36-subcategory schema, and era-aware prompts, auditing both T2I generation and I2I editing under a standardized protocol that yields comparable diagnostics. Using open models with fixed settings, we derive cross-country, cross-era, and cross-category evaluations. Our framework combines standard automatic metrics, a culture-aware retrieval-augmented VQA, and expert human judgments collected from native reviewers. To enable reproducibility, we release the complete image corpus, prompts, and configurations. Our study reveals three findings: (1) under country-agnostic prompts, models default to Global-North, modern-leaning depictions that flatten cross-country distinctions; (2) iterative I2I editing erodes cultural fidelity even when conventional metrics remain flat or improve; and (3) I2I models apply superficial cues (palette shifts, generic props) rather than era-consistent, context-aware changes, often retaining source identity for Global-South targets. These results highlight that culture-sensitive edits remain unreliable in current systems. By releasing standardized data, prompts, and human evaluation protocols, we provide a reproducible, culture-centered benchmark for diagnosing and tracking cultural bias in generative image models.
Authors (11)
Huichan Seo
Sieun Choi
Minki Hong
Yi Zhou
Junseo Kim
Lukman Ismaila
+5 more
Submitted
October 22, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Introduces a unified evaluation framework to assess cultural bias in both text-to-image (T2I) and image-to-image (I2I) generative models across six countries and different eras. The framework combines automatic metrics, a culture-aware VQA system, and expert human judgments, revealing that models often default to Global-North, modern depictions.
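To make the framework's cross-country, cross-era, and cross-category diagnostics concrete, here is a minimal sketch of how per-image fidelity scores could be aggregated along each evaluation axis. The record layout, country/era/category labels, and scores are illustrative assumptions, not the authors' actual data format or metric.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-image records: (country, era, category, fidelity in [0, 1]).
# The grouping axes mirror the paper's cross-country / cross-era /
# cross-category evaluations; the values here are made up for illustration.
records = [
    ("Korea",   "traditional", "food",     0.62),
    ("Korea",   "modern",      "food",     0.81),
    ("Nigeria", "traditional", "clothing", 0.48),
    ("Nigeria", "modern",      "clothing", 0.74),
]

def aggregate(records, axis):
    """Mean fidelity grouped along one axis (0=country, 1=era, 2=category)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[axis]].append(rec[3])
    return {key: mean(scores) for key, scores in groups.items()}

by_era = aggregate(records, axis=1)
# A traditional-vs-modern gap here would echo finding (1): weaker
# fidelity for traditional, non-Global-North depictions.
```

The same grouping applied per country or per category would surface which cultures and domains a model flattens most, which is the kind of diagnostic the unified protocol is designed to yield.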

Business Value

Helps developers and users understand and mitigate cultural bias in generative AI, producing more equitable and representative generated content, which is crucial for global brand consistency and ethical AI deployment.