Abstract

The rapid growth of artificial intelligence (AI) technologies has raised
major privacy and ethical concerns. However, existing AI incident taxonomies
and guidelines lack grounding in real-world cases, limiting their effectiveness
for prevention and mitigation. We analyzed 202 real-world AI privacy and
ethical incidents to develop a taxonomy that classifies them across AI
lifecycle stages and captures contributing factors, including causes,
responsible entities, sources of disclosure, and impacts. Our findings reveal
widespread harms stemming from poor organizational decisions and legal
non-compliance, limited corrective interventions, and infrequent reporting by
AI developers and adopting entities. Our taxonomy offers a structured approach
to systematic incident reporting and highlights weaknesses in current AI
governance frameworks. They also offer actionable guidance for policymakers and
practitioners to strengthen user protections, develop targeted AI policies,
enhance reporting practices, and foster responsible AI governance and
innovation, especially in contexts such as social media and child protection.
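To make the taxonomy's dimensions concrete, the sketch below shows one way a structured incident record could be represented. This is purely illustrative: the field names mirror the dimensions named in the abstract (lifecycle stage, causes, responsible entities, sources of disclosure, impacts), but the specific stage labels, value vocabularies, and granularity are assumptions, not the paper's published coding scheme.

```python
from dataclasses import dataclass, field
from enum import Enum


class LifecycleStage(Enum):
    """Illustrative AI lifecycle stages; the paper's own stage names may differ."""
    DATA_COLLECTION = "data collection"
    MODEL_DEVELOPMENT = "model development"
    DEPLOYMENT = "deployment"
    POST_DEPLOYMENT = "post-deployment"


@dataclass
class IncidentRecord:
    """One structured entry in a hypothetical incident database.

    Fields correspond to the taxonomy dimensions named in the abstract;
    their exact values here are fictional examples.
    """
    incident_id: str
    stage: LifecycleStage
    causes: list[str] = field(default_factory=list)
    responsible_entities: list[str] = field(default_factory=list)
    disclosure_sources: list[str] = field(default_factory=list)
    impacts: list[str] = field(default_factory=list)


# Usage example with a fictional record, echoing the harms the abstract
# highlights (poor organizational decisions, legal non-compliance).
example = IncidentRecord(
    incident_id="INC-0001",
    stage=LifecycleStage.DEPLOYMENT,
    causes=["poor organizational decisions", "legal non-compliance"],
    responsible_entities=["AI adopting entity"],
    disclosure_sources=["news media"],
    impacts=["user privacy harm"],
)
print(example.stage.value, example.causes)
```

A record structure like this would support the systematic, machine-readable incident reporting the abstract calls for, though the paper's actual classification scheme should be consulted for the authoritative categories.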