📄 Abstract
The rapid deployment of artificial intelligence (AI) and machine learning (ML)
systems in socially consequential domains has raised growing concerns about
their trustworthiness, including potentially discriminatory behaviour. Research
in algorithmic fairness has produced a proliferation of mathematical
definitions and metrics, yet persistent misconceptions and limitations, both
within and beyond the fairness community, limit their effectiveness: no
consensus has been reached on what fairness means, prevailing measures are
tailored primarily to binary group settings, and intersectional contexts are
handled only superficially. Here we critically examine these misconceptions and
argue that fairness cannot be reduced to purely technical constraints on
models. We also analyse the limitations of existing fairness measures through
conceptual analysis and empirical illustration, showing their limited
applicability to complex real-world scenarios, challenging prevailing views on
the incompatibility between accuracy and fairness, as well as among fairness
measures themselves, and outlining three principles worth considering in the
design of fairness measures. We believe these findings will help bridge the gap
between technical formalisation and social realities and meet the challenges of
real-world AI/ML deployment.