Abstract: Human-AI collaboration increasingly drives decision-making across industries,
from medical diagnosis to content moderation. While AI systems promise
efficiency gains by providing automated suggestions for human review, these
workflows can trigger cognitive biases that degrade performance. Yet little is known
about the psychological factors that determine when these collaborations
succeed or fail. We conducted a randomized experiment with 2,784 participants
to examine how task design and individual characteristics shape human responses
to AI-generated suggestions. Using a controlled annotation task, we manipulated
three factors: the quality of the first three AI suggestions, the task burden
imposed by required corrections, and performance-based financial incentives. We
collected demographics, attitudes toward AI, and behavioral data to assess four
performance metrics: accuracy, correction activity, overcorrection, and
undercorrection. Two patterns emerged that challenge conventional assumptions
about human-AI collaboration. First, requiring corrections for flagged AI
errors reduced engagement and increased the tendency to accept incorrect
suggestions, demonstrating how cognitive shortcuts influence collaborative
outcomes. Second, individual attitudes toward AI emerged as the strongest
predictor of performance, surpassing demographic factors. Participants
skeptical of AI detected errors more reliably and achieved higher accuracy,
while those favorable toward automation exhibited dangerous overreliance on
algorithmic suggestions. The findings reveal that successful human-AI
collaboration depends not only on algorithmic performance but also on who
reviews AI outputs and how review processes are structured. Effective human-AI
collaborations require consideration of human psychology: selecting diverse
evaluator samples, measuring attitudes, and designing workflows that counteract
cognitive biases.
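
For a concrete picture of the experimental design summarized above, the sketch below lays out the three manipulated factors and four outcome metrics as data structures and randomly assigns participants to conditions. It is a minimal illustration, assuming two levels per factor; the factor labels, level names, and the `assign_condition` helper are hypothetical, not identifiers from the study.

```python
import random
from itertools import product

# Illustrative sketch of the between-subjects design described in the abstract.
# Factor and level names are assumptions for illustration only.
FACTORS = {
    "initial_ai_quality": ["high", "low"],          # quality of the first three AI suggestions
    "correction_burden": ["required", "optional"],  # must participants correct flagged AI errors?
    "incentive": ["performance_pay", "flat_pay"],   # performance-based financial incentive
}

# Outcome metrics named in the abstract.
METRICS = ["accuracy", "correction_activity", "overcorrection", "undercorrection"]

# All factorial conditions (2 x 2 x 2 = 8 cells under the assumed two levels per factor).
CONDITIONS = [dict(zip(FACTORS, levels)) for levels in product(*FACTORS.values())]


def assign_condition(participant_id: int, rng: random.Random) -> dict:
    """Randomly assign one participant to a condition (hypothetical helper)."""
    condition = rng.choice(CONDITIONS)
    return {
        "participant_id": participant_id,
        **condition,
        # Placeholders to be filled from behavioral data after the annotation task.
        **{metric: None for metric in METRICS},
    }


if __name__ == "__main__":
    rng = random.Random(42)
    # The abstract reports 2,784 participants; assign each to one cell of the design.
    records = [assign_condition(pid, rng) for pid in range(2784)]
    print(len(records), "participants assigned across", len(CONDITIONS), "conditions")
    print(records[0])
```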