📄 Abstract
Large language models (LLMs) have shown potential in supporting
decision-making applications, particularly as personal assistants in the
financial, healthcare, and legal domains. While prompt engineering strategies
have enhanced the capabilities of LLMs in decision-making, cognitive biases
inherent to LLMs present significant challenges. Cognitive biases are
systematic patterns of deviation from norms or rationality in decision-making
that can lead to inaccurate outputs. Existing cognitive bias
mitigation strategies assume that input prompts contain only one type of
cognitive bias, limiting their effectiveness in more challenging scenarios
involving multiple cognitive biases. To fill this gap, we propose a cognitive
debiasing approach, self-adaptive cognitive debiasing (SACD), that enhances the
reliability of LLMs by iteratively refining prompts. Our method mitigates
potential cognitive biases in prompts through three sequential steps applied
iteratively: bias determination, bias analysis, and cognitive debiasing. We evaluate SACD
on finance, healthcare, and legal decision-making tasks using both open-weight
and closed-weight LLMs. Compared to advanced prompt engineering methods and
existing cognitive debiasing techniques, SACD achieves the lowest average bias
scores in both single-bias and multi-bias settings.
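
The abstract describes SACD as an iterative three-step loop over the input prompt. As a rough illustration only, the sketch below shows one way such a loop could be wired around a generic `llm` callable; the function name `sacd_refine`, the prompt templates, and the round limit are assumptions made for illustration, not the authors' implementation.

```python
from typing import Callable

def sacd_refine(prompt: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    """Iteratively rewrite `prompt` to remove potential cognitive biases."""
    for _ in range(max_rounds):
        # Step 1: bias determination - does the prompt contain a cognitive bias?
        verdict = llm(
            "Does the following prompt contain any cognitive bias? "
            "Answer 'yes' or 'no'.\n\nPrompt:\n" + prompt
        )
        if verdict.strip().lower().startswith("no"):
            break  # no remaining bias detected; stop refining
        # Step 2: bias analysis - name the bias and locate the text that introduces it.
        analysis = llm(
            "Identify the cognitive bias in the prompt below and explain which "
            "part of the text introduces it.\n\nPrompt:\n" + prompt
        )
        # Step 3: cognitive debiasing - rewrite the prompt to remove that bias
        # while preserving the original task.
        prompt = llm(
            "Rewrite the prompt so that the bias described in the analysis is "
            "removed, keeping the task unchanged.\n\nAnalysis:\n" + analysis
            + "\n\nPrompt:\n" + prompt
        )
    return prompt
```

With an actual model call, e.g. `debiased = sacd_refine(user_prompt, llm=my_chat_completion)`, the returned prompt would be the one passed to the downstream decision-making task; `my_chat_completion` is a placeholder for whatever LLM interface is in use.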
Authors (7)
Yougang Lyu
Shijie Ren
Yue Feng
Zihan Wang
Zhumin Chen
Zhaochun Ren
+1 more
Key Contributions
Proposes Self-Adaptive Cognitive Debiasing (SACD), a novel approach to enhance LLM reliability in decision-making by iteratively refining prompts to mitigate multiple cognitive biases. SACD involves bias determination, analysis, and debiasing, addressing the limitations of existing methods that handle only single bias types.
Business Value
Increases the trustworthiness and accuracy of LLM-powered decision support systems in critical domains like finance, healthcare, and law, reducing risks associated with biased AI outputs.