This paper investigates the factors that influence the faithfulness of explanations generated by LLMs, with a focus on healthcare applications. It finds that the quality and quantity of few-shot examples, along with prompt design, significantly affect faithfulness, and that instruction tuning improves faithfulness on medical tasks, offering practical guidance for deployment.
These findings can increase trust and safety in AI-driven decision support systems, especially in healthcare, by helping ensure that explanations are reliable and that clinicians can understand the basis of AI recommendations.