This paper introduces an interactive agent that generates verifiable explanations for AI models in high-stakes domains by strategically seeking external visual evidence. The approach is optimized via reinforcement learning and significantly improves calibrated accuracy, and the paper additionally proposes a causal intervention method to validate the faithfulness of the agent's reasoning, addressing the critical need for trust and auditability in AI decision-making.
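As a rough illustration of the faithfulness check described above, the sketch below (a minimal, hypothetical example; the function and class names are not from the paper) ablates a cited piece of retrieved evidence and compares the model's answer confidence before and after. A large confidence drop suggests the explanation genuinely depended on that evidence, while a negligible drop suggests a post-hoc rationale.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Evidence:
    """A piece of external visual evidence the agent retrieved (hypothetical)."""
    name: str
    features: List[float]


def intervention_score(
    predict: Callable[[List[Evidence]], float],
    evidence: List[Evidence],
    cited_index: int,
) -> float:
    """Causal-intervention check: remove the cited evidence and measure
    how much the model's confidence in its answer changes."""
    full_conf = predict(evidence)
    ablated = [e for i, e in enumerate(evidence) if i != cited_index]
    ablated_conf = predict(ablated)
    return full_conf - ablated_conf


if __name__ == "__main__":
    # Toy stand-in predictor: confidence grows with the total evidence signal.
    def toy_predict(evs: List[Evidence]) -> float:
        total = sum(sum(e.features) for e in evs)
        return min(1.0, 0.5 + 0.1 * total)

    evs = [Evidence("reference scan", [2.0, 1.5]), Evidence("unrelated image", [0.1])]
    drop = intervention_score(toy_predict, evs, cited_index=0)
    print(f"Confidence drop when ablating cited evidence: {drop:.2f}")
```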
This work enhances trust and reliability in AI systems used in critical applications such as healthcare, supporting better decision-making and potentially reducing errors and liability.