
Agentic AI Security: Threats, Defenses, Evaluation, and Open Challenges

Abstract

Agentic AI systems, powered by large language models (LLMs) and endowed with planning, tool use, memory, and autonomy, are emerging as powerful, flexible platforms for automation. Their ability to autonomously execute tasks across web, software, and physical environments creates new and amplified security risks, distinct from both traditional AI safety and conventional software security. This survey outlines a taxonomy of threats specific to agentic AI, reviews recent benchmarks and evaluation methodologies, and discusses defense strategies from both technical and governance perspectives. We synthesize current research and highlight open challenges, aiming to support the development of secure-by-design agent systems.
Authors (4)
Shrestha Datta
Shahriar Kabir Nahin
Anshuman Chhabra
Prasant Mohapatra
Submitted
October 27, 2025
arXiv Category
cs.AI
arXiv PDF

Key Contributions

This survey provides a comprehensive overview of the security landscape for agentic AI systems powered by LLMs. It introduces a taxonomy of novel threats, reviews existing evaluation methodologies, and discusses defense strategies, highlighting the security risks unique to autonomous AI agents and outlining open research challenges.

Business Value

Crucial for organizations developing or deploying agentic AI systems: it enables them to proactively identify and mitigate security risks, protect sensitive data, and ensure the safe operation of autonomous agents.