An Automated Framework for Strategy Discovery, Retrieval, and Evolution in LLM Jailbreak Attacks

📄 Abstract

The widespread deployment of Large Language Models (LLMs) as public-facing web services and APIs has made their security a core concern for the web ecosystem. Jailbreak attacks, one of the most significant threats to LLMs, have recently attracted extensive research. In this paper, we reveal a jailbreak strategy that can effectively evade current defenses: it extracts valuable information from failed or partially successful attack attempts and evolves itself through attack interactions, yielding substantial strategy diversity and adaptability. Inspired by continuous learning and modular design principles, we propose ASTRA, a jailbreak framework that autonomously discovers, retrieves, and evolves attack strategies to achieve more efficient and adaptive attacks. To enable this autonomous evolution, we design a closed-loop "attack-evaluate-distill-reuse" core mechanism that not only generates attack prompts but also automatically distills and generalizes reusable attack strategies from every interaction. To systematically accumulate and apply this attack knowledge, we introduce a three-tier strategy library that categorizes strategies as Effective, Promising, or Ineffective based on their performance scores. The strategy library not only provides precise guidance for attack generation but also offers strong extensibility and transferability. We conduct extensive experiments under a black-box setting, and the results show that ASTRA achieves an average Attack Success Rate (ASR) of 82.7%, significantly outperforming baselines.
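To make the three-tier strategy library concrete, here is a minimal Python sketch of how distilled strategies might be scored and routed into Effective, Promising, and Ineffective tiers. The class names, score thresholds, and retrieval policy are illustrative assumptions; the paper does not publish its exact data structures or cutoffs.

```python
from dataclasses import dataclass, field

# Hypothetical score thresholds; the paper does not report its exact cutoffs.
EFFECTIVE_MIN = 0.7
PROMISING_MIN = 0.3

@dataclass
class Strategy:
    description: str   # natural-language strategy distilled from an attack interaction
    score: float = 0.0 # running performance score (e.g., observed success rate)

@dataclass
class StrategyLibrary:
    """Three-tier library: Effective / Promising / Ineffective, keyed by score."""
    effective: list[Strategy] = field(default_factory=list)
    promising: list[Strategy] = field(default_factory=list)
    ineffective: list[Strategy] = field(default_factory=list)

    def add(self, strategy: Strategy) -> None:
        # Route a distilled strategy into the tier matching its current score.
        if strategy.score >= EFFECTIVE_MIN:
            self.effective.append(strategy)
        elif strategy.score >= PROMISING_MIN:
            self.promising.append(strategy)
        else:
            self.ineffective.append(strategy)

    def retrieve(self, k: int = 3) -> list[Strategy]:
        # Prefer Effective strategies, falling back to Promising ones for diversity.
        ranked = sorted(self.effective + self.promising,
                        key=lambda s: s.score, reverse=True)
        return ranked[:k]
```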

Key Contributions

ASTRA is a framework that autonomously discovers, retrieves, and evolves LLM jailbreak attack strategies. It combines a closed-loop 'attack-evaluate-distill-reuse' mechanism, inspired by continuous learning, with a three-tier strategy library that accumulates and reuses distilled attack knowledge, producing diverse and adaptive attacks that can evade current defenses.
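The following sketch outlines one possible episode of the 'attack-evaluate-distill-reuse' loop, building on the strategy-library sketch above. The attacker, target, and judge interfaces (write_attack, respond, rate, distill_strategy) are hypothetical placeholders for whatever models the authors use, not APIs from the paper.

```python
def astra_episode(behavior, attacker_llm, target_llm, judge, library, max_turns=10):
    """One attack episode against a black-box target: attack -> evaluate -> distill -> reuse."""
    for _ in range(max_turns):
        guidance = library.retrieve()                            # reuse: pull top-ranked strategies
        prompt = attacker_llm.write_attack(behavior, guidance)   # attack: craft a jailbreak prompt
        response = target_llm.respond(prompt)                    # query the black-box target
        score = judge.rate(behavior, prompt, response)           # evaluate: score the outcome

        # distill: summarize why this attempt (partially) worked or failed as a reusable strategy
        summary = attacker_llm.distill_strategy(prompt, response, score)
        library.add(Strategy(description=summary, score=score))

        if score >= EFFECTIVE_MIN:  # success threshold reused from the library sketch (assumed value)
            return prompt, response
    return None, None
```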

Business Value

Enhances the security posture of LLM deployments by providing an automated red-teaming tool for testing and identifying jailbreak vulnerabilities before attackers exploit them, which is crucial for protecting against misuse and ensuring responsible AI development.