Redirecting to original paper in 30 seconds...
Click below to go immediately or wait for automatic redirect
📄 Abstract
Abstract: Large Language Models (LLMs) remain vulnerable to jailbreaking attacks where
adversarial prompts elicit harmful outputs, yet most evaluations focus on
single-turn interactions while real-world attacks unfold through adaptive
multi-turn conversations. We present AutoAdv, a training-free framework for
automated multi-turn jailbreaking that achieves up to 95% attack success rate
on Llama-3.1-8B within six turns a 24 percent improvement over single turn
baselines. AutoAdv uniquely combines three adaptive mechanisms: a pattern
manager that learns from successful attacks to enhance future prompts, a
temperature manager that dynamically adjusts sampling parameters based on
failure modes, and a two-phase rewriting strategy that disguises harmful
requests then iteratively refines them. Extensive evaluation across commercial
and open-source models (GPT-4o-mini, Qwen3-235B, Mistral-7B) reveals persistent
vulnerabilities in current safety mechanisms, with multi-turn attacks
consistently outperforming single-turn approaches. These findings demonstrate
that alignment strategies optimized for single-turn interactions fail to
maintain robustness across extended conversations, highlighting an urgent need
for multi-turn-aware defenses.
Key Contributions
AutoAdv is a novel training-free framework for automated multi-turn jailbreaking of LLMs, achieving up to 95% attack success rate on Llama-3.1-8B. It uniquely combines adaptive mechanisms (pattern manager, temperature manager, two-phase rewriting) to learn from attacks and refine prompts iteratively, demonstrating persistent vulnerabilities in current safety mechanisms against sophisticated, multi-turn adversarial interactions.
Business Value
Understanding and mitigating LLM vulnerabilities is crucial for safe deployment in public-facing applications, preventing misuse for generating misinformation, harmful content, or facilitating malicious activities.