📄 Abstract
One of the key challenges for modern AI models is ensuring that they provide helpful responses to benign queries while refusing malicious ones. However, models are often vulnerable to multimodal queries whose harmful intent is embedded in images. One approach to safety alignment is training on extensive safety datasets, which incurs significant costs in both dataset curation and training. Inference-time alignment avoids these costs but introduces two drawbacks: excessive refusals of misclassified benign queries and slower inference due to iterative output adjustments. To overcome these limitations, we propose to reformulate queries to strengthen cross-modal attention to safety-critical image regions, enabling accurate risk assessment at the query level. Using the assessed risk, our method adaptively steers activations to generate responses that are safe and helpful without the overhead of iterative output adjustments. We call this Risk-adaptive Activation Steering (RAS). Extensive experiments across multiple multimodal safety and utility benchmarks demonstrate that RAS significantly reduces attack success rates, preserves general task performance, and improves inference speed over prior inference-time defenses.
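To make the steering step concrete, below is a minimal sketch of risk-adaptive activation steering in the spirit the abstract describes: a precomputed safety direction is added to one decoder layer's hidden states, scaled by the risk score assessed for the query. The names `safety_direction`, `risk_score`, the scaling factor `alpha`, and the hooked layer index are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: scale a steering vector by the query-level risk and inject it
# into a decoder layer's hidden states via a forward hook (HuggingFace-style model).
import torch

def make_steering_hook(safety_direction: torch.Tensor, risk_score: float, alpha: float = 1.0):
    """Return a forward hook that shifts hidden states along `safety_direction`,
    with strength proportional to the assessed risk of the query."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        shift = alpha * risk_score * safety_direction.to(device=hidden.device, dtype=hidden.dtype)
        steered = hidden + shift
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Illustrative usage: hook one decoder layer before generation, then remove it.
# handle = model.model.layers[20].register_forward_hook(
#     make_steering_hook(safety_direction, risk_score=0.85)
# )
# outputs = model.generate(**inputs)
# handle.remove()
```

A benign query would receive a risk score near zero, leaving activations essentially unchanged, which is how this style of steering avoids the excessive refusals of fixed inference-time interventions.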
Authors (3)
Jonghyun Park
Minhyuk Seo
Jonghyun Choi
Submitted
October 15, 2025
Key Contributions
This paper proposes Risk-adaptive Activation Steering (RAS) to improve safety alignment in multimodal LLMs. RAS reformulates queries to strengthen cross-modal attention to safety-critical image regions, enabling accurate risk assessment at the query level, and then adaptively steers activations toward safe and helpful responses without iterative output adjustments, overcoming limitations of existing inference-time alignment methods.
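The query-level risk assessment could, for instance, be read from the model's own next-token distribution after the query is reformulated to force attention to the image. The sketch below assumes a LLaVA-style processor/model pair, an illustrative probe prompt, and "yes"/"no" scoring tokens; none of these specifics come from the paper.

```python
# Hedged sketch: reformulate the query into an image-grounded safety probe and
# derive a scalar risk from the relative probability of "yes" vs. "no".
import torch

RISK_PROBE = (
    "Look carefully at the image and the request below.\n"
    "Request: {query}\n"
    "Considering any harmful content shown in the image, would fulfilling this "
    "request be unsafe? Answer yes or no: "
)

@torch.no_grad()
def assess_risk(model, processor, image, query: str) -> float:
    """Return a risk score in [0, 1] from the model's next-token distribution."""
    inputs = processor(
        images=image, text=RISK_PROBE.format(query=query), return_tensors="pt"
    ).to(model.device)
    logits = model(**inputs).logits[0, -1]
    yes_id = processor.tokenizer.encode("yes", add_special_tokens=False)[0]
    no_id = processor.tokenizer.encode("no", add_special_tokens=False)[0]
    probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return probs[0].item()  # probability mass on "yes" (unsafe)
```

The resulting score would then set the steering strength in the hook shown earlier, so the intervention is strong for risky queries and negligible for benign ones.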
Business Value
Enhances the safety and reliability of AI systems, particularly those interacting with users through both text and images. This can lead to more trustworthy AI assistants, content moderation tools, and secure information retrieval systems, reducing risks associated with harmful or malicious content.