MIP against Agent: Malicious Image Patches Hijacking Multimodal OS Agents

Abstract

Recent advances in operating system (OS) agents have enabled vision-language models (VLMs) to directly control a user's computer. Unlike conventional VLMs that passively output text, OS agents autonomously perform computer-based tasks in response to a single user prompt. OS agents do so by capturing, parsing, and analysing screenshots and executing low-level actions via application programming interfaces (APIs), such as mouse clicks and keyboard inputs. This direct interaction with the OS significantly raises the stakes, as failures or manipulations can have immediate and tangible consequences. In this work, we uncover a novel attack vector against these OS agents: Malicious Image Patches (MIPs), adversarially perturbed screen regions that, when captured by an OS agent, induce it to perform harmful actions by exploiting specific APIs. For instance, a MIP can be embedded in a desktop wallpaper or shared on social media to cause an OS agent to exfiltrate sensitive user data. We show that MIPs generalise across user prompts and screen configurations, and that they can hijack multiple OS agents even during the execution of benign instructions. These findings expose critical security vulnerabilities in OS agents that have to be carefully addressed before their widespread deployment.
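To make the screenshot-to-action loop described above concrete, here is a minimal, hypothetical sketch of such an agent. It uses `mss` for screen capture and `pyautogui` for low-level OS actions; the `query_vlm` function and the action schema are illustrative placeholders, not the paper's actual implementation.

```python
# Illustrative sketch of an OS-agent control loop (not the paper's code).
# Assumes a VLM backend behind `query_vlm` that maps a screenshot plus a user
# prompt to a structured low-level action; the action schema is hypothetical.
import time
import mss
import mss.tools
import pyautogui


def capture_screenshot(path: str = "screen.png") -> str:
    """Capture the primary monitor and save it to disk."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        mss.tools.to_png(shot.rgb, shot.size, output=path)
    return path


def query_vlm(screenshot_path: str, prompt: str) -> dict:
    """Placeholder for the VLM call that parses the screenshot and
    returns the next action. Plug in a real multimodal model here."""
    raise NotImplementedError


def execute_action(action: dict) -> None:
    """Translate the model's structured action into OS-level API calls."""
    if action["type"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["type"] == "type":
        pyautogui.typewrite(action["text"], interval=0.02)
    elif action["type"] == "hotkey":
        pyautogui.hotkey(*action["keys"])


def run_agent(user_prompt: str, max_steps: int = 20) -> None:
    """Screenshot -> VLM -> action loop until the model signals completion."""
    for _ in range(max_steps):
        shot = capture_screenshot()
        action = query_vlm(shot, user_prompt)
        if action["type"] == "done":
            break
        # Everything rendered on screen, including a malicious patch,
        # has already influenced this decision before it is executed.
        execute_action(action)
        time.sleep(0.5)
```

Because every on-screen pixel feeds into the model's next decision, an adversarially perturbed region anywhere in the captured frame can steer the agent's actions, which is precisely the attack surface MIPs exploit.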

Key Contributions

This paper identifies a novel attack vector, Malicious Image Patches (MIPs), against multimodal OS agents. MIPs are adversarially perturbed image regions that, when processed by an OS agent, exploit specific APIs to induce harmful actions, such as data exfiltration, posing a significant security risk to AI-controlled systems.
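As a rough illustration of how such a patch could be optimised in principle, the sketch below applies projected-gradient-style updates to a bounded screen region so that an image-conditioned model assigns high likelihood to an attacker-chosen action string. The `model.loss` interface, the loss choice, and the perturbation budget are assumptions made for illustration only; they are not the authors' method, which additionally optimises over multiple prompts and screen configurations to obtain the generalisation reported in the abstract.

```python
# Hedged sketch of bounded patch optimisation in the spirit of a MIP attack.
# Assumes a differentiable wrapper `model.loss(image, target_text)` returning the
# negative log-likelihood of the attacker's target action text (an assumption).
import torch


def optimise_patch(model, screenshot: torch.Tensor, target_text: str,
                   region: tuple, epsilon: float = 16 / 255,
                   step_size: float = 1 / 255, num_steps: int = 500) -> torch.Tensor:
    """PGD-style optimisation of a screen region (y0, y1, x0, x1) so the model
    prefers the target action. `screenshot` is a [C, H, W] tensor in [0, 1]."""
    y0, y1, x0, x1 = region
    original = screenshot.clone()
    patch = screenshot[:, y0:y1, x0:x1].clone()

    for _ in range(num_steps):
        adv = original.clone()
        adv[:, y0:y1, x0:x1] = patch
        adv.requires_grad_(True)

        loss = model.loss(adv.unsqueeze(0), target_text)  # assumed interface
        loss.backward()

        with torch.no_grad():
            grad = adv.grad[:, y0:y1, x0:x1]
            patch = patch - step_size * grad.sign()       # descend on the NLL
            delta = (patch - original[:, y0:y1, x0:x1]).clamp(-epsilon, epsilon)
            patch = (original[:, y0:y1, x0:x1] + delta).clamp(0.0, 1.0)

    result = original.clone()
    result[:, y0:y1, x0:x1] = patch
    return result
```

This single-screenshot loop only conveys the core idea of gradient-based, norm-bounded patch optimisation; the patch stays confined to one screen region, which is what allows it to be embedded in content such as a wallpaper or a social media post.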

Business Value

This work is crucial for developing secure AI agents and operating systems, helping prevent malicious actors from exploiting AI vulnerabilities to compromise user data and system integrity.