Abstract
In Augmented Reality (AR), virtual content enhances user experience by
providing additional information. However, improperly positioned or designed
virtual content can be detrimental to task performance, as it can impair users'
ability to accurately interpret real-world information. In this paper, we
examine two types of task-detrimental virtual content: obstruction attacks, in
which virtual content prevents users from seeing real-world objects, and
information manipulation attacks, in which virtual content interferes with
users' ability to accurately interpret real-world information. We provide a
mathematical framework to characterize these attacks and create a custom
open-source dataset for attack evaluation. To address these attacks, we
introduce ViDDAR (Vision language model-based Task-Detrimental content Detector
for Augmented Reality), a full-reference system that combines Vision Language
Models (VLMs) with deep learning techniques to monitor and evaluate virtual
content in AR environments. ViDDAR employs a user-edge-cloud architecture to
balance detection performance with low latency. To the best of our
knowledge, ViDDAR is the first system to employ VLMs for detecting
task-detrimental content in AR settings. Our evaluation results demonstrate
that ViDDAR effectively understands complex scenes and detects task-detrimental
content, achieving up to 92.15% obstruction detection accuracy with a detection
latency of 533 ms, and 82.46% information manipulation detection accuracy with
a latency of 9.62 s.
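
To make the full-reference idea concrete, below is a minimal sketch of an obstruction check in the spirit of ViDDAR: the system queries a VLM on both the raw camera frame and the AR-composited frame, then flags the pair if real-world objects visible before augmentation disappear afterward. This is an illustrative assumption, not the paper's actual pipeline; the `VLMQuery` interface, the prompt text, the helper names, and the 0.5 threshold are all hypothetical.

```python
from typing import Callable, Set

# Hypothetical VLM interface: given an image path and a prompt, return the
# model's textual answer. In practice this would wrap a VLM served on an
# edge server or in the cloud.
VLMQuery = Callable[[str, str], str]

# Assumed prompt; the paper's actual prompts may differ.
PROMPT = ("List the distinct real-world objects visible in this image, "
          "as a comma-separated list.")

def visible_objects(vlm: VLMQuery, image_path: str) -> Set[str]:
    """Ask the VLM which objects it can see in the frame."""
    answer = vlm(image_path, PROMPT)
    return {obj.strip().lower() for obj in answer.split(",") if obj.strip()}

def obstruction_score(vlm: VLMQuery, raw_frame: str, ar_frame: str) -> float:
    """Fraction of real-world objects in the raw frame that are no longer
    reported in the AR-composited frame (the full-reference comparison)."""
    before = visible_objects(vlm, raw_frame)
    after = visible_objects(vlm, ar_frame)
    return len(before - after) / len(before) if before else 0.0

def is_obstruction_attack(vlm: VLMQuery, raw_frame: str, ar_frame: str,
                          threshold: float = 0.5) -> bool:
    """Flag the frame pair if enough real-world objects disappear.
    The threshold is an assumed tuning parameter."""
    return obstruction_score(vlm, raw_frame, ar_frame) >= threshold
```

In a user-edge-cloud deployment like the one the abstract describes, one would expect frame capture to run on the AR device while the VLM queries run on an edge server or in the cloud, which is consistent with detection latencies in the hundreds of milliseconds to seconds range reported above.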