📄 Abstract
Object detection and segmentation are widely employed in computer vision
applications, yet conventional models like the YOLO series, while efficient and
accurate, are limited to predefined categories, hindering adaptability in open
scenarios. Recent open-set methods leverage text prompts, visual cues, or a
prompt-free paradigm to overcome this, but often trade performance against
efficiency due to high computational demands or deployment complexity. In
this work, we introduce YOLOE, which integrates detection and segmentation
across diverse open prompt mechanisms within a single highly efficient model,
achieving real-time seeing anything. For text prompts, we propose the
Re-parameterizable Region-Text Alignment (RepRTA) strategy, which refines
pretrained textual embeddings via a re-parameterizable lightweight auxiliary
network and enhances visual-textual alignment with zero inference and
transfer overhead. For visual prompts, we present the Semantic-Activated
Visual Prompt Encoder (SAVPE), which employs decoupled semantic and activation
branches to yield improved visual embeddings and accuracy with minimal
complexity. For the prompt-free scenario, we introduce the Lazy Region-Prompt
Contrast (LRPC) strategy, which identifies all objects using a built-in large
vocabulary and specialized embeddings, avoiding reliance on a costly language
model. Extensive experiments
show YOLOE's exceptional zero-shot performance and transferability with high
inference efficiency and low training cost. Notably, on LVIS, with 3$\times$
less training cost and 1.4$\times$ inference speedup, YOLOE-v8-S surpasses
YOLO-Worldv2-S by 3.5 AP. When transferring to COCO, YOLOE-v8-L achieves 0.6
AP$^b$ and 0.4 AP$^m$ gains over closed-set YOLOv8-L with nearly 4$\times$ less
training time. Code and models are available at
https://github.com/THU-MIG/yoloe.
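To make the RepRTA idea above concrete, below is a minimal PyTorch sketch of the re-parameterization pattern it describes: a lightweight auxiliary network refines frozen text embeddings during training, and at deployment the refined embeddings are precomputed once so inference carries no extra cost. The module names, shapes, and the single-linear-layer auxiliary network are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the re-parameterization pattern behind RepRTA.
# All names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepRTASketch(nn.Module):
    """Refine frozen text embeddings with a lightweight auxiliary network
    during training; fold the refinement into static class embeddings at
    deployment, so inference matches a closed-set head."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Hypothetical lightweight auxiliary network (one linear layer).
        self.aux = nn.Linear(dim, dim)

    def forward(self, region_feats: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Training path: refine text embeddings, then compute
        # region-text similarity logits for contrastive alignment.
        refined = F.normalize(self.aux(text_emb), dim=-1)   # (C, D)
        regions = F.normalize(region_feats, dim=-1)         # (N, D)
        return regions @ refined.t()                        # (N, C) logits

    @torch.no_grad()
    def reparameterize(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Deployment path: precompute refined embeddings once; the
        # auxiliary network is then discarded.
        return F.normalize(self.aux(text_emb), dim=-1)

# Usage: cache class embeddings once for a given vocabulary.
model = RepRTASketch()
class_emb = model.reparameterize(torch.randn(80, 512))      # (80, 512)
```

Calling `reparameterize` once per vocabulary and caching the result mirrors the abstract's claim of zero inference and transfer overhead: the auxiliary network never runs at test time.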
Authors (6)
Ao Wang
Lihao Liu
Hui Chen
Zijia Lin
Jungong Han
Guiguang Ding
Key Contributions
Introduces YOLOE, a highly efficient model for real-time 'seeing anything' (detection and segmentation across diverse open prompts). It proposes RepRTA for efficient visual-textual alignment with text prompts, the Semantic-Activated Visual Prompt Encoder (SAVPE) for visual prompts, and the Lazy Region-Prompt Contrast (LRPC) strategy for the prompt-free setting, achieving strong performance without significant inference overhead.
Business Value
Enables real-time perception capabilities for a wider range of objects and scenarios, crucial for applications like autonomous driving, robotics, and dynamic surveillance systems, improving safety and automation.