Abstract: Vehicle-to-everything (V2X) collaborative perception has emerged as a
promising solution to address the limitations of single-vehicle perception
systems. However, existing V2X datasets are limited in scope, diversity, and
quality. To address these gaps, we present Mixed Signals, a comprehensive V2X
dataset featuring 45.1k point clouds and 240.6k bounding boxes collected from
three connected autonomous vehicles (CAVs) equipped with two different
configurations of LiDAR sensors, plus a roadside unit with dual LiDARs. Our
dataset provides point clouds and bounding box annotations across 10 classes,
ensuring reliable data for perception training. We provide a detailed statistical
analysis of the quality of our dataset and extensively benchmark existing V2X
methods on it. The Mixed Signals dataset is ready-to-use, with precise
alignment and consistent annotations across time and viewpoints. The dataset
website is available at https://mixedsignalsdataset.cs.cornell.edu/.