Abstract: The validation of LiDAR-based perception of intelligent mobile systems
operating in open-world applications remains a challenge due to the variability
of real environmental conditions. Virtual simulations allow the generation of
arbitrary scenes under controlled conditions but lack physical sensor
characteristics, such as intensity responses or material-dependent effects. In
contrast, real-world data offers true sensor realism but provides less control
over influencing factors, hindering sufficient validation. Existing approaches
address this problem by augmenting real-world point cloud data, transferring objects between scenes. However, these methods do not consider
validation and remain limited in controllability because they rely on empirical
data. We address these limitations by proposing Point Cloud Recombination, which systematically augments captured point cloud scenes by integrating point clouds acquired from physical target objects measured in controlled laboratory environments. This enables the creation of large numbers and varieties of repeatable, physically accurate test scenes, with phenomena-aware occlusions computed from registered 3D meshes. Using the Ouster OS1-128 Rev7 sensor, we demonstrate the augmentation of real-world urban and rural scenes with humanoid targets in varied clothing and poses, placed at repeatable positions. We
show that the recombined scenes closely match real sensor outputs, enabling
targeted testing, scalable failure analysis, and improved system safety. By
providing controlled yet sensor-realistic data, our method enables trustworthy conclusions about the limitations of specific sensors in combination with their algorithms, e.g., object detection.
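To make the recombination idea concrete, the following is a minimal sketch of how a lab-captured target cloud could be merged into a real scene point cloud. It is not the authors' implementation: the paper computes phenomena-aware occlusions by ray casting against registered 3D meshes, while this sketch approximates visibility with a simple per-(azimuth, elevation)-bin depth test; all function names and parameters (`recombine`, `az_res`, `el_res`) are illustrative assumptions.

```python
# Minimal sketch of the recombination step, assuming angular-bin
# occlusion in place of the paper's mesh-based ray casting. All names
# and parameters here are illustrative, not the authors' code.
import numpy as np

def to_spherical(points):
    """Convert Nx3 Cartesian points (sensor frame) to azimuth, elevation, range."""
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])
    el = np.arcsin(np.clip(points[:, 2] / np.maximum(rng, 1e-9), -1.0, 1.0))
    return az, el, rng

def angular_bins(az, el, az_res, el_res):
    """Quantize ray directions into integer (azimuth, elevation) bins."""
    return zip(np.round(az / az_res).astype(np.int64),
               np.round(el / el_res).astype(np.int64))

def recombine(scene, target, pose,
              az_res=np.radians(0.35), el_res=np.radians(0.35)):
    """Insert a lab-captured target cloud into a real scene.

    scene  : Mx3 scene points in the sensor frame.
    target : Nx3 target points in the target's local frame.
    pose   : 4x4 homogeneous transform placing the target in the scene.
    The default bin size roughly matches the OS1-128 beam spacing
    (an assumption, not a calibrated value).
    """
    # Transform the target into the sensor frame of the scene.
    tgt = (pose[:3, :3] @ target.T).T + pose[:3, 3]

    s_az, s_el, s_rng = to_spherical(scene)
    t_az, t_el, t_rng = to_spherical(tgt)

    # For each angular bin occupied by the target, keep its nearest range.
    nearest = {}
    for key, r in zip(angular_bins(t_az, t_el, az_res, el_res), t_rng):
        if r < nearest.get(key, np.inf):
            nearest[key] = r

    # Scene points behind a target return in the same bin are occluded.
    keep = np.fromiter(
        (r < nearest.get(key, np.inf)
         for key, r in zip(angular_bins(s_az, s_el, az_res, el_res), s_rng)),
        dtype=bool, count=len(scene))
    return np.vstack([scene[keep], tgt])
```

In a test harness, a function like `recombine` would be called once per target pose, sweeping placements across a scene to generate the repeatable scene variants described above; the mesh-based ray casting used in the paper would replace the bin test wherever partial occlusion and material-dependent effects matter.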