📄 Abstract
Rigorous testing of autonomous robots, such as self-driving vehicles, is
essential to ensure their safety in real-world deployments. This requires
building high-fidelity simulators to test scenarios beyond those that can be
safely or exhaustively collected in the real world. Existing neural rendering
methods based on NeRF and 3DGS hold promise but suffer from low rendering
speeds or can only render pinhole camera models, hindering their suitability
for applications that commonly require high-distortion lenses and LiDAR data.
Multi-sensor simulation poses additional challenges as existing methods handle
cross-sensor inconsistencies by favoring the quality of one modality at the
expense of others. To overcome these limitations, we propose SimULi, the first
method capable of rendering arbitrary camera models and LiDAR data in
real time. Our method extends 3DGUT, which natively supports complex camera
models, with LiDAR support via an automated tiling strategy for arbitrary
spinning LiDAR models and ray-based culling. To address cross-sensor
inconsistencies, we design a factorized 3D Gaussian representation and
anchoring strategy that reduces mean camera and depth error by up to 40%
compared to existing methods. SimULi renders 10-20x faster than ray tracing
approaches and 1.5-10x faster than prior rasterization-based work (and handles
a wider range of camera models). When evaluated on two widely benchmarked
autonomous driving datasets, SimULi matches or exceeds the fidelity of existing
state-of-the-art methods across numerous camera and LiDAR metrics.
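The automated tiling strategy for spinning LiDAR is only named in the abstract above. As a rough illustration of the underlying idea, the sketch below generates rays for a generic spinning LiDAR sweep and bins them into azimuth/elevation tiles, the kind of grouping a tile-based rasterizer could process. The function names, beam layout, and tile sizes are illustrative assumptions, not SimULi's actual implementation.

```python
# Minimal sketch (NOT the authors' code): rays for a generic spinning LiDAR
# sweep, binned into azimuth/elevation tiles for tile-based processing.
import numpy as np

def spinning_lidar_rays(elevations_deg, n_azimuth=1800):
    """Return unit ray directions (n_beams * n_azimuth, 3) for one full sweep."""
    az = np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False)
    el = np.deg2rad(np.asarray(elevations_deg))
    az_grid, el_grid = np.meshgrid(az, el)  # shape (n_beams, n_azimuth)
    dirs = np.stack([np.cos(el_grid) * np.cos(az_grid),
                     np.cos(el_grid) * np.sin(az_grid),
                     np.sin(el_grid)], axis=-1)
    return dirs.reshape(-1, 3)

def tile_indices(dirs, az_tiles=64, el_tiles=8):
    """Bin ray directions into a regular azimuth/elevation tile grid."""
    az = np.mod(np.arctan2(dirs[:, 1], dirs[:, 0]), 2.0 * np.pi)
    el = np.arcsin(np.clip(dirs[:, 2], -1.0, 1.0))
    ai = np.minimum((az / (2.0 * np.pi) * az_tiles).astype(int), az_tiles - 1)
    el_min, el_max = el.min(), el.max()
    ei = np.minimum(((el - el_min) / (el_max - el_min + 1e-9)
                     * el_tiles).astype(int), el_tiles - 1)
    return ai * el_tiles + ei

# Example: a hypothetical 32-beam sensor with uniformly spaced elevations.
rays = spinning_lidar_rays(np.linspace(-25.0, 15.0, 32))
tiles = tile_indices(rays)
```

A real system would derive the tile layout automatically from the sensor's beam table rather than assuming a uniform grid, which is presumably what "automated" refers to here.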
Authors (8)
Haithem Turki
Qi Wu
Xin Kang
Janick Martinez Esturo
Shengyu Huang
Ruilong Li
+2 more
Submitted
October 14, 2025
Key Contributions
SimULi is the first method capable of rendering arbitrary camera models and LiDAR data in real time. It extends 3DGUT with LiDAR support via an automated tiling strategy, overcoming the low rendering speeds and restricted camera models of existing neural rendering methods, and addresses cross-sensor inconsistencies with a factorized 3D Gaussian representation and anchoring strategy.
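Ray-based culling is likewise only named, not described. The sketch below shows one common form of the idea under stated assumptions: a Gaussian is kept for a ray only if the ray passes within a conservative k-sigma bound of the Gaussian's center. The function cull_gaussians and the isotropic bound are hypothetical stand-ins, not SimULi's method.

```python
# Minimal sketch (illustrative, not SimULi's implementation) of ray-based
# culling: keep a Gaussian only if the ray passes within k standard
# deviations of its center, using an isotropic bound (k * max scale) as a
# conservative stand-in for the true anisotropic extent.
import numpy as np

def cull_gaussians(ray_o, ray_d, means, scales, k=3.0):
    """Return a boolean mask of Gaussians whose k-sigma bound the ray hits.

    ray_o: (3,) ray origin; ray_d: (3,) unit ray direction
    means: (N, 3) Gaussian centers; scales: (N, 3) per-axis std deviations
    """
    to_center = means - ray_o                    # (N, 3)
    t = to_center @ ray_d                        # closest approach along ray
    t = np.clip(t, 0.0, None)                    # ignore hits behind the origin
    closest = ray_o + np.outer(t, ray_d)         # (N, 3) closest points on ray
    dist = np.linalg.norm(means - closest, axis=1)
    radius = k * scales.max(axis=1)              # conservative isotropic bound
    return dist <= radius
```

In a rasterization pipeline this test would run per tile rather than per ray, but the geometric criterion is the same.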
Business Value
Enables more comprehensive and efficient testing of autonomous driving systems by providing realistic, real-time simulation of diverse sensor data, including LiDAR and complex camera models, reducing the need for extensive real-world testing.