📄 Abstract
Reconstructing dynamic 3D urban scenes is crucial for autonomous driving, yet current methods face a stark trade-off between fidelity and computational cost. This inefficiency stems from their semantically agnostic design, which allocates resources uniformly, treating static backgrounds and safety-critical objects with equal importance. To address this, we introduce Priority-Adaptive Gaussian Splatting (PAGS), a framework that injects task-aware semantic priorities directly into the 3D reconstruction and rendering pipeline. PAGS makes two core contributions: (1) a Semantically-Guided Pruning and Regularization strategy, which employs a hybrid importance metric to aggressively simplify non-critical scene elements while preserving fine-grained detail on objects vital for navigation, and (2) a Priority-Driven Rendering pipeline, which uses a priority-based depth pre-pass to aggressively cull occluded primitives and accelerate the final shading computations. Extensive experiments on the Waymo and KITTI datasets demonstrate that PAGS achieves exceptional reconstruction quality, particularly on safety-critical objects, while significantly reducing training time and boosting rendering speeds to over 350 FPS.
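To make the first contribution more concrete, the sketch below shows one plausible form of a hybrid importance metric for semantically-guided pruning: a geometric significance term (opacity and scale) blended with a per-class priority weight, followed by a top-k pruning mask. The class list, priority weights, blend factor `alpha`, and `keep_ratio` are illustrative assumptions, not PAGS's published formulation.

```python
import torch

# Hypothetical class priorities (higher = more safety-critical); the actual
# taxonomy and weights used by PAGS are not given in the abstract.
PRIORITY = {"vehicle": 1.0, "pedestrian": 1.0, "cyclist": 0.9,
            "road": 0.4, "building": 0.2, "vegetation": 0.1}
CLASS_NAMES = list(PRIORITY.keys())

def hybrid_importance(opacity, scales, class_ids, alpha=0.5):
    """Blend geometric significance with semantic priority per Gaussian.

    opacity:   (N,) sigmoid-activated opacities
    scales:    (N, 3) per-axis Gaussian scales
    class_ids: (N,) integer index into CLASS_NAMES
    alpha:     assumed blend factor between the two terms
    """
    # Geometric term: large, opaque Gaussians contribute more to the render.
    geo = opacity * scales.prod(dim=-1).clamp(min=1e-8).pow(1.0 / 3.0)
    geo = geo / geo.max().clamp(min=1e-8)

    # Semantic term: task-aware priority looked up from each class label.
    sem = torch.tensor([PRIORITY[CLASS_NAMES[c]] for c in class_ids.tolist()])

    return alpha * geo + (1.0 - alpha) * sem

def prune_mask(importance, keep_ratio=0.6):
    """Boolean mask keeping the top `keep_ratio` fraction of Gaussians."""
    k = max(1, int(keep_ratio * importance.numel()))
    threshold = torch.topk(importance, k).values.min()
    return importance >= threshold
```

In this reading, `alpha` controls how much low-priority geometry survives pruning: a smaller value lets the semantic term dominate, so background Gaussians are removed even when they are geometrically prominent.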
Key Contributions
This paper introduces Priority-Adaptive Gaussian Splatting (PAGS), a framework for reconstructing dynamic 3D urban scenes that balances reconstruction fidelity against computational cost by injecting task-aware semantic priorities into the pipeline. It combines semantically-guided pruning with a priority-driven rendering pipeline to reconstruct and render safety-critical objects in high detail while aggressively simplifying non-critical scene elements.
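The priority-driven rendering side can be sketched in a similarly simplified way: build a coarse depth buffer from near-opaque, high-priority splats, then discard any Gaussian whose center depth falls behind that buffer by more than a small margin before the full alpha-blending pass. The per-pixel center test, `opacity_thresh`, `priority_thresh`, and the `margin` slack are assumptions for illustration; the paper's actual tile-based rasterizer is more involved than this.

```python
import torch

def priority_depth_prepass(px, py, depth, opacity, priority, H, W,
                           opacity_thresh=0.7, priority_thresh=0.5):
    """Coarse depth buffer built from near-opaque, high-priority Gaussians.

    px, py:   (N,) integer pixel coordinates of projected Gaussian centers
    depth:    (N,) camera-space depths
    opacity, priority: (N,) values in [0, 1] (assumed conventions)
    """
    buf = torch.full((H * W,), float("inf"))
    sel = (opacity > opacity_thresh) & (priority > priority_thresh)
    idx = py[sel].long() * W + px[sel].long()
    # 'amin' keeps the nearest selected depth seen at each pixel.
    buf.scatter_reduce_(0, idx, depth[sel], reduce="amin")
    return buf.view(H, W)

def cull_occluded(px, py, depth, depth_buffer, margin=0.05):
    """Keep-mask: drop Gaussians whose center lies behind the pre-pass
    depth by more than `margin` (slack for splat extent, assumed)."""
    H, W = depth_buffer.shape
    ref = depth_buffer.view(-1)[py.long() * W + px.long()]
    return depth < ref + margin
```

Only Gaussians passing the keep-mask would be handed to the full shading stage, which is where the culling would translate into higher frame rates.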
Business Value
Enabling efficient and high-fidelity 3D reconstruction of dynamic driving scenes is crucial for the development and deployment of autonomous vehicles, improving their perception and navigation capabilities.