
SpatialSplat: Efficient Semantic 3D from Sparse Unposed Images

Abstract

A major breakthrough in 3D reconstruction is the feedforward paradigm, which generates pixel-wise 3D points or Gaussian primitives from sparse, unposed images. To incorporate semantics while avoiding the significant memory and storage costs of high-dimensional semantic features, existing methods extend this paradigm by associating each primitive with a compressed semantic feature vector. These methods have two major limitations: (a) the naively compressed features compromise expressiveness, limiting the model's ability to capture fine-grained semantics, and (b) pixel-wise primitive prediction introduces redundancy in overlapping areas, causing unnecessary memory overhead. To address these limitations, we introduce SpatialSplat, a feedforward framework that produces redundancy-aware Gaussians and capitalizes on a dual-field semantic representation. Building on the insight that primitives within the same instance exhibit high semantic consistency, we decompose the semantic representation into a coarse feature field that encodes uncompressed semantics with minimal primitives, and a fine-grained yet low-dimensional feature field that captures detailed inter-instance relationships. Moreover, we propose a selective Gaussian mechanism that retains only essential Gaussians in the scene, effectively eliminating redundant primitives. SpatialSplat learns accurate semantics and detailed instance priors with more compact 3D Gaussians, making semantic 3D reconstruction more practical. We conduct extensive experiments to evaluate our method, demonstrating a remarkable 60% reduction in scene representation parameters while achieving superior performance over state-of-the-art methods. The code is available at https://github.com/shengyuuu/SpatialSplat.git
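
To make the dual-field idea concrete, here is a minimal PyTorch sketch, not the paper's implementation: all names (coarse_features, instance_logits, query_semantics) and dimensions are assumptions for illustration. It shows how uncompressed semantics can be stored once per instance while each Gaussian carries only a low-dimensional fine feature plus a soft instance assignment.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes; the paper's actual dimensions may differ.
D_COARSE = 512    # uncompressed semantic feature dim (e.g., a vision-language embedding)
D_FINE = 8        # low-dimensional fine-grained feature dim
N_INSTANCES = 16  # coarse-field primitives, one per instance
N_GAUSSIANS = 4096

# Coarse field: one uncompressed semantic vector per instance (few primitives).
coarse_features = torch.randn(N_INSTANCES, D_COARSE)

# Fine field: a compact per-Gaussian embedding capturing inter-instance detail,
# plus a soft assignment of each Gaussian to an instance.
fine_features = torch.randn(N_GAUSSIANS, D_FINE)
instance_logits = torch.randn(N_GAUSSIANS, N_INSTANCES)

def query_semantics(coarse, fine, logits):
    """Compose per-Gaussian semantics from the two fields.

    Each Gaussian inherits the uncompressed semantics of the instance it is
    assigned to (via softmax weights), so full-dimensional semantic vectors
    never need to be stored or predicted per Gaussian.
    """
    assign = F.softmax(logits, dim=-1)  # (N_GAUSSIANS, N_INSTANCES)
    per_gaussian = assign @ coarse      # (N_GAUSSIANS, D_COARSE)
    return per_gaussian, fine

sem, fine = query_semantics(coarse_features, fine_features, instance_logits)
print(sem.shape, fine.shape)  # torch.Size([4096, 512]) torch.Size([4096, 8])
```

Under these assumed sizes, the per-Gaussian semantic storage drops from 512 floats to 24 (an 8-d fine feature plus 16 assignment weights), which is the kind of saving the coarse/fine decomposition targets.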

Key Contributions

SpatialSplat introduces a feedforward framework for 3D reconstruction from sparse, unposed images that addresses two limitations of prior feedforward methods: limited semantic expressiveness and the memory overhead of redundant pixel-wise primitives. It does so by predicting redundancy-aware Gaussians and employing a dual-field semantic representation, which better captures fine-grained semantics while reducing redundancy in overlapping areas.
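
A rough sketch of what a selective (redundancy-aware) Gaussian mechanism could look like is given below; the keep-score gating, threshold, and tensor names are hypothetical, and the paper's actual criterion for which primitives are essential may differ. The idea is to predict a keep score per pixel-aligned primitive and prune the rest, e.g., duplicated geometry in regions covered by both input views.

```python
import torch

def select_gaussians(means, feats, keep_logits, threshold=0.5):
    """Drop redundant pixel-wise Gaussians, keeping only primitives whose
    predicted keep probability exceeds the threshold (illustrative rule)."""
    keep = torch.sigmoid(keep_logits) > threshold  # (N,) boolean mask
    return means[keep], feats[keep]

# Two views of 256x256 pixels, one pixel-aligned Gaussian per pixel.
N = 2 * 256 * 256
means = torch.randn(N, 3)        # Gaussian centers
feats = torch.randn(N, 8)        # per-Gaussian fine features
keep_logits = torch.randn(N)     # predicted by a network in practice

means_kept, feats_kept = select_gaussians(means, feats, keep_logits)
print(f"kept {means_kept.shape[0]} / {N} Gaussians")
```

In a real pipeline the keep logits would come from the network jointly with the Gaussian parameters, so the pruning decision is learned rather than hand-thresholded as in this toy example.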

Business Value

Enables more efficient and semantically rich 3D reconstruction from limited visual data, which can be valuable for applications like virtual reality content creation, autonomous navigation, and industrial inspection.