📄 Abstract
Recent NeRF methods for large-scale scenes have underlined the importance of
scene decomposition for scalable NeRFs. Although they achieve reasonable
scalability, several critical problems remain unexplored: learnable
decomposition, modeling scene heterogeneity, and modeling efficiency.
In this paper, we introduce Switch-NeRF++, a Heterogeneous Mixture of Hash
Experts (HMoHE) network that addresses these challenges within a unified
framework. It is a highly scalable NeRF that learns heterogeneous decomposition
and heterogeneous NeRFs efficiently for large-scale scenes in an end-to-end
manner. In our framework, a gating network learns to decompose scenes and
allocates 3D points to specialized NeRF experts. This gating network is
co-optimized with the experts by our proposed Sparsely Gated Mixture of Experts
(MoE) NeRF framework. We incorporate a hash-based gating network and distinct
heterogeneous hash experts. The hash-based gating efficiently learns the
decomposition of the large-scale scene. The distinct heterogeneous hash experts
consist of hash grids of different resolution ranges, enabling effective
learning of the heterogeneous representation of different scene parts. These
design choices make our framework an end-to-end, highly scalable NeRF
solution for real-world large-scale scene modeling that achieves both quality and
efficiency. We evaluate our accuracy and scalability on existing large-scale
NeRF datasets and a new dataset with very large-scale scenes ($>6.5km^2$) from
UrbanBIS. Extensive experiments demonstrate that our approach scales easily
to various large-scale scenes and achieves state-of-the-art scene rendering
accuracy. Furthermore, our method is significantly more efficient, with an
8× acceleration in training and a 16× acceleration in rendering compared to
Switch-NeRF. Code will be released at
https://github.com/MiZhenxing/Switch-NeRF.
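The sparsely gated routing described above — a gating network that assigns each 3D point to one of several hash-grid experts with distinct resolution ranges — can be sketched in miniature. This is a hedged toy illustration, not the paper's implementation: the nearest-voxel hash encoding (no trilinear interpolation), the linear gate, and all sizes (`base_res`, `table_size`, `feat_dim`) are simplified assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-dimension hash primes, as commonly used for spatial hashing of voxel ids.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint32)

def hash_encode(pts, resolutions, tables):
    """Nearest-voxel multi-level hash encoding (interpolation omitted for brevity)."""
    feats = []
    for res, table in zip(resolutions, tables):
        idx = np.floor(pts * res).astype(np.uint32)           # (N, 3) voxel indices
        h = idx * PRIMES                                      # wraps mod 2^32
        key = (h[:, 0] ^ h[:, 1] ^ h[:, 2]) % table.shape[0]  # spatial hash -> table row
        feats.append(table[key])                              # (N, feat_dim) per level
    return np.concatenate(feats, axis=1)

class HashExpert:
    """One expert: a hash grid covering its own (heterogeneous) resolution range."""
    def __init__(self, base_res, n_levels=4, growth=2.0, feat_dim=2, table_size=2**12):
        self.res = [int(base_res * growth**l) for l in range(n_levels)]
        self.tables = [rng.normal(0, 1e-2, (table_size, feat_dim)) for _ in self.res]
    def __call__(self, pts):
        return hash_encode(pts, self.res, self.tables)

def top1_gate(pts, W):
    """Sparse top-1 gating: each point is routed to exactly one expert.
    A real gate would score hash-encoded features, not raw coordinates."""
    logits = pts @ W                                          # (N, n_experts)
    return logits.argmax(axis=1)

# Heterogeneous experts: different base resolutions suit different scene parts.
experts = [HashExpert(base_res=16), HashExpert(base_res=64)]
W = rng.normal(size=(3, len(experts)))                        # toy gate weights

pts = rng.uniform(0.0, 1.0, size=(8, 3))                      # normalized 3D samples
choice = top1_gate(pts, W)                                    # expert index per point

# Dispatch each point only to its chosen expert (sparse computation).
out = np.zeros((len(pts), 4 * 2))                             # n_levels * feat_dim
for e, expert in enumerate(experts):
    mask = choice == e
    if mask.any():
        out[mask] = expert(pts[mask])
```

In the full method, the gate and the experts are trained jointly end-to-end, so the decomposition itself is learned rather than fixed by a spatial heuristic; the sketch only shows the forward routing.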