📄 Abstract
Scene reconstruction from casually captured videos has wide applications in
real-world scenarios. With recent advancements in differentiable rendering
techniques, several methods have attempted to simultaneously optimize scene
representations (NeRF or 3DGS) and camera poses. Despite recent progress,
existing methods relying on traditional camera input tend to fail in high-speed
(or equivalently low-frame-rate) scenarios. Event cameras, inspired by
biological vision, record pixel-wise intensity changes asynchronously with high
temporal resolution, providing valuable scene and motion information in blind
inter-frame intervals. In this paper, we introduce the event camera to aid
scene reconstruction from a casually captured video for the first time, and
propose Event-Aided Free-Trajectory 3DGS, called EF-3DGS, which seamlessly
integrates the advantages of event cameras into 3DGS through three key
components. First, we leverage the Event Generation Model (EGM) to fuse events
and frames, supervising the rendered views observed by the event stream.
Second, we adopt the Contrast Maximization (CMax) framework in a piece-wise
manner to extract motion information by maximizing the contrast of the Image of
Warped Events (IWE), thereby calibrating the estimated poses. In addition, based on
the Linear Event Generation Model (LEGM), the brightness information encoded in
the IWE is also utilized to constrain the 3DGS in the gradient domain. Third,
to compensate for the lack of color information in events, we introduce
photometric bundle adjustment (PBA) to ensure view consistency across events
and frames. We evaluate our method on the public Tanks and Temples benchmark
and a newly collected real-world dataset, RealEv-DAVIS. Our project page is
https://lbh666.github.io/ef-3dgs/.
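The Contrast Maximization (CMax) idea used above can be illustrated with a small self-contained sketch. This is not the paper's implementation: it uses synthetic events from a single point moving at a known velocity, a toy grid search over candidate velocities, and illustrative function names. Events are warped back to a reference time under each candidate velocity, accumulated into an Image of Warped Events (IWE), and the candidate that maximizes the IWE's variance (its "contrast") is selected.

```python
import numpy as np

def make_iwe(xs, ys, ts, t_ref, velocity, shape):
    """Warp events to t_ref along a candidate velocity and bin them into an IWE."""
    vx, vy = velocity
    wx = np.round(xs - vx * (ts - t_ref)).astype(int)
    wy = np.round(ys - vy * (ts - t_ref)).astype(int)
    h, w = shape
    valid = (wx >= 0) & (wx < w) & (wy >= 0) & (wy < h)
    iwe = np.zeros(shape)
    np.add.at(iwe, (wy[valid], wx[valid]), 1.0)  # accumulate event counts per pixel
    return iwe

def contrast(iwe):
    """CMax objective: variance of the IWE (a sharper image has higher contrast)."""
    return np.var(iwe)

# Toy events: a point moving horizontally at 20 px/s over one second.
rng = np.random.default_rng(0)
ts = rng.uniform(0.0, 1.0, 500)
xs = 5.0 + 20.0 * ts          # true velocity (20, 0) px/s
ys = np.full_like(ts, 15.0)

# Grid-search the candidate velocity that maximizes IWE contrast.
candidates = [(v, 0.0) for v in np.linspace(0.0, 40.0, 81)]
best = max(candidates, key=lambda v: contrast(make_iwe(xs, ys, ts, 0.0, v, (32, 32))))
print(best)  # recovered velocity, within ~1 px/s of the true (20, 0)
```

With the correct velocity, all warped events collapse onto a few pixels, making the IWE sharp and its variance large; wrong candidates smear events along the trajectory and lower the contrast. The paper applies this principle piece-wise to calibrate estimated camera poses rather than a single 2-D velocity.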
Authors (8)
Bohao Liao
Wei Zhai
Zengyu Wan
Zhixin Cheng
Wenfei Yang
Tianzhu Zhang
+2 more
Submitted
October 20, 2024
Key Contributions
EF-3DGS is the first method to integrate event cameras into 3D Gaussian Splatting for scene reconstruction. It leverages the high temporal resolution and asynchronous nature of event data to overcome limitations of traditional cameras in high-speed scenarios, enabling more robust and accurate reconstruction.
Business Value
Improves the reliability of 3D scene reconstruction for applications requiring high-speed motion capture, such as autonomous driving, robotics, and high-performance sports analysis.