Abstract
Detecting 3D objects accurately from multi-view 2D images is a challenging yet essential task in autonomous driving. Current methods resort to integrating depth prediction to recover spatial information for object query decoding, which necessitates explicit supervision from LiDAR points during training. However, the predicted depth quality remains unsatisfactory, with discontinuities at object boundaries and indistinct small objects, mainly caused by the sparse supervision of projected points and the use of high-level image features for depth prediction. In addition, cross-view consistency and scale invariance are overlooked in previous methods. In this paper, we introduce Frequency-aware Positional Depth Embedding (FreqPDE) to equip 2D image features with spatial information for the 3D detection transformer decoder, obtained through three main modules. Specifically, the Frequency-aware Spatial Pyramid Encoder (FSPE) constructs a feature pyramid by combining high-frequency edge cues and low-frequency semantics from different feature levels. Then the Cross-view Scale-invariant Depth Predictor (CSDP) estimates the pixel-level depth distribution with cross-view and efficient channel attention mechanisms. Finally, the Positional Depth Encoder (PDE) combines the 2D image features and 3D position embeddings to generate 3D depth-aware features for query decoding. Additionally, hybrid depth supervision is adopted for complementary depth learning from both metric and distribution aspects. Extensive experiments conducted on the nuScenes dataset demonstrate the effectiveness and superiority of our proposed method.
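
As a rough illustration of the hybrid depth supervision mentioned above, the PyTorch sketch below pairs a metric (regression) term with a distribution (depth-bin classification) term. The function name, uniform bin discretization, depth range, and loss weights are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of hybrid depth supervision: a metric (regression)
# term plus a distribution (bin-classification) term. The bin layout and
# weights below are illustrative assumptions, not the paper's recipe.

def hybrid_depth_loss(pred_logits, pred_depth, gt_depth,
                      d_min=1.0, d_max=61.0, num_bins=80,
                      w_metric=1.0, w_dist=1.0):
    """pred_logits: (N, num_bins) per-pixel depth-bin logits.
    pred_depth:  (N,) per-pixel metric depth estimate.
    gt_depth:    (N,) ground-truth depth (e.g., from projected LiDAR)."""
    valid = (gt_depth > d_min) & (gt_depth < d_max)

    # Metric term: L1 regression on valid pixels.
    metric = F.l1_loss(pred_depth[valid], gt_depth[valid])

    # Distribution term: cross-entropy over uniformly discretized bins.
    bin_size = (d_max - d_min) / num_bins
    gt_bin = ((gt_depth[valid] - d_min) / bin_size).long()
    gt_bin = gt_bin.clamp(0, num_bins - 1)
    dist = F.cross_entropy(pred_logits[valid], gt_bin)

    return w_metric * metric + w_dist * dist
```

In this sketch the regression term supervises absolute depth values while the classification term shapes the per-pixel depth distribution, reflecting the complementary metric and distribution aspects the abstract describes.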
Authors (7)
Haisheng Su
Junjie Zhang
Feixiang Song
Sanping Zhou
Wei Wu
Nanning Zheng
+1 more
Submitted
October 17, 2025
Key Contributions
Introduces Frequency-aware Positional Depth Embedding (FreqPDE) for multi-view 3D object detection transformers, which equips 2D image features with spatial information without explicit LiDAR supervision. It addresses depth discontinuity at object boundaries, indistinct small objects, and the previously overlooked cross-view consistency and scale invariance.
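
To make the frequency-aware idea concrete, here is a minimal sketch, assuming a blur-residual high-pass on shallow features and upsampled deep features as the low-frequency path; the module name and layer choices are hypothetical and are not the paper's FSPE design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyAwareFusion(nn.Module):
    """Illustrative sketch (not the paper's FSPE): fuse high-frequency
    detail from a shallow feature map with low-frequency semantics from
    a deeper one."""

    def __init__(self, c_shallow, c_deep, c_out):
        super().__init__()
        self.proj_hi = nn.Conv2d(c_shallow, c_out, 1)
        self.proj_lo = nn.Conv2d(c_deep, c_out, 1)
        self.fuse = nn.Conv2d(2 * c_out, c_out, 3, padding=1)

    def forward(self, feat_shallow, feat_deep):
        # High-pass: residual of the shallow map against its blurred copy,
        # emphasizing edges and thin structures (helps small objects).
        blurred = F.avg_pool2d(feat_shallow, 3, stride=1, padding=1)
        hi = self.proj_hi(feat_shallow - blurred)

        # Low-pass: deep features carry smooth, semantic content; upsample
        # them to the shallow resolution before fusion.
        lo = self.proj_lo(F.interpolate(feat_deep,
                                        size=feat_shallow.shape[-2:],
                                        mode='bilinear',
                                        align_corners=False))
        return self.fuse(torch.cat([hi, lo], dim=1))
```

Keeping an explicit high-frequency path is one plausible way to preserve the object-boundary and small-object detail that purely high-level features tend to lose.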
Business Value
Enhances the perception capabilities of autonomous vehicles, leading to safer and more reliable navigation by improving the accuracy of 3D object detection and depth estimation from camera data.