📄 Abstract
Occupancy prediction aims to estimate the 3D spatial distribution of occupied
regions along with their corresponding semantic labels. Existing vision-based
methods perform well on daytime benchmarks but struggle in nighttime scenarios
due to limited visibility and challenging lighting conditions. To address these
challenges, we propose LIAR, a novel framework that learns illumination-affined
representations. LIAR first introduces Selective Low-light Image Enhancement
(SLLIE), which leverages the illumination priors from daytime scenes to
adaptively determine whether a nighttime image is genuinely dark or
sufficiently well-lit, enabling more targeted global enhancement. Building on
the illumination maps generated by SLLIE, LIAR further incorporates two
illumination-aware components: 2D Illumination-guided Sampling (2D-IGS) and 3D
Illumination-driven Projection (3D-IDP), to respectively tackle local
underexposure and overexposure. Specifically, 2D-IGS modulates feature sampling
positions according to illumination maps, assigning larger offsets to darker
regions and smaller ones to brighter regions, thereby alleviating feature
degradation in underexposed areas. Subsequently, 3D-IDP enhances semantic
understanding in overexposed regions by constructing illumination intensity
fields and supplying refined residual queries to the BEV context refinement
process. Extensive experiments on both real and synthetic datasets demonstrate
the superior performance of LIAR under challenging nighttime scenarios. The
source code and pretrained models are available
[here](https://github.com/yanzq95/LIAR).
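The SLLIE step described above, deciding from illumination priors whether a nighttime image is genuinely dark before applying global enhancement, can be sketched as follows. This is a minimal illustration, not LIAR's actual implementation: the max-over-channels illumination estimate, the `dark_thresh` value, and the gamma-based brightening are all assumed stand-ins for the paper's learned priors and enhancement module.

```python
import numpy as np

def estimate_illumination(img):
    """Per-pixel illumination map as the max over RGB channels
    (a common Retinex-style prior; assumed here, not LIAR's exact estimator)."""
    return img.max(axis=-1)

def selective_enhance(img, dark_thresh=0.35, gamma=0.45):
    """Globally brighten only images judged genuinely dark.

    `dark_thresh` is an illustrative stand-in for the daytime
    illumination priors the paper uses; well-lit nighttime images
    pass through unchanged, making enhancement targeted.
    """
    illum = estimate_illumination(img)
    if illum.mean() < dark_thresh:
        # Gamma < 1 lifts dark values toward mid-tones.
        return np.clip(img ** gamma, 0.0, 1.0)
    return img
```

A genuinely dark frame (mean intensity ~0.1) is brightened, while a sufficiently well-lit one (mean ~0.6) is returned untouched, which is the "selective" behavior SLLIE targets.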
Authors (5)
Yuan Wu
Zhiqiang Yan
Yigong Zhang
Xiang Li
Jian Yang
Key Contributions
LIAR is a novel framework designed to learn illumination-affined representations for nighttime occupancy prediction. It introduces SLLIE for targeted enhancement based on illumination priors, and illumination-aware components (2D-IGS, 3D-IDP) to address local underexposure and overexposure, significantly improving perception in challenging nighttime conditions.
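The 2D-IGS idea above, assigning larger feature-sampling offsets to darker regions and smaller ones to brighter regions, can be sketched with a simple per-pixel scaling rule. The linear form `1 + k * (1 - illum)` is an assumed illustrative choice, not the paper's learned modulation:

```python
import numpy as np

def modulate_offsets(base_offsets, illum, k=2.0):
    """Scale deformable sampling offsets inversely with illumination.

    base_offsets: (H, W, 2) predicted (dx, dy) sampling offsets.
    illum:        (H, W) illumination map in [0, 1] (e.g., from SLLIE).
    Darker pixels (low illum) get larger offsets so features are
    aggregated from a wider, better-lit neighborhood; bright pixels
    keep their original small offsets.
    """
    scale = 1.0 + k * (1.0 - illum)[..., None]
    return base_offsets * scale
```

With `k=2.0`, a fully dark pixel (`illum=0`) triples its offset while a fully lit pixel (`illum=1`) keeps it unchanged, mirroring the underexposure compensation 2D-IGS describes.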
Business Value
Crucial for enabling reliable autonomous driving and robotic navigation in nighttime and low-light conditions, enhancing safety and extending operational capability beyond daytime scenarios.