Abstract
With the rapid advancement of mobile imaging, capturing screens with
smartphones has become common practice in distance learning and conference
recording. However, moiré artifacts, caused by frequency aliasing between
display screens and camera sensors, are further amplified by the image signal
processing pipeline, leading to severe visual degradation. Existing sRGB-domain
demoiréing methods struggle with irreversible information loss, while recent
two-stage raw-domain approaches suffer from information bottlenecks and
inference inefficiency. To address these limitations, we propose a single-stage
raw-domain demoiréing framework, the Dual-Stream Demoiréing Network (DSDNet),
which leverages the synergy of raw and YCbCr images to remove moiré while
preserving luminance and color fidelity. Specifically, to guide luminance
correction and moiré removal, we design a raw-to-YCbCr mapping pipeline and
introduce the Synergic Attention with Dynamic Modulation (SADM) module, which
enriches the raw-to-sRGB conversion with cross-domain contextual features.
Furthermore, to better preserve color fidelity, we develop a
Luminance-Chrominance Adaptive Transformer (LCAT) that decouples luminance
and chrominance representations. Extensive experiments demonstrate that DSDNet
outperforms state-of-the-art methods in both visual quality and quantitative
evaluation, and achieves inference 2.4x faster than the second-best method,
highlighting its practical advantages. We provide an anonymous online demo at
https://xxxxxxxxdsdnet.github.io/DSDNet/.