
X2Video: Adapting Diffusion Models for Multimodal Controllable Neural Video Rendering

Abstract

We present X2Video, the first diffusion model for rendering photorealistic videos guided by intrinsic channels, including albedo, normal, roughness, metallicity, and irradiance, while supporting intuitive multimodal controls with reference images and text prompts for both global and local regions. The intrinsic guidance allows accurate manipulation of color, material, geometry, and lighting, while reference images and text prompts provide intuitive adjustments when intrinsic information is absent. To enable these functionalities, we extend the intrinsic-guided image generation model XRGB to video generation by employing a novel and efficient Hybrid Self-Attention, which ensures temporal consistency across video frames and enhances fidelity to reference images. We further develop a Masked Cross-Attention to disentangle global and local text prompts, applying each prompt to its respective region. For generating long videos, our novel Recursive Sampling method incorporates progressive frame sampling, combining keyframe prediction and frame interpolation to maintain long-range temporal consistency while preventing error accumulation. To support the training of X2Video, we assembled a video dataset named InteriorVideo, featuring 1,154 rooms from 295 interior scenes, complete with reliable ground-truth intrinsic channel sequences and smooth camera trajectories. Both qualitative and quantitative evaluations demonstrate that X2Video can produce long, temporally consistent, and photorealistic videos guided by intrinsic conditions. X2Video also accommodates multimodal controls with reference images and global and local text prompts, and it supports editing of color, material, geometry, and lighting through parametric tuning. Project page: https://luckyhzt.github.io/x2video
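As a concrete picture of the Masked Cross-Attention idea, the sketch below routes each latent token to either the global or the local text prompt via an additive attention bias. It is a minimal PyTorch illustration under that assumption; the function name, shapes, shared projections, and exact masking policy are ours, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(x, global_ctx, local_ctx, local_mask,
                           to_q, to_k, to_v, num_heads=8):
    """Illustrative masked cross-attention (shapes and names are assumptions).

    x:          (B, N, C)  flattened latent tokens of one frame
    global_ctx: (B, Tg, C) global text-prompt embeddings
    local_ctx:  (B, Tl, C) local text-prompt embeddings
    local_mask: (B, N)     1 where a token lies inside the locally edited region
    to_q/to_k/to_v: nn.Linear projections, assumed shared across both contexts
    """
    B, N, C = x.shape
    Tg, Tl = global_ctx.shape[1], local_ctx.shape[1]
    q = to_q(x)
    ctx = torch.cat([global_ctx, local_ctx], dim=1)   # (B, Tg + Tl, C)
    k, v = to_k(ctx), to_v(ctx)

    # Additive bias that routes tokens: inside the mask a token may only read
    # the local prompt; outside it may only read the global prompt.
    inside = local_mask.bool().unsqueeze(-1)          # (B, N, 1)
    bias = torch.zeros(B, N, Tg + Tl, device=x.device)
    bias[..., :Tg] = bias[..., :Tg].masked_fill(inside, float("-inf"))
    bias[..., Tg:] = bias[..., Tg:].masked_fill(~inside, float("-inf"))

    def heads(t):  # (B, T, C) -> (B, H, T, C // H)
        return t.view(B, t.shape[1], num_heads, C // num_heads).transpose(1, 2)

    out = F.scaled_dot_product_attention(
        heads(q), heads(k), heads(v),
        attn_mask=bias.unsqueeze(1))                  # broadcast over heads
    return out.transpose(1, 2).reshape(B, N, C)
```

Folding the routing into a single additive bias keeps one fused attention call rather than blending two separate cross-attention passes, which is one plausible way such a design stays cheap.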

Key Contributions

X2Video is the first diffusion model to render photorealistic videos guided by intrinsic channels (albedo, normal, roughness, metallicity, irradiance) while supporting multimodal controls. Two novel attention mechanisms drive this: Hybrid Self-Attention maintains temporal consistency across frames and fidelity to reference images, while Masked Cross-Attention disentangles global and local text prompts so each acts on its own region. Together with the intrinsic guidance, these enable intuitive editing of color, material, geometry, and lighting; a Recursive Sampling scheme, sketched below, combines keyframe prediction with frame interpolation to keep long videos consistent without accumulating error.
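The following is a minimal, self-contained sketch of how such a keyframe-then-interpolation schedule could be laid out; the function name, the even-stride keyframe pass, and the gap-halving policy are illustrative assumptions, not the paper's exact Recursive Sampling algorithm.

```python
def recursive_frame_schedule(num_frames, clip_len=16):
    """Sketch of a progressive keyframe-then-interpolation schedule.

    Pass 1 denoises sparse keyframes spanning the whole video; later passes
    fill each gap by interpolating between frames that are already fixed, so
    late frames condition on stable anchors instead of a drifting history.
    All names and the stride/halving policy are illustrative assumptions.
    """
    # Pass 1: evenly spaced keyframes covering the full frame range.
    stride = max(1, (num_frames - 1) // (clip_len - 1))
    keyframes = list(range(0, num_frames, stride))
    if keyframes[-1] != num_frames - 1:
        keyframes.append(num_frames - 1)
    passes = [keyframes]

    # Later passes: place one new frame in the middle of every remaining gap,
    # halving the gaps until the sequence is dense.
    frontier = sorted(keyframes)
    while True:
        new = [(a + b) // 2 for a, b in zip(frontier, frontier[1:]) if b - a > 1]
        if not new:
            break
        passes.append(new)
        frontier = sorted(frontier + new)
    return passes


print(recursive_frame_schedule(61, clip_len=16))
```

For a 61-frame video with 16-frame clips, this yields one keyframe pass over frames 0, 4, ..., 60 followed by two interpolation passes; every late frame is anchored between already-fixed neighbors, which is what keeps drift from compounding over long sequences.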

Business Value

Enables creation of highly controllable and realistic video content for applications like film production, gaming, and virtual environments, reducing manual effort and increasing creative possibilities.