Abstract
Although recent text-to-video generative models have become increasingly capable of following external camera controls, imposed either by text descriptions or by camera trajectories, they still struggle to generalize to unconventional camera motions, which is crucial for creating truly original and artistic videos. The challenge lies in the difficulty of finding sufficient training videos with the intended uncommon camera motions. To address this challenge, we propose VividCam, a training paradigm that enables diffusion models to learn complex camera motions from synthetic videos, removing the reliance on collecting realistic training videos. VividCam incorporates multiple disentanglement strategies that isolate camera motion learning from synthetic appearance artifacts, ensuring more robust motion representation and mitigating domain shift. We demonstrate that our design synthesizes a wide range of precisely controlled and complex camera motions using surprisingly simple synthetic data. Notably, this synthetic data often consists of basic geometries within a low-poly 3D scene and can be efficiently rendered by engines such as Unity. Our video results can be found at https://wuqiuche.github.io/VividCamDemoPage/ .
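The abstract notes that the synthetic training clips are rendered from basic geometries in a low-poly scene with scripted camera motion. As a purely illustrative, hypothetical sketch (not the authors' actual Unity pipeline), the Python snippet below shows one way an uncommon trajectory, here a rising, rolling orbit, could be parameterized as per-frame camera extrinsics that a renderer would consume; the function names and motion parameters are assumptions for illustration only.

```python
# Hypothetical sketch: scripting an unconventional camera trajectory as
# per-frame world-to-camera extrinsics for a simple synthetic scene.
import numpy as np


def look_at(eye: np.ndarray, target: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Build a 4x4 world-to-camera matrix looking from `eye` toward `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    rot = np.stack([right, true_up, -forward])   # rows: camera axes
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = rot
    extrinsic[:3, 3] = -rot @ eye                # translate world origin into camera frame
    return extrinsic


def spiral_orbit_trajectory(num_frames: int = 48) -> list[np.ndarray]:
    """An 'uncommon' motion: orbit the origin while dollying in, rising, and rolling."""
    poses = []
    for i in range(num_frames):
        t = i / (num_frames - 1)
        angle = 2.0 * np.pi * t                  # one full orbit over the clip
        radius = 4.0 - 1.5 * t                   # slowly dolly in
        eye = np.array([radius * np.cos(angle), 1.0 + 2.0 * t, radius * np.sin(angle)])
        roll = 0.3 * np.sin(2.0 * np.pi * t)     # gentle periodic roll of the up vector
        up = np.array([np.sin(roll), np.cos(roll), 0.0])
        poses.append(look_at(eye, target=np.zeros(3), up=up))
    return poses


if __name__ == "__main__":
    trajectory = spiral_orbit_trajectory()
    print(f"{len(trajectory)} camera poses; first extrinsic:\n{trajectory[0]}")
```

Each pose could then be passed to any rasterizer or game engine to render one frame of a low-poly scene, yielding a synthetic clip that exhibits the intended camera motion without requiring real footage.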
Authors (6)
Qiucheng Wu
Handong Zhao
Zhixin Shu
Jing Shi
Yang Zhang
Shiyu Chang
Submitted
October 28, 2025
Key Contributions
VividCam introduces a novel training paradigm for diffusion models to learn complex and unconventional camera motions from synthetic videos, overcoming the reliance on real-world data. It employs disentanglement strategies to isolate motion learning from appearance artifacts, enabling more robust motion representation and mitigating domain shift for artistic video creation.
Business Value
Enables creators to produce more original and artistic videos with precise control over camera movements, opening new possibilities for filmmaking, advertising, and virtual content creation.