Abstract
While text-to-3D generation has attracted growing interest, existing methods
often struggle to produce 3D assets that align well with human preferences.
Current preference alignment techniques for 3D content typically rely on
hard-to-collect, preference-paired multi-view 2D images to train 2D reward
models, which then guide 3D generation; the inherent 2D bias of these models
leads to geometric artifacts. To address these limitations, we construct 3D-MeshPref,
the first large-scale unpaired 3D preference dataset, featuring diverse 3D
meshes annotated by a large language model and refined by human evaluators. We
then develop RewardCS, the first reward model trained directly on unpaired
3D-MeshPref data using a novel Cauchy-Schwarz divergence objective, enabling
effective learning of human-aligned 3D geometric preferences without requiring
paired comparisons. Building on this, we propose DreamCS, a unified framework
that integrates RewardCS into text-to-3D pipelines -- enhancing both implicit
and explicit 3D generation with human preference feedback. Extensive
experiments show DreamCS outperforms prior methods, producing 3D assets that
are both geometrically faithful and human-preferred. Code and models will be
released publicly.
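The abstract does not spell out the training objective, but the Cauchy-Schwarz divergence between two sample sets has a standard kernel-based empirical estimator. The sketch below is a minimal, illustrative PyTorch implementation of that estimator, assuming the reward scores of preferred and non-preferred meshes are compared as two unpaired score distributions; it is not the authors' released code, and `gaussian_kernel`, `cs_divergence`, and the bandwidth `sigma` are assumptions.

```python
import torch

def gaussian_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Pairwise Gaussian kernel matrix: k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma**2))

def cs_divergence(p: torch.Tensor, q: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Empirical Cauchy-Schwarz divergence between two unpaired sample sets:
    #   D_CS(p, q) = -log( <p, q>^2 / (<p, p> <q, q>) )
    # where each inner product is the mean over a pairwise kernel matrix.
    # The kernel normalization constant cancels in the ratio.
    k_pq = gaussian_kernel(p, q, sigma).mean()
    k_pp = gaussian_kernel(p, p, sigma).mean()
    k_qq = gaussian_kernel(q, q, sigma).mean()
    return -2.0 * torch.log(k_pq) + torch.log(k_pp) + torch.log(k_qq)

# Illustrative use: separate reward scores of preferred meshes from those of
# non-preferred meshes. The two sets are unpaired and may differ in size.
preferred_scores = torch.randn(64, 1, requires_grad=True)
rejected_scores = torch.randn(48, 1, requires_grad=True)
loss = -cs_divergence(preferred_scores, rejected_scores)  # maximize separation
loss.backward()
```

Because the estimator only needs two bags of samples, it sidesteps the one-to-one pairing that standard preference losses (e.g., Bradley-Terry) require, which is consistent with the abstract's claim of learning from unpaired data.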
Key Contributions
Introduces DreamCS, a text-to-3D generation framework that uses RewardCS, a novel reward model trained on unpaired 3D data (3D-MeshPref dataset) using a Cauchy-Schwarz divergence objective. This enables learning human-aligned 3D geometric preferences without paired comparisons, leading to improved 3D asset quality and reduced geometric artifacts.
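To make the integration concrete, the following is a hypothetical sketch of how a mesh-level reward model could be plugged into a text-to-3D optimization loop as an extra loss term. Every name here (`base_loss_fn`, `mesh_extract_fn`, `lambda_reward`) is a placeholder rather than DreamCS's actual API; the paper states only that RewardCS provides preference feedback to both implicit and explicit 3D generation pipelines.

```python
import torch

def training_step(scene_params, base_loss_fn, mesh_extract_fn, reward_model,
                  optimizer, lambda_reward: float = 0.1):
    # One optimization step: the usual text-to-3D guidance loss is augmented
    # with a preference-reward term, so geometry favored by the reward model
    # is reinforced. All callables are placeholders for this sketch.
    optimizer.zero_grad()
    base_loss = base_loss_fn(scene_params)     # e.g., an SDS-style guidance loss
    mesh = mesh_extract_fn(scene_params)       # differentiable mesh extraction (assumed)
    reward = reward_model(mesh)                # scalar human-preference score
    loss = base_loss - lambda_reward * reward  # higher reward lowers the loss
    loss.backward()
    optimizer.step()
    return loss.detach()
```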
Business Value
Accelerates 3D content creation for industries like gaming, VR/AR, and product design by enabling users to generate high-quality, preference-aligned 3D assets from text descriptions, reducing manual effort and cost.