Embody 3D: A Large-scale Multimodal Motion and Behavior Dataset

📄 Abstract

The Codec Avatars Lab at Meta introduces Embody 3D, a multimodal dataset of 500 individual hours of 3D motion data from 439 participants, collected on a multi-camera collection stage and amounting to over 54 million frames of tracked 3D motion. The dataset features a wide range of single-person motion data, including prompted motions, hand gestures, and locomotion, as well as multi-person behavioral and conversational data such as discussions, conversations in different emotional states, collaborative activities, and co-living scenarios in an apartment-like space. The release provides tracked human motion including hand tracking and body shape, text annotations, and a separate audio track for each participant.
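The headline figures are mutually consistent: 500 hours of capture at a typical 30 fps motion-capture rate works out to roughly 54 million frames. A minimal sanity-check sketch (the 30 fps rate is inferred from the stated totals, not stated in the abstract):

```python
# Numbers taken from the abstract; the frame rate is derived, not given.
hours = 500
participants = 439
frames = 54_000_000  # "over 54 million frames"

seconds = hours * 3600
implied_fps = frames / seconds          # implied capture rate
avg_hours = hours / participants        # average capture time per participant

print(f"Implied capture rate: {implied_fps:.0f} fps")
print(f"Average per participant: {avg_hours:.2f} hours")
```

This puts the implied capture rate at about 30 fps and the average contribution at a bit over an hour per participant.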
Authors (24)
Claire McLean
Makenzie Meendering
Tristan Swartz
Orri Gabbay
Alexandra Olsen
Rachel Jacobs
+18 more
Submitted
October 17, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Introduces Embody 3D, a large-scale multimodal dataset of 500 hours of tracked 3D motion from 439 participants. It covers single-person motions (prompted motions, hand gestures, locomotion) and multi-person behavioral and conversational scenarios, and ships with body shape, hand tracking, text annotations, and per-participant audio.

Business Value

Accelerates development of realistic virtual avatars, immersive VR/AR experiences, and more natural human-robot interactions, crucial for the metaverse and advanced simulation technologies.