📄 Abstract
The Codec Avatars Lab at Meta introduces Embody 3D, a multimodal dataset of
500 individual hours of 3D motion data from 439 participants collected in a
multi-camera collection stage, amounting to over 54 million frames of tracked
3D motion. The dataset features a wide range of single-person motion data,
including prompted motions, hand gestures, and locomotion, as well as
multi-person behavioral and conversational data such as discussions, conversations
in different emotional states, collaborative activities, and co-living
scenarios in an apartment-like space. We provide tracked human motion including
hand tracking and body shape, text annotations, and a separate audio track for
each participant.
Authors (24)
Claire McLean
Makenzie Meendering
Tristan Swartz
Orri Gabbay
Alexandra Olsen
Rachel Jacobs
+18 more
Submitted
October 17, 2025
Key Contributions
Introduces Embody 3D, a large-scale multimodal dataset containing 500 hours of 3D motion data from 439 participants. The dataset includes a wide range of single-person motions, multi-person behaviors, conversational data, tracked 3D motion, body shape, hand tracking, text annotations, and audio.
Business Value
Accelerates development of realistic virtual avatars, immersive VR/AR experiences, and more natural human-robot interactions, crucial for the metaverse and advanced simulation technologies.