SHARE (Scene-Human Aligned REconstruction) leverages scene geometry extracted from monocular RGB video to accurately ground human motion reconstruction in 3D space. It iteratively refines human poses by aligning the estimated human meshes with scene-derived point maps, ensuring consistency between the person and the surrounding scene.
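To make the alignment idea concrete, here is a minimal sketch of iteratively refining a pose by pulling mesh vertices toward nearby scene points. This is not SHARE's actual optimization (which refines full body pose parameters from the paper's pipeline); it assumes simple NumPy point clouds, optimizes only a global translation with an ICP-style nearest-neighbor step, and all names (`refine_translation`, `human_verts`, `scene_points`) are hypothetical.

```python
import numpy as np

def refine_translation(human_verts, scene_points, n_iters=50, lr=0.1):
    """Illustrative alignment step: iteratively shift a human mesh so its
    vertices move toward their nearest scene-derived points.

    human_verts:  (N, 3) array of mesh vertex positions
    scene_points: (M, 3) array of points recovered from the scene
    Returns the refined global translation t (3,).
    """
    t = np.zeros(3)
    for _ in range(n_iters):
        moved = human_verts + t
        # Brute-force nearest scene point for each mesh vertex.
        dists = np.linalg.norm(moved[:, None, :] - scene_points[None, :, :], axis=-1)
        nearest = scene_points[dists.argmin(axis=1)]
        # Gradient of the mean squared vertex-to-point distance w.r.t. t.
        grad = 2.0 * (moved - nearest).mean(axis=0)
        t -= lr * grad
    return t

# Toy usage: a mesh offset from a synthetic scene surface is pulled back.
rng = np.random.default_rng(0)
scene = rng.uniform(-1, 1, size=(500, 3))
mesh = scene[:100] + np.array([0.5, -0.3, 0.2])  # displaced copy of scene points
print(refine_translation(mesh, scene))            # approaches [-0.5, 0.3, -0.2]
```

In the real method the residual would drive updates to articulated pose parameters rather than a single translation, but the same align-then-update loop structure applies.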
This enables more realistic character interactions in virtual environments and improves robots' understanding of human actions, enhancing immersion and utility in AR/VR and robotics.