SynHLMA is a novel framework for synthesizing hand manipulation sequences for articulated objects from language instructions. It uses a discrete hand-articulated object interaction (HAOI) representation and a joint-aware loss to align grasping with the language instruction while ensuring dynamic variations of the object's joints are captured.
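The exact form of the joint-aware loss is not given here. A minimal sketch of what such a loss could look like, assuming an MSE hand-pose reconstruction term plus a weighted penalty on the articulated object's joint-state error (the function name, arguments, and weighting are illustrative, not taken from the paper):

```python
import numpy as np

def joint_aware_loss(pred_hand, gt_hand, pred_joint, gt_joint, lam=0.5):
    """Illustrative joint-aware loss (assumed form, not the paper's).

    pred_hand / gt_hand:   predicted and ground-truth hand pose parameters
    pred_joint / gt_joint: predicted and ground-truth object joint states
                           (e.g. hinge angles over the sequence)
    lam:                   weight balancing the joint-state term
    """
    # Hand-pose reconstruction error (mean squared error)
    hand_term = np.mean((pred_hand - gt_hand) ** 2)
    # Penalty tracking the object's joint variation over the sequence
    joint_term = np.mean((pred_joint - gt_joint) ** 2)
    return hand_term + lam * joint_term
```

The joint term is what would make the loss "joint-aware": it couples the synthesized grasp sequence to the object's articulation state rather than to hand pose alone.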
The framework enables more intuitive and versatile robotic manipulation, particularly for tasks involving complex articulated objects, and enhances realism in VR/AR applications.