Abstract
We introduce a lightweight, real-time motion recognition system that enables
synergistic human-machine performance through wearable IMU sensor data, MiniRocket
time-series classification, and responsive multimedia control. By mapping
dancer-specific movement to sound through somatic memory and association, we
propose an alternative approach to human-machine collaboration, one that
preserves the expressive depth of the performing body while leveraging machine
learning for attentive observation and responsiveness. We demonstrate that this
human-centered design reliably supports high-accuracy classification at low
latency (under 50 ms), offering a replicable framework for integrating
dance-literate machines into creative, educational, and live performance contexts.
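To make the classification stage concrete, the sketch below shows how windowed multi-channel IMU data might be turned into movement labels. It assumes sktime's MiniRocketMultivariate transform paired with a linear ridge classifier; the window length, channel count, and gesture names are illustrative placeholders rather than values from the paper.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import MiniRocketMultivariate

# Illustrative shapes only: 200 windows of 6-channel IMU data
# (accel x/y/z + gyro x/y/z), 150 samples per window.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6, 150))                    # (n_windows, n_channels, n_timepoints)
y_train = rng.choice(["reach", "spin", "drop"], size=200)   # hypothetical gesture labels

# MiniRocket expands each window into a fixed-length feature vector using
# random convolutional kernels; a linear classifier is then fit on top.
minirocket = MiniRocketMultivariate(random_state=0)
features_train = minirocket.fit_transform(X_train)

clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features_train, y_train)

# At run time, each incoming sensor window is transformed and classified;
# both steps are cheap enough for low-latency, real-time use.
new_window = rng.normal(size=(1, 6, 150))
label = clf.predict(minirocket.transform(new_window))[0]
print(label)
```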
Key Contributions
Introduces a lightweight, real-time motion recognition system using wearable IMU sensors and the MiniRocket algorithm for time-series classification. The system enables synergistic human-machine performance by mapping dancer-specific movement to sound and responsive multimedia control, preserving the expressive depth of the performing body while leveraging machine learning for attentive responsiveness.
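Mapping recognized movement to sound or other media is typically done by streaming the predicted label to a media engine. A minimal sketch follows, assuming the python-osc package and a sound engine (e.g. SuperCollider, Max/MSP, or a DAW) listening on a local UDP port; the address patterns, port, and label-to-cue mapping are assumptions for illustration, not the authors' configuration.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical local media engine listening for OSC messages on port 9000.
client = SimpleUDPClient("127.0.0.1", 9000)

# Illustrative mapping from recognized movement labels to sound cues.
SOUND_CUES = {"reach": "/cue/strings", "spin": "/cue/percussion", "drop": "/cue/bass"}

def trigger_sound(label: str, confidence: float) -> None:
    """Send the recognized movement as an OSC message to the media engine."""
    address = SOUND_CUES.get(label, "/cue/default")
    client.send_message(address, confidence)

# Example: forward a freshly classified window's label.
trigger_sound("spin", 0.92)
```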
Business Value
Opens new avenues for artistic expression and interactive experiences, enabling performers to collaborate dynamically with technology in live settings, potentially creating novel entertainment and educational tools.