📄 Abstract
While generative models for music composition are increasingly capable, their
adoption by musicians is hindered by text-prompting, an asynchronous workflow
disconnected from the embodied, responsive nature of instrumental performance.
To address this, we introduce Aria-Duet, an interactive system facilitating a
real-time musical duet between a human pianist and Aria, a state-of-the-art
generative model, using a Yamaha Disklavier as a shared physical interface. The
framework enables a turn-taking collaboration: the user performs, signals a
handover, and the model generates a coherent continuation performed
acoustically on the piano. Beyond describing the technical architecture
enabling this low-latency interaction, we analyze the system's output from a
musicological perspective, finding that the model can maintain stylistic
semantics and develop coherent phrasal ideas. These results demonstrate that
such embodied systems can engage in musically sophisticated dialogue and open
a promising new path for human-AI co-creation.
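The abstract outlines the turn-taking protocol (the pianist plays, signals a handover, and the model's continuation is played back on the same instrument) without specifying how it is implemented. The sketch below is only an illustration of that loop under stated assumptions: MIDI I/O via the mido library, a silence-based handover signal, and a hypothetical generate_continuation function standing in for the Aria model; none of these names or choices come from the paper.

```python
import time
import mido

# Hypothetical stand-in for the Aria model's continuation interface; the paper
# does not publish this API, so the name and signature are assumptions.
def generate_continuation(prompt_messages):
    """Return a list of (mido.Message, delay_in_seconds) pairs to play back."""
    raise NotImplementedError

SILENCE_HANDOVER_S = 2.0  # assumed handover signal: a pause in the performance

def duet_loop(port_name="Disklavier"):
    """Sketch of a turn-taking loop: record the pianist, detect a handover,
    then send the model's continuation back to the same piano."""
    with mido.open_input(port_name) as inp, mido.open_output(port_name) as out:
        prompt, last_event = [], time.monotonic()
        while True:
            # Collect the human's turn as it arrives (non-blocking).
            for msg in inp.iter_pending():
                if msg.type in ("note_on", "note_off"):
                    prompt.append(msg)
                    last_event = time.monotonic()
            # Handover: the pianist has stopped playing for a while.
            if prompt and time.monotonic() - last_event > SILENCE_HANDOVER_S:
                for msg, delay in generate_continuation(prompt):
                    out.send(msg)  # the Disklavier renders this acoustically
                    time.sleep(delay)
                prompt, last_event = [], time.monotonic()
            time.sleep(0.005)
```

In the actual system the handover cue and model interface may differ; the sketch only conveys the structure of the interaction: accumulate the human turn, detect the handover, and stream the generated turn back to the Disklavier.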
Key Contributions
Aria-Duet is an interactive system facilitating a real-time musical duet between a human pianist and an AI model (Aria) using a Yamaha Disklavier. It enables turn-taking collaboration with low-latency interaction, allowing the model to generate coherent continuations that maintain stylistic semantics and phrasal ideas, opening new avenues for human-AI musical co-creativity.
Business Value
Opens new possibilities for musical creation and performance, potentially leading to novel interactive entertainment experiences, new tools for musicians, and new forms of artistic expression, and could inspire further applications in AI-assisted creativity.