
Taming Diffusion Models for Music-driven Conducting Motion Generation

Authors:
Zhuoran Zhao, Jinbin Bai, Delong Chen, Debang Wang, Yubo Pan
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Artificial Intelligence (cs.AI), Multimedia (cs.MM), Sound (cs.SD)
Journal:
--
Date:
2023-06-14 16:00:00
Abstract
Generating the motion of orchestral conductors from a given piece of symphony music is a challenging task, since it requires a model to learn semantic music features and capture the underlying distribution of real conducting motion. Prior works have applied Generative Adversarial Networks (GANs) to this task, but the promising diffusion model, which has recently shown advantages in both training stability and output quality, has not yet been exploited in this context. This paper presents Diffusion-Conductor, a novel DDIM-based approach for music-driven conducting motion generation that integrates the diffusion model into a two-stage learning framework. We further propose a random masking strategy to improve feature robustness, and use a pair of geometric loss functions to impose additional regularization and increase motion diversity. We also design several novel metrics, including Fréchet Gesture Distance (FGD) and Beat Consistency Score (BC), for a more comprehensive evaluation of the generated motion. Experimental results demonstrate the advantages of our model.
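
Note: the abstract does not specify the exact form of the random masking strategy or the "pair of geometric loss functions". The sketch below is a hypothetical PyTorch illustration of how such components are commonly implemented in motion-generation pipelines; the function names, tensor shapes, and mask ratio are assumptions, not the authors' released code.

# Hypothetical sketch (assumed shapes/names, not the paper's implementation).
# (1) Randomly mask music-feature frames to improve robustness.
# (2) A pair of geometric losses on first- and second-order motion differences.
import torch

def random_mask(music_feat: torch.Tensor, mask_ratio: float = 0.1) -> torch.Tensor:
    """Zero out a random fraction of frames in (batch, frames, feat_dim) music features."""
    keep = torch.rand(music_feat.shape[:2], device=music_feat.device) > mask_ratio
    return music_feat * keep.unsqueeze(-1)

def geometric_losses(pred: torch.Tensor, target: torch.Tensor):
    """Velocity/acceleration losses for motion sequences of shape (batch, frames, joint_dims)."""
    pred_vel, tgt_vel = pred[:, 1:] - pred[:, :-1], target[:, 1:] - target[:, :-1]
    pred_acc, tgt_acc = pred_vel[:, 1:] - pred_vel[:, :-1], tgt_vel[:, 1:] - tgt_vel[:, :-1]
    loss_vel = torch.mean(torch.abs(pred_vel - tgt_vel))   # first-order (velocity) term
    loss_acc = torch.mean(torch.abs(pred_acc - tgt_acc))   # second-order (acceleration) term
    return loss_vel, loss_acc

Regularizing the temporal differences of the predicted joint sequence, rather than only the poses themselves, is a standard way to encourage smooth yet diverse motion; the two terms above are one plausible instantiation of such a pair of geometric losses.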
PDF: Taming Diffusion Models for Music-driven Conducting Motion Generation.pdf