Music2Dance: DanceNet for Music-Driven Dance Generation
Top 1% of 2022 papers
Abstract
Synthesizing human motion from music (i.e., music to dance) is appealing and has attracted considerable research interest in recent years. It is challenging because dance requires realistic and complex human motion, and, more importantly, the synthesized motion must be consistent with the style, rhythm, and melody of the music. In this article, we propose a novel autoregressive generative model, DanceNet, which takes the style, rhythm, and melody of the music as control signals and generates 3D dance motion with high realism and diversity. To handle the long-term spatio-temporal complexity of dance, we employ dilated convolutions to enlarge the receptive field, and adopt gated activation units as well as separable convolutions to enhance the fusion of motion features and control signals. To boost the performance of the proposed model, we capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance pair dataset. Experiments demonstrate that the proposed method achieves state-of-the-art results.
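The fusion mechanism the abstract describes is the WaveNet-style combination of a causal dilated convolution over past motion frames with a gated activation that mixes in the control signal: z = tanh(Wf * x + Vf * c) ⊙ sigmoid(Wg * x + Vg * c). The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; all function and variable names (`causal_dilated_conv1d`, `gated_block`, the kernel shapes) are assumptions for illustration.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1D convolution.

    x: (T, C_in) motion feature sequence; w: (K, C_in, C_out) kernel.
    Output frame t depends only on frames t, t-d, ..., t-(K-1)*d,
    so the model stays autoregressive.
    """
    T, c_in = x.shape
    K, _, c_out = w.shape
    pad = (K - 1) * dilation
    # left-pad with zeros so no output frame sees the future
    xp = np.concatenate([np.zeros((pad, c_in)), x], axis=0)
    y = np.zeros((T, c_out))
    for k in range(K):
        # tap k reads the frame (K-1-k)*dilation steps in the past
        y += xp[k * dilation : k * dilation + T] @ w[k]
    return y

def gated_block(motion, control, wf, wg, vf, vg, dilation):
    """Gated activation unit fusing motion features with a music
    control signal: z = tanh(f) * sigmoid(g), where f and g each sum a
    dilated convolution of the motion and a projection of the control."""
    f = causal_dilated_conv1d(motion, wf, dilation) + control @ vf
    g = causal_dilated_conv1d(motion, wg, dilation) + control @ vg
    return np.tanh(f) * (1.0 / (1.0 + np.exp(-g)))

# toy example: 16 frames, 6-dim motion features, 4-dim music controls
rng = np.random.default_rng(0)
T, c_m, c_c, c_h, K, d = 16, 6, 4, 8, 2, 2
motion = rng.standard_normal((T, c_m))
control = rng.standard_normal((T, c_c))
wf = rng.standard_normal((K, c_m, c_h))
wg = rng.standard_normal((K, c_m, c_h))
vf = rng.standard_normal((c_c, c_h))
vg = rng.standard_normal((c_c, c_h))
out = gated_block(motion, control, wf, wg, vf, vg, d)
```

Stacking such blocks with exponentially growing dilations (1, 2, 4, ...) is what gives the receptive-field growth the abstract refers to; the sigmoid gate lets the music features modulate how much of each motion feature passes through.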
Related Papers
- 'Language of Dance' With Special Attention to the Bharata Natyam and Dance Choreography in Contemporary Times(2012)
- On the Choreography of the Campus Dance(2002)
- A Study on Choreographic Tendencies in Martha Graham's Dramatic Dance of the 1940s(2006)
- Time and Space Interpretation of Wu Xiaobang’s Dance Education Practice(2018)
- Contemporary Dance Choreography and Creation Strategy with the Development of Immersive Dance Theatre(2023)