AIOZ-GDANCE

AIOZ-GDANCE is a large-scale open-source dataset and AI model for generating coherent group dance choreographies from music, covering 7 dance styles and 16 music genres. Published at CVPR 2023.

About

AIOZ-GDANCE is a pioneering academic research project from AIOZ AI and the University of Liverpool, presented at CVPR 2023. It addresses the challenging open problem of music-driven group dance generation, going beyond existing single-dancer synthesis methods to support coherent multi-person choreography.

The project introduces the GDANCE dataset, a large-scale collection of 16.7 hours of whole-body 3D motion paired with music audio, sourced from in-the-wild group dance videos. The dataset covers 7 distinct dance styles and 16 music genres, with videos ranging from 15 to 60 seconds, split into train (80%), validation (10%), and test (10%) sets. A semi-autonomous labeling pipeline with human-in-the-loop verification was used to generate high-quality 3D ground-truth annotations.

Alongside the dataset, the researchers propose a novel group dance generation model that takes a music sequence and initial dancer positions as input, then auto-regressively generates multiple synchronized choreographies that are both musically attuned and spatially consistent, avoiding common failure modes such as motion inconsistency and inter-dancer collisions. New evaluation metrics for group dance quality are also introduced. The code and dataset are released publicly to facilitate future research in AI choreography, computer animation, human motion synthesis, and entertainment technology.
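The auto-regressive rollout described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture or API: `predict_step`, the pose dimensionality, and the feature sizes are all assumptions made for the example.

```python
import numpy as np

POSE_DIM = 72      # assumed per-dancer pose vector size (illustrative)
NUM_DANCERS = 3
SEQ_LEN = 60       # number of frames to generate

def predict_step(music_frame, prev_poses):
    """Stand-in for the learned model: one generation step.

    In the real system this would be a trained network conditioned on
    music features and the dancers' motion history; here it just nudges
    the previous poses so the loop structure is runnable.
    """
    return prev_poses + 0.01 * np.tanh(music_frame.mean())

def generate_group_dance(music_features, initial_poses):
    """Auto-regressively roll out synchronized poses for all dancers.

    music_features: (T, feat_dim) audio features
    initial_poses:  (NUM_DANCERS, POSE_DIM) starting poses
    returns:        (T, NUM_DANCERS, POSE_DIM) motion sequence
    """
    poses = initial_poses
    motion = []
    for t in range(music_features.shape[0]):
        # each step is conditioned on the music and the previous poses
        poses = predict_step(music_features[t], poses)
        motion.append(poses)
    return np.stack(motion)

music = np.random.randn(SEQ_LEN, 35)           # e.g. MFCC-style features
init = np.zeros((NUM_DANCERS, POSE_DIM))
out = generate_group_dance(music, init)
print(out.shape)  # (60, 3, 72)
```

The key property of the loop is that all dancers are updated jointly at each step, which is what lets a model enforce group coherence rather than generating each dancer independently.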

Key Features

  • Large-Scale Group Dance Dataset: 16.7 hours of paired music and 3D whole-body motion data from in-the-wild group dance videos, split into train, validation, and test sets.
  • Multi-Style & Multi-Genre Coverage: Dataset spans 7 dance styles and 16 music genres, providing broad diversity for training and evaluation of choreography models.
  • Group-Coherent Dance Generation Model: Baseline model takes a music sequence and dancer positions as input and auto-regressively generates synchronized, collision-free multi-person choreographies.
  • Semi-Autonomous 3D Labeling Pipeline: Human-in-the-loop annotation method ensures high-quality 3D ground truth labels extracted from in-the-wild video footage.
  • New Group Dance Evaluation Metrics: Introduces dedicated metrics for assessing group dance quality including motion coherence and inter-dancer spatial consistency.
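One of the simplest spatial-consistency checks implied by the last feature is a pairwise-distance test over the dancers' root trajectories. The sketch below is an illustrative stand-in, not the paper's actual metric; the trajectory layout and threshold interpretation are assumptions.

```python
import numpy as np

def min_pairwise_distance(root_trajectories):
    """Smallest distance between any two dancers over a sequence.

    root_trajectories: (T, N, 3) root-joint positions for N dancers.
    Values near zero indicate a likely inter-dancer collision.
    """
    _, N, _ = root_trajectories.shape
    best = np.inf
    for i in range(N):
        for j in range(i + 1, N):
            d = np.linalg.norm(
                root_trajectories[:, i] - root_trajectories[:, j], axis=-1
            )
            best = min(best, d.min())
    return best

# Two dancers moving on parallel lines 1 m apart never collide:
t = np.linspace(0.0, 1.0, 30)
zeros, ones = np.zeros_like(t), np.ones_like(t)
traj = np.stack(
    [np.stack([t, zeros, zeros], -1), np.stack([t, ones, zeros], -1)],
    axis=1,
)
print(min_pairwise_distance(traj))  # 1.0
```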

Use Cases

  • Academic research into music-driven human motion synthesis and group choreography generation
  • Training and benchmarking AI models for multi-person dance animation in games and virtual environments
  • Generating synchronized group dance sequences for film, music video production, or virtual performances
  • Studying spatial coherence and collision avoidance in multi-agent motion generation systems
  • Developing AI tools for entertainment, education, or fitness applications that involve group movement coordination

Pros

  • First-of-Its-Kind Group Dance Dataset: Fills a critical gap in the field — prior datasets only supported single-dancer generation, making GDANCE uniquely valuable for group choreography research.
  • Fully Open Source: Both the dataset and model code are publicly released, enabling the broader research community to build upon the work freely.
  • High-Quality 3D Annotations: Semi-autonomous labeling with human oversight produces reliable 3D ground truth from real-world videos rather than controlled studio capture.

Cons

  • Research Prototype, Not a Production Tool: AIOZ-GDANCE is an academic baseline model and dataset, not a polished application — integration into real-world pipelines requires significant engineering effort.
  • Limited to 3D Pose Output: The model generates 3D motion sequences rather than rendered video, requiring additional rendering or animation pipelines to produce visual output.
  • Constrained Dance Style Coverage: While diverse, the dataset's 7 dance styles and 16 genres may not cover all choreographic traditions or niche use cases.

Frequently Asked Questions

What is AIOZ-GDANCE?

AIOZ-GDANCE is a large-scale dataset and baseline AI model for generating group dance choreographies from music, introduced at CVPR 2023 by AIOZ AI and the University of Liverpool.

How is GDANCE different from other dance generation datasets?

Unlike existing datasets that only support single-dancer generation, GDANCE specifically contains group dance videos with 3D annotations, enabling the study and benchmarking of multi-person coherent choreography.

How large is the GDANCE dataset?

The dataset contains 16.7 hours of paired music and 3D motion data (approximately 1.8 million frames), covering 7 dance styles and 16 music genres, split 80/10/10 across train, validation, and test sets.
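The stated duration and frame count are mutually consistent if we assume a 30 fps sampling rate (a common choice for in-the-wild video; the actual rate is not stated in this summary):

```python
hours = 16.7
fps = 30  # assumed frame rate; not stated here
frames = hours * 3600 * fps
print(frames)  # 1803600.0, i.e. roughly 1.8 million frames
```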

Are the code and dataset freely available?

Yes, both the GDANCE dataset and the model code are publicly released to support future research in group dance generation and music-driven choreography.

What input does the group dance generation model require?

The model takes a music audio sequence and a set of initial 3D positions of dancers as input, then auto-regressively generates coherent group dance motion sequences synchronized to the music.
