About
LuminAI is a research-driven interactive art installation from Georgia Tech's Expressive Machinery Lab that pairs human participants with an artificially intelligent virtual dance partner for real-time collaborative movement improvisation. Rather than simply mirroring or scripting responses, the AI agent analyzes participant movements through procedural representations of Viewpoints movement theory, a framework drawn from theater and dance, and improvises responses by drawing on transformed memories of past interactions with real people. In essence, the agent learns how to dance by dancing with humans.

The project is actively being extended to incorporate Laban movement theory as an additional lens for understanding gesture, and researchers are building machine learning toolkits to visualize how the agent clusters and categorizes similar gestures during learning. LuminAI also serves as a platform for public engagement with AI in informal learning environments, increasing computational literacy and awareness around creative AI. The installation has culminated in world-first live performances, including a human-AI dance collaboration featuring students from the Kennesaw State University School of Dance, demonstrating that the technology can move beyond the lab into public artistic contexts.

LuminAI is best suited for researchers, educators, artists, and curious members of the public who want to explore the expressive, social, and playful dimensions of AI through embodied interaction. It represents a pioneering effort to demonstrate that humans and machines can co-create experiences as genuine equals.
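The gesture clustering mentioned above can be sketched in miniature. The code below is a hypothetical illustration, not LuminAI's actual pipeline: each gesture is reduced to a small feature vector (the feature names are invented for this example), and a minimal k-means groups similar gestures so an agent can reason about categories of movement rather than raw motion data.

```python
import random

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def mean(points):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(points)
    return tuple(sum(xs) / n for xs in zip(*points))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = [tuple(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[nearest].append(p)
        # Keep the old center if a cluster comes up empty.
        centers = [mean(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Toy gesture features: (tempo, spatial extent) in [0, 1];
# two fast/expansive gestures and two slow/contained ones.
gestures = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.15, 0.1)]
centers, clusters = kmeans(gestures, k=2)
```

With well-separated toy data like this, the two fast/expansive gestures end up in one cluster and the two slow/contained ones in the other, which is the kind of grouping the lab's visualization toolkits are described as making visible.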
Key Features
- Real-Time Movement Improvisation: The AI virtual agent analyzes and responds to participant movements in real time, generating improvised dance responses without pre-scripted choreography.
- Viewpoints Movement Theory Engine: Uses procedural representations of the Viewpoints framework from theater and dance to understand and reason about human gesture and spatial movement.
- Memory-Based Learning: The agent builds a library of transformed memories from past interactions with humans, enabling it to grow more expressive and contextually aware over time.
- Laban Movement Theory Integration: Ongoing research extends the system with Laban movement theory as an alternative analytical framework for deeper gesture understanding.
- AI Literacy & Public Engagement: Designed to be deployed in informal learning environments to demystify AI and improve computational literacy through playful, embodied interaction.
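The memory-based learning feature above can be illustrated with a rough sketch. This is a hypothetical toy, not LuminAI's implementation, and the feature names are invented: the agent stores gesture feature vectors it observes, recalls the stored gesture most similar to the current input, and applies a simple transformation so its response is related to, but not a copy of, past movement.

```python
class GestureMemory:
    """Toy memory-based improviser (hypothetical; features invented)."""

    def __init__(self):
        self.memories = []

    def observe(self, gesture):
        """Store an observed gesture as a tuple of normalized features."""
        self.memories.append(tuple(gesture))

    def respond(self, gesture):
        """Recall the closest stored gesture and transform it."""
        if not self.memories:
            return tuple(gesture)  # nothing learned yet: echo the input
        closest = min(
            self.memories,
            key=lambda m: sum((a - b) ** 2 for a, b in zip(m, gesture)),
        )
        # Toy "transformation": mirror left-right, assuming the first
        # feature is a horizontal position in [0, 1].
        return (1.0 - closest[0],) + closest[1:]

memory = GestureMemory()
memory.observe((0.2, 0.5, 0.9))  # (x-position, height, tempo), invented
memory.observe((0.8, 0.1, 0.3))
response = memory.respond((0.25, 0.4, 0.8))  # recalls, then mirrors
```

The design point this sketch captures is that the response is grounded in real remembered movement rather than generated from scratch, while the transformation step keeps the agent from simply mirroring its partner.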
Use Cases
- Interactive museum or gallery installations that invite visitors to explore AI creativity through physical movement and dance.
- University and K-12 educational settings where students learn about AI, machine learning, and computational creativity through hands-on embodied interaction.
- Public AI literacy events and science festivals designed to make artificial intelligence approachable and engaging for non-technical audiences.
- Live artistic performances that blend human choreography with AI improvisation to create unique, unrepeatable collaborative experiences.
- HCI and AI research contexts for studying how humans perceive, interact with, and co-create alongside non-human intelligent agents.
Pros
- Pioneering Human-AI Co-Creativity: One of the few systems that treats AI as a genuine creative equal rather than a tool, enabling authentic collaborative artistic expression.
- Grounded in Established Movement Theory: Anchoring the AI in Viewpoints and Laban theory gives its responses artistic coherence and makes interactions feel intentional rather than random.
- Accessible & Playful: Designed for a broad public audience, the installation is approachable and fun, lowering barriers to engaging with AI concepts.
Cons
- Requires Physical Installation Setup: LuminAI is an art installation rather than a downloadable product, limiting access to venues with the necessary hardware and space.
- Highly Niche Application: As a research and artistic project, it is not a general-purpose AI tool and cannot be repurposed for typical productivity or commercial use cases.
- Limited Public Availability: Deployment opportunities are tied to academic events and curated performances, making it difficult for the general public to experience regularly.
Frequently Asked Questions
What is LuminAI?
LuminAI is an interactive art installation from Georgia Tech's Expressive Machinery Lab where participants improvise movement with an AI-powered virtual dance partner that learns from and responds to their gestures in real time.
How does LuminAI learn to dance?
The AI agent builds a memory of past interactions with real people and uses procedural representations of Viewpoints movement theory to analyze gestures and improvise responses — essentially learning by dancing with humans.
What movement theories is LuminAI based on?
LuminAI is built on Viewpoints movement theory, a framework from theater and dance. Ongoing research is also integrating Laban movement theory as an alternative way for the agent to understand and categorize gestures.
Who created LuminAI?
LuminAI was created by researchers at Georgia Tech's Expressive Machinery Lab, led by Professor Brian Magerko, with contributions from collaborators including Mikhail Jacob and Duri Long. The project has received NSF funding.
Where can I experience LuminAI?
LuminAI has been exhibited at public events and live performances, such as the 2024 human-AI dance performance at the KSU Dance Theater. Check the Expressive Machinery Lab website for upcoming events and exhibitions.