About
SignGPT is a landmark UKRI EPSRC Programme Grant aiming to create the world's first generative predictive transformer designed specifically for sign language. Spanning six years (2025–2031), the project tackles one of AI's most underserved challenges: unconstrained, bidirectional translation between British Sign Language (BSL) and English text or speech. The system fuses three disciplines: computer vision (to interpret and produce signing gestures), sign linguistics (to respect the grammatical and expressive structure of BSL), and cutting-edge machine learning (to enable fluid, generative translation). Unlike narrow, vocabulary-limited sign-recognition tools, SignGPT is designed to handle the full complexity and expressiveness of natural BSL.

The research is led by an interdisciplinary consortium from the University of Surrey (CVSSP), the University of Oxford, and University College London, with active participation from Deaf organisations and community stakeholders to ensure the technology is grounded in real-world needs.

Key deliverables include open-access code, models, and datasets that will benefit the broader research community, accessibility tool developers, and public service providers. The project also produces educational resources, including an AI Glossary of Terms in BSL and a Visual Language Toolkit (VLT). SignGPT represents a foundational step toward equitable, AI-driven communication tools for the Deaf and hard-of-hearing community, as well as for hearing individuals learning or working with BSL.
Key Features
- Bidirectional BSL–English Translation: Translates in both directions, from BSL video to English and from English text to sign language output, enabling full two-way communication.
- Generative Predictive Transformer Architecture: Leverages a novel transformer model purpose-built for sign language, going beyond recognition to generative, context-aware translation of unconstrained signing.
- Computer Vision for Sign Interpretation: Uses advanced computer vision techniques to capture and interpret the full spatial and gestural complexity of BSL in video input.
- Sign Linguistics Integration: Incorporates formal sign linguistics to preserve the grammatical structure, prosody, and expressive nuance of BSL rather than treating it as a manual encoding of English.
- Community Co-Development & Educational Resources: Developed in partnership with Deaf organisations and includes public-facing resources such as a BSL AI Glossary and a Visual Language Toolkit (VLT).
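To make the "bidirectional" feature above concrete, the toy sketch below pairs a sign-to-text and a text-to-sign method around a pose-sequence representation. This is a hypothetical illustration only: SignGPT has released no code, and none of these names (`PoseSequence`, `SignTranslator`) come from the project.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: illustrates the *shape* of a bidirectional
# sign-language translation interface. Not a SignGPT API.

@dataclass
class PoseSequence:
    """A signing clip as per-frame keypoints (e.g. body and hand joints)."""
    frames: list = field(default_factory=list)  # each frame: list of (x, y) points

class SignTranslator:
    """Toy stand-in for a single model exposing both translation directions."""

    def sign_to_text(self, poses: PoseSequence) -> str:
        # A real system would run a vision encoder over the video/pose input
        # and a generative decoder to produce English text.
        return f"<English translation of {len(poses.frames)}-frame clip>"

    def text_to_sign(self, text: str) -> PoseSequence:
        # A real system would generate a pose/animation sequence conditioned
        # on the text, respecting BSL grammar rather than English word order.
        return PoseSequence(frames=[[(0.0, 0.0)] for _ in text.split()])

translator = SignTranslator()
print(translator.sign_to_text(PoseSequence(frames=[[], []])))
```

The point of the sketch is simply that one system serves both directions, unlike recognition-only tools that map video to labels and stop there.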
Use Cases
- Translating recorded or live BSL video into written or spoken English for Deaf–hearing communication in public services, healthcare, and education.
- Generating sign language video or animation output from English text to assist hearing individuals in communicating with BSL users.
- Building accessibility tools and apps that integrate bidirectional BSL–English translation for organisations serving the Deaf community.
- Supporting BSL learners and educators with AI-driven feedback and translation tools grounded in formal sign linguistics.
- Advancing academic research in computer vision, NLP, and sign language processing by providing open-access models and datasets.
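The first use case above (BSL video to English) typically decomposes into a pipeline: sample video frames, extract pose/hand keypoints with a vision model, then decode English text from the keypoint sequence. The sketch below shows that control flow with stubbed stages; `extract_keypoints` and `decode_text` are hypothetical placeholders, not SignGPT functions.

```python
# Hypothetical pipeline sketch for the BSL-video-to-English use case.
# The stage functions are stubs; a real system would use a trained pose
# estimator and a generative decoder in their place.

def extract_keypoints(frame):
    """Stub for a pose-estimation stage returning (x, y) joint positions."""
    return [(0.0, 0.0)] * 21  # e.g. 21 hand landmarks per frame

def decode_text(keypoint_sequence):
    """Stub for a generative decoder mapping keypoints to English text."""
    return f"<English for {len(keypoint_sequence)} frames of signing>"

def translate_bsl_video(frames):
    """Run the full sign-to-text pipeline over a list of video frames."""
    keypoints = [extract_keypoints(f) for f in frames]
    return decode_text(keypoints)

print(translate_bsl_video([None] * 30))  # 30 dummy frames
```

Keeping the vision front-end and the language decoder as separate stages is a common design in sign language translation research, since each can be improved or swapped independently.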
Pros
- Pioneering Research: The first large-scale effort to build a generative AI transformer specifically for sign language, addressing a significant gap in AI accessibility research.
- Strong Academic & Community Collaboration: Backed by three world-leading universities and co-designed with Deaf community partners, ensuring both technical rigour and real-world relevance.
- Open Science Commitment: Plans to release code, models, and datasets publicly, making findings available to developers, researchers, and accessibility tool builders worldwide.
- Holistic Approach: Combines computer vision, linguistics, and machine learning rather than treating sign language as a secondary or simplified modality.
Cons
- Still in Active Development: The project runs until 2031 and has not yet released code or working demos, so no tools are currently available for end-users or developers.
- BSL-Only Scope: Currently focused exclusively on British Sign Language; other sign languages (ASL, Auslan, etc.) are not covered by this project.
- Research Project, Not a Consumer Product: SignGPT is an academic programme grant rather than a polished product, meaning deployment timelines and user-facing interfaces are uncertain.
Frequently Asked Questions
What is SignGPT?
SignGPT is a UKRI EPSRC Programme Grant (2025–2031) that is building the first generative predictive transformer for sign language, enabling unconstrained, bidirectional translation between British Sign Language (BSL) and English.
Who is developing SignGPT?
The project is led by an interdisciplinary team from the University of Surrey (CVSSP), the University of Oxford, and University College London, with active involvement from Deaf organisations and community partners.
What is the project timeline, and when will code be released?
The programme runs from May 2025 to April 2031. Code, models, and other outputs are currently listed as 'TBA', with public releases expected at milestones throughout the grant period.
Which languages does SignGPT cover?
The current scope of the project is British Sign Language (BSL) and English. Other sign languages are not covered by this specific grant.
How can I contact the project?
You can contact the project leads at [email protected] or [email protected] at CVSSP, University of Surrey, for more information about the project, collaboration opportunities, or joining the team.