About
SignAI is a Deaf-led AI initiative founded by Joel Kellhofer MBE, a Deaf entrepreneur with over a decade of experience building products for the Deaf community, including SignLive and the Sign Dictionary. Active since 2021, SignAI explores two core AI workflows for British Sign Language (BSL) accessibility: sign-to-text recognition and text-to-sign generation.

The sign-to-text pipeline applies temporal modelling to hand shape, facial expression, gaze, and body movement, using pose estimation pipelines and transformer-based sequence models to produce text or structured language representations. The text-to-sign workflow renders signed output through avatar or video-based interfaces, aligned to BSL grammar and structure rather than word-for-word substitution. SignAI adopts a hybrid AI + human philosophy: it is intended to complement live interpreters and Video Relay/Remote Interpreting (VRS/VRI) services, bridging everyday communication moments that cannot wait for human availability.

The project is grounded in years of real-world dataset collection, motion capture studio workflows, and prototype iteration dating back to March 2021, giving it an unusually authentic foundation in Deaf-community needs. SignWow, a related Deaf-led interpreting and accessibility service, provides commercial insight that informs the longer-term SignAI vision. The initiative will interest accessibility researchers, assistive technology developers, Deaf advocacy organisations, and enterprise teams seeking to embed sign language access into their products.
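To make the sign-to-text description concrete, here is a minimal sketch of the front end of such a pipeline: each video frame is reduced to pose keypoints, flattened into a feature vector, and the stream is chunked into overlapping temporal windows of the kind a transformer-based sequence model would consume. The keypoint counts and window sizes below are assumptions for illustration (loosely following common pose-estimation layouts); SignAI's actual architecture is not publicly documented.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical keypoint counts for illustration only; real pose estimators
# (e.g. MediaPipe-style layouts) use their own fixed landmark sets.
HAND_POINTS = 21   # per hand
FACE_POINTS = 10   # coarse facial landmarks (brows, eyes, mouth)
BODY_POINTS = 8    # upper-body joints

@dataclass
class PoseFrame:
    """One video frame reduced to 2-D keypoints by a pose estimator."""
    left_hand: List[Tuple[float, float]]
    right_hand: List[Tuple[float, float]]
    face: List[Tuple[float, float]]
    body: List[Tuple[float, float]]

def frame_features(frame: PoseFrame) -> List[float]:
    """Flatten all keypoint groups into one feature vector per frame."""
    feats: List[float] = []
    for group in (frame.left_hand, frame.right_hand, frame.face, frame.body):
        for x, y in group:
            feats.extend((x, y))
    return feats

def sliding_windows(seq, window: int = 16, stride: int = 8):
    """Chunk the per-frame vectors into overlapping temporal windows,
    the usual input shape for a transformer-based sequence model."""
    return [seq[i:i + window] for i in range(0, len(seq) - window + 1, stride)]

# Toy run: 64 zero-valued frames -> per-frame features -> temporal windows.
zero = lambda n: [(0.0, 0.0)] * n
frames = [PoseFrame(zero(HAND_POINTS), zero(HAND_POINTS),
                    zero(FACE_POINTS), zero(BODY_POINTS)) for _ in range(64)]
vectors = [frame_features(f) for f in frames]
windows = sliding_windows(vectors)
print(len(vectors[0]), len(windows))  # 120 features per frame, 7 windows
```

In a full system, each window would be fed to the sequence model, whose outputs are then decoded into text or a structured language representation.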
Key Features
- Sign-to-Text Recognition: Uses temporal modelling of hand shape, facial expression, gaze, and body movement via pose estimation pipelines and transformer-based sequence models to convert BSL into readable text.
- Text-to-Sign Generation: Renders signed output through avatar or video-based interfaces aligned to BSL grammar and structure, going beyond literal word-for-word substitution to produce natural signing.
- Hybrid AI + Human Model: Designed to complement, not replace, live BSL interpreters and VRS/VRI services—filling everyday communication gaps where human interpreters aren't immediately available.
- Motion Capture Datasets: Built on structured sign language training data captured in studio workflows tracking hand, face, and full-body movement since 2021, creating high-quality BSL-specific datasets.
- BSL-Aligned Grammar Engine: Generation pipelines focus on BSL grammatical structure and sign clarity rather than transliterating English word-for-word, resulting in more natural and accurate signed output.
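The grammar-alignment point above can be illustrated with a toy example of why text-to-sign is not simple substitution: BSL typically fronts time markers and omits many English function words. The word lists, lemma table, and reordering rule below are invented for illustration and are not SignAI's engine.

```python
# Toy gloss converter illustrating BSL-style reordering, not a real system.
TIME_WORDS = {"tomorrow", "yesterday", "today"}
FUNCTION_WORDS = {"am", "is", "are", "to", "the", "a", "an"}  # often unglossed
LEMMAS = {"going": "go", "i": "me"}  # toy citation/gloss forms

def english_to_toy_gloss(sentence: str) -> list:
    """Drop function words, reduce to gloss forms, and front time markers."""
    words = [w.lower().strip(".,!?") for w in sentence.split()]
    content = [LEMMAS.get(w, w) for w in words if w not in FUNCTION_WORDS]
    times = [w for w in content if w in TIME_WORDS]
    rest = [w for w in content if w not in TIME_WORDS]
    return [w.upper() for w in times + rest]

print(english_to_toy_gloss("I am going to the shop tomorrow"))
# ['TOMORROW', 'ME', 'GO', 'SHOP']
```

A production generation pipeline would work from a real BSL grammar model rather than hand-written rules, but the underlying point is the same: the signed output follows BSL structure, not English word order.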
Use Cases
- Providing real-time sign language captioning for Deaf individuals in environments where a human interpreter is not immediately available, such as GP appointments or customer service interactions.
- Enabling businesses and public sector organisations to add automated BSL accessibility features to video communication platforms, apps, and self-service kiosks.
- Supporting Deaf education platforms by generating avatar-based signed explanations from written educational content aligned to BSL grammar.
- Assisting assistive technology researchers and accessibility developers in building sign language recognition features using SignAI's datasets and model architectures.
- Augmenting VRS/VRI services during peak demand periods by handling routine or formulaic communication exchanges through AI-generated signed responses.
Pros
- Authentically Deaf-Led: Founded and driven by Joel Kellhofer MBE, a Deaf entrepreneur with deep lived experience and over a decade building Deaf community technology, ensuring genuine community alignment.
- Multi-Year Research Foundation: Unlike many AI accessibility startups, SignAI has documented prototype work and dataset collection dating back to March 2021, providing a credible and substantial research base.
- Complementary to Human Interpreters: The hybrid model philosophy respects the role of professional BSL interpreters while extending access to everyday moments, making it a responsible and practical approach to accessibility AI.
Cons
- Still in Research/Prototype Phase: SignAI is an exploratory initiative rather than a fully launched commercial product, meaning end-users cannot yet access a polished, deployable tool.
- BSL-Specific Scope: Current work is focused on British Sign Language, limiting immediate applicability for users and organisations working with ASL, Auslan, or other sign language variants.
- Limited Public Documentation: Technical details, API access, and integration documentation are not yet publicly available, making it difficult for developers to evaluate or build on the platform.
Frequently Asked Questions
What is SignAI?
SignAI is a Deaf-led AI initiative exploring real-time sign language recognition and generation technology. It was founded by Joel Kellhofer MBE, a Deaf entrepreneur who previously created SignLive (a remote BSL interpreting platform) and the Sign Dictionary (a free BSL learning resource used by millions).
How does SignAI's sign-to-text recognition work?
SignAI uses pose estimation pipelines combined with transformer-based sequence models to analyse hand shape, facial expression, gaze, and body movement from video input. This temporal modelling converts sign language gestures into text or structured language representations in real time.
Will SignAI replace human BSL interpreters?
No. SignAI is explicitly designed to complement, not replace, human interpreters and Video Relay/Remote Interpreting (VRS/VRI) services. Its goal is to cover everyday communication moments that cannot always wait for a human interpreter, working alongside professional BSL services.
When did SignAI start?
SignAI's origins date back to March 2021, when Joel Kellhofer began experimenting with motion capture techniques and sign language datasets. The SignAI website documents prototype videos from 2021 and 2022 showing early sign generation workflows and recognition experiments.
What is SignWow and how does it relate to SignAI?
SignWow is the current Deaf-led interpreting, translation, and accessibility service operating alongside the SignAI research initiative. SignWow provides commercial insight and real-world operational context that informs the longer-term technological vision of SignAI.