
In a notable contribution to the ever-evolving landscape of artificial intelligence (AI), Google has announced SignGemma, its latest AI model designed to translate sign language into spoken-language text.
The model, part of Google's Gemma series, is currently in the testing phase and is expected to launch later this year. Like other models in the Gemma family, SignGemma will be open-sourced, making it accessible to individuals and businesses alike.
The announcement comes shortly after SignGemma was showcased during the Google I/O 2025 keynote. The model is intended to facilitate communication for people with speech and hearing impairments, and its beneficiaries will also include people who are unfamiliar with sign language.
Gus Martin, product manager for Gemma at DeepMind, said: “This AI model is capable of providing text translation from sign language in real-time, making face-to-face communication seamless.”
It's worth adding that the model is particularly proficient at translating American Sign Language (ASL) into English.
In a demonstration shared on X (formerly Twitter), Google DeepMind highlighted that SignGemma can track hand movements and facial expressions with the help of a vision transformer. Moreover, the model can function offline, making it well suited for areas with little to no internet connectivity.