
Industry News

13 Apr 2023

AI-equipped Eyewear Reads Silent Speech

Cornell's Smart Computer Interfaces for Future Interactions (SciFi) Lab has announced a breakthrough in silent-speech recognition with EchoSpeech, a wearable eyeglasses interface that uses acoustic sensing and artificial intelligence (AI) to continuously recognize unvocalized commands based on lip and mouth movements.

Led by doctoral student Ruidong Zhang, the team at SciFi Lab has created a cutting-edge technology that has the potential to revolutionize communication for individuals who cannot vocalize sound, such as patients with speech impairments. The wearable eyeglasses, called EchoSpeech, require just a few minutes of user training data before they can recognize up to 31 unvocalized commands with about 95% accuracy. The low-power, privacy-sensitive interface can be run on a smartphone and has a wide range of applications.

"We're very excited about this system because it really pushes the field forward on performance and privacy. It's small, low-power, and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world," said Cheng Zhang, assistant professor of information science at Cornell Ann S. Bowers College of Computing and Information Science and director of the SciFi Lab.

EchoSpeech has the potential to be used in various scenarios where speech is inconvenient or inappropriate, such as noisy restaurants or quiet libraries. It can also be paired with a stylus and used with design software, eliminating the need for a keyboard and a mouse. The wearable eyeglasses are outfitted with microphones and speakers smaller than pencil erasers, making them a wearable AI-powered sonar system that sends and receives soundwaves across the face and senses mouth movements.
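The "wearable sonar" idea described above can be sketched in a few lines: a speaker emits a known near-ultrasonic chirp, a microphone records the reflection, and cross-correlating the recording with the emitted chirp yields an "echo profile" whose peak position shifts as the reflecting surface (the mouth) changes shape. This is a minimal illustrative simulation, not EchoSpeech's actual implementation; all function names, frequencies, and parameters here are assumptions chosen for the sketch.

```python
import numpy as np

# Illustrative sketch of active acoustic sensing ("sonar on the body").
# A speaker emits a known chirp; a microphone records the echo; the delay
# and shape of the echo encode the geometry of the reflecting surface.
# All names and parameters below are hypothetical, not from EchoSpeech.

FS = 48_000        # sample rate (Hz), assumed
CHIRP_LEN = 480    # 10 ms probe signal

def make_chirp(f0=17_000.0, f1=20_000.0, n=CHIRP_LEN, fs=FS):
    """Linear frequency sweep in a near-inaudible band."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1])))

def echo_profile(recorded, chirp):
    """Cross-correlate the recording with the emitted chirp; the peak
    marks the round-trip delay of the strongest reflection."""
    corr = np.correlate(recorded, chirp, mode="valid")
    return np.abs(corr) / np.max(np.abs(corr))

rng = np.random.default_rng(0)
chirp = make_chirp()

def simulate_echo(delay_samples, gain):
    """Toy reflection: a delayed, attenuated copy of the chirp plus noise."""
    rec = np.zeros(CHIRP_LEN + 400)
    rec[delay_samples:delay_samples + CHIRP_LEN] += gain * chirp
    return rec + 0.01 * rng.standard_normal(rec.size)

# Two simulated mouth poses produce echoes at different delays.
pose_a = echo_profile(simulate_echo(40, 0.8), chirp)
pose_b = echo_profile(simulate_echo(120, 0.5), chirp)

print(int(np.argmax(pose_a)), int(np.argmax(pose_b)))
```

A real system would stream such echo profiles over Bluetooth and feed them to a trained classifier on the phone; here the peak delay alone is enough to separate the two simulated poses.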

"We're moving sonar onto the body," said Cheng Zhang, describing the innovation behind EchoSpeech.

The technology behind EchoSpeech builds on the SciFi Lab's previous work on acoustic-sensing devices, such as the wearable earbud called EarIO, which tracks facial movements. Unlike other silent-speech recognition technologies that require cameras or predetermined commands, EchoSpeech removes the need for wearable cameras and processes audio data locally on the user's smartphone, ensuring privacy and security. The data is relayed in real-time via Bluetooth, requiring less bandwidth to process compared to image or video data.

"We think glass will be an important personal computing platform to understand human activities in everyday settings," Cheng Zhang added, hinting at the team's future plans to explore smart-glass applications for tracking facial, eye, and upper body movements.

The team at SciFi Lab is actively exploring commercialization opportunities for EchoSpeech, thanks in part to Ignite: Cornell Research Lab to Market gap funding. With its potential to enable silent speech recognition and improve communication for individuals with speech impairments, as well as its wide range of applications in various settings, EchoSpeech is poised to make a significant impact on the field of human-computer interaction and wearable technologies.
