Pointers at a Glance
- Cornell University researchers have developed EchoSpeech, a low-power, wearable sonar system that uses acoustic sensing and AI to continuously recognize up to 31 unvocalized commands from lip and mouth movements.
- The glasses-mounted interface pairs with a smartphone, enabling silent communication in noisy environments and hands-free control of design software such as CAD, reducing reliance on a keyboard and mouse.
Researchers from Cornell University have developed a wearable interface called EchoSpeech, which uses acoustic sensing and AI to recognize silent speech. The device can identify up to 31 unvocalized commands from lip and mouth movements.
What Is EchoSpeech and How Does It Work?
EchoSpeech is a useful communication tool for those who cannot vocalize sound and a practical solution for those in noisy environments. The wearable interface can be paired with a stylus and used with design software like CAD, reducing the need for a keyboard and mouse.
EchoSpeech needs only a few minutes of user training data before it can recognize commands, and it runs on a smartphone. The glasses are fitted with a pair of microphones and speakers smaller than pencil erasers, which together act as a wearable AI-powered sonar system, sending and receiving soundwaves across the face to sense mouth movements. A deep learning algorithm then analyzes these echo profiles in real time with roughly 95% accuracy.
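The article does not spell out the sensing pipeline, so the following is a minimal sketch of how an acoustic echo profile might be computed: a near-ultrasonic chirp is transmitted, the microphone recording is cross-correlated against it, and the stacked correlations form a time-by-range feature map that a deep learning classifier can consume. The sample rate, chirp frequencies, and frame sizes here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000     # assumed speaker/mic sample rate (Hz)
CHIRP_MS = 12   # assumed chirp duration per frame

def make_chirp(fs=FS, dur_ms=CHIRP_MS, f0=18_000, f1=21_000):
    """A near-inaudible frequency sweep transmitted by the glasses' speakers."""
    t = np.linspace(0, dur_ms / 1000, int(fs * dur_ms / 1000), endpoint=False)
    return chirp(t, f0=f0, t1=t[-1], f1=f1)

def echo_profile(mic_frame, tx):
    """Cross-correlate one mic frame with the transmitted chirp.

    Peaks in the correlation correspond to reflections arriving at
    different delays (i.e., distances); lip and skin movement changes
    the peak pattern from frame to frame.
    """
    xc = correlate(mic_frame, tx, mode="valid")
    return np.abs(xc)

# Stacking echo profiles over successive frames yields a 2-D
# (time x range) map that a classifier maps to one of the silent commands.
tx = make_chirp()
frames = np.random.randn(40, len(tx) * 2)  # placeholder for real mic capture
profiles = np.stack([echo_profile(f, tx) for f in frames])
print(profiles.shape)  # (40, len(tx) + 1) feature map for the classifier
```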
The researchers believe EchoSpeech could be used in many scenarios: enabling people who cannot vocalize sound to communicate, or serving as a communication tool in environments where vocalization is difficult, such as a noisy restaurant or a quiet library.
In its present form:
- The device could be used to communicate with others via a smartphone.
- The silent speech interface can also be used with design software.
EchoSpeech Offers a Practical and Privacy-Sensitive Solution for Silent-Speech Recognition
Cheng Zhang, assistant professor of information science and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab, said the team was excited about the system because it pushes the field forward on both performance and privacy.
The device is small, low-power, and privacy-sensitive, all essential features for deploying new, wearable technologies in the real world. Acoustic-sensing technology removes the need for wearable video cameras, making the technology more privacy-sensitive.
According to François Guimbretière, a professor in information science, audio data is much smaller than image or video data, so it requires less bandwidth to process and can be relayed to a smartphone over Bluetooth in real time. He added that privacy-sensitive information never leaves the user’s control, because the data is processed locally on the user’s smartphone rather than uploaded to the cloud.
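To see why audio is so much cheaper to move around, a rough back-of-envelope comparison helps. The figures below (16 kHz mono audio versus uncompressed 640x480 RGB video at 30 fps) are illustrative assumptions, not numbers from the paper.

```python
# Illustrative, assumed parameters -- not the device's actual data rates.
audio_bytes_s = 16_000 * 2            # 16 kHz mono, 16-bit samples
video_bytes_s = 640 * 480 * 3 * 30    # uncompressed RGB frames at 30 fps

print(f"audio: {audio_bytes_s / 1e3:.0f} KB/s")          # ~32 KB/s
print(f"video: {video_bytes_s / 1e6:.1f} MB/s")           # ~27.6 MB/s
print(f"video is ~{video_bytes_s / audio_bytes_s:.0f}x larger per second")
```

Even with heavy video compression, the gap remains large enough that streaming and processing raw audio on a phone over Bluetooth is practical where camera-based sensing would not be.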
The researchers see substantial room for further development. Ruidong Zhang, a doctoral student in information science and lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” suggests the silent speech technology could serve as an excellent input for a voice synthesizer, giving patients their voices back.