Scientists at the State University of New York (USA) have developed EarCommand, a system that can read the movements of the user's lips even when words are mouthed silently. This makes it possible to interact with devices in noisy environments, where a person's voice could be drowned out by other sounds, and to issue commands the user would rather not say out loud.
This works because even when we mouth words silently, the muscles and bones of the head shift position, which in turn deforms the ear canal. If you can learn to interpret these deformations, you can recognize which words caused them, and that is exactly what the researchers set out to do.
From a hardware standpoint, EarCommand resembles an in-ear earphone: an inward-facing speaker transmits near-ultrasonic signals into the user's ear canal. When these signals bounce off the inside of the canal, their echo is picked up by an inward-facing microphone.
A computer then analyzes these echoes and, using a special algorithm, infers the deformation of the ear canal and identifies the word that could have caused it.
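To make the idea more concrete, here is a rough, hypothetical sketch of such an echo-based recognition pipeline in Python: emit a near-ultrasonic probe signal, compare the recorded echo against it, and classify the resulting echo profile as one of the known commands. This is not EarCommand's actual algorithm; the sample rate, frequency band, feature extraction, and helper names such as echo_features and recognize are all illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Assumed probe-signal parameters (not from the paper).
FS = 48_000                     # sample rate, Hz
CHIRP_BAND = (18_000, 21_000)   # near-ultrasonic band, Hz
CHIRP_LEN = 0.02                # 20 ms probe signal


def make_chirp():
    """Linear chirp in the near-ultrasonic band, used as the probe signal."""
    t = np.arange(0, CHIRP_LEN, 1 / FS)
    f0, f1 = CHIRP_BAND
    k = (f1 - f0) / CHIRP_LEN
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))


def echo_features(echo, chirp):
    """Summarize a recorded echo as a fixed-length feature vector:
    cross-correlate it with the probe chirp, then take a normalized
    magnitude spectrum, which captures how the ear canal shape
    altered the reflection."""
    corr = np.correlate(echo, chirp, mode="valid")
    spectrum = np.abs(np.fft.rfft(corr, n=256))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)


def train_command_classifier(echo_clips, labels, chirp):
    """Fit a simple classifier on labelled echo recordings,
    one label per spoken (or silently mouthed) command."""
    X = np.stack([echo_features(e, chirp) for e in echo_clips])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, labels)
    return clf


def recognize(clf, echo, chirp):
    """Map a new echo recording to the most likely command label."""
    return clf.predict([echo_features(echo, chirp)])[0]
```

In practice, a system like this would be trained per user, since every ear canal deforms a little differently; the classifier here is deliberately simple to keep the sketch readable.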
In the tests, users spoke 32 one-word commands and 25 sentence-level commands. The error rate was 10.2% at the word level and 12.3% at the sentence level, and these figures should improve as the technology matures. What's more, the system kept working when users wore masks or were in noisy environments. And unlike some other silent speech control systems, EarCommand doesn't rely on a camera.
This is not the only system built on a similar principle. EarHealth, developed at the same university, uses echo-emitting earphones to detect ear problems such as earwax plugs, ruptured eardrums, and otitis media.