As improving technology merges malware with artificial intelligence, timeless methods such as covering one's mouth when discussing sensitive information may become more important than ever for preventing leaks.
Researchers Andrew Senior, Oriol Vinyals, and Andrew Zisserman of Google, along with researcher Joon Son Chung of Oxford University, developed a proof-of-concept attack built around a custom lip-reading algorithm that deciphered almost double the number of words as a professional human lip reader, according to a recent report.
The researchers' 'Watch, Listen, Attend and Spell' (WLAS) neural network learned to transcribe videos of mouth motion into characters, training on more than 100,000 sentences drawn from thousands of hours of subtitled BBC television footage.
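At a high level, an "attend and spell" decoder of this kind scores each encoded video frame against its current state, forms a weighted summary of the frames, and projects that summary to a distribution over output characters. The following is a minimal illustrative sketch of that attention step with untrained random weights; the dimensions, the 28-character alphabet, and all variable names are assumptions for illustration, not the researchers' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 75 video frames, 32-dim visual features,
# 28 output characters (a-z, space, apostrophe) -- illustrative only.
T, d, n_chars = 75, 32, 28

frames = rng.normal(size=(T, d))        # encoder outputs, one per frame
decoder_state = rng.normal(size=(d,))   # current decoder hidden state
W_out = rng.normal(size=(d, n_chars))   # output projection (untrained)

# "Attend": score each frame against the decoder state,
# then softmax the scores into attention weights
scores = frames @ decoder_state
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Context vector: attention-weighted sum of the frame features
context = weights @ frames

# "Spell": project the context to a probability distribution
# over characters and pick the most likely next one
logits = context @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_char = int(probs.argmax())
```

In a trained system this loop repeats once per output character, with the decoder state updated after each prediction; here a single step is shown only to make the attend-then-spell idea concrete.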
If threat actors were to weaponize this kind of technology, the researchers suggest, people may need to consider covering their mouths when speaking on sensitive topics around devices with cameras, or at least mumbling their words.
“An obvious unlawful use of this technology is espionage, since it makes it possible to 'listen in' on a conversation from a distance,” Arctic Wolf CEO and Co-Founder Brian NeSmith told SC Media. “A nightmare scenario is a criminal stealing login and password information by listening in on a conversation from outside of a building through a window.”
NeSmith said this kind of technology makes it even more important for companies to weigh physical security alongside digital security when assessing their overall security posture, ensuring that personnel who handle sensitive information work in environments that do not allow remote visual eavesdropping.
“Personnel will also need to be sure that they are not discussing sensitive information in public places,” NeSmith said. “This new technology makes it even more true that you just never know who is listening (or watching).”