Security of voice verification ID systems put into question, again

Voice ID systems

How secure are the voice verification systems widely used by financial institutions, healthcare providers and dozens of other consumer services? Can they be trusted to protect sensitive data? Those questions were posed recently when a real-world test showed how artificial intelligence could be used to synthesize a voice, bypass security protections and access sensitive bank information.

In an instance of what is called synthetic voice fraud, Vice reporter Joseph Cox was able to bypass UK-based Lloyds Bank’s Voice ID program to access his own bank balance and a list of recent transactions and transfers. Cox used an AI-powered replica of his own voice to carry out the fraud.

Abuse of systems that allow customers to choose the option “My voice is my password” is rare. However, recent advances in the democratization of AI technology and the affordability of its applications are now putting this type of fraud within reach of the masses. An example of synthetic voice services becoming more common comes from Samsung, which this week rolled out a virtual mobile assistant that uses an AI-generated copy of a user’s voice to answer calls via its Bixby mobile service.

Synthetic voice fraud

The Lloyds website describes “Voice ID” as “[analyzing] over 100 different characteristics of your voice which, like your fingerprint, are unique to you. Such as, how you use your mouth and vocal chords, your accent and how fast you talk. It even recognises you if you have a cold or a sore throat.”

In Cox’s report on his hack, a Lloyds Bank representative said:

“Voice ID is an optional security measure, however we are confident that it provides higher levels of security than traditional knowledge-based authentication methods, and that our layered approach to security and fraud prevention continues to provide the right level of protection for customers’ accounts, while still making them easy to access when needed.”

According to a separate 2020 report from the Future Today Institute, “Synthesized media has a known problem area: It can be used by malicious actors to mislead people, to trick voice authentication systems and create forged audio recordings. Voice fraud cost U.S. businesses with call centers $14 billion last year alone, according to a study by call center software maker Pindrop.”

Cox’s report also cited the use of voice authentication systems by TD Bank, Wells Fargo and Chase. To what extent those firms rely on, or allow, voice to be used to access sensitive data is unclear.

Cox posted a video of his test on Twitter.

In a tweet, François Chollet, an AI practitioner, commented on the Cox hack:

“Voice ID is likely even less secure than face ID at this point. In general biometrics are never a good idea.”
