
Stop deepfakes with employee awareness training and better personal data management

In 2019, the chief executive of a UK energy company received a call from someone he believed was his boss at the firm's German parent company, and he did not hesitate to meet the caller's request to quickly transfer €220,000 (about $243,000) to a Hungarian supplier. Even though the executive recognized his boss's German accent and the cadence of his voice, the caller wasn't actually his boss. It was a fraudster using AI voice-cloning technology.

By the time the victim realized he had been conned (after the scammer called back asking for another transfer), it was already too late: the money had been moved out of the Hungarian account and dispersed to other locations. While voice fraud isn't new, this was the first reported example of an audio deepfake scam, and although some doubt the veracity of the story, it highlights the potentially massive threat that deepfakes pose to businesses.

The rise of deepfakes

Whether they come in the form of images, videos, audio, or text, "deepfakes" (synthetic media altered or created with the help of machine learning or artificial intelligence) have proliferated at an alarming rate. According to Sensity, the number of deepfake videos online has nearly doubled every six months since 2018, with more than 85,000 deepfake videos detected as of December 2020. Given the significant rise in global searches for "deepfake" since the beginning of 2021, that number is likely far higher today.

Deepfakes, especially the text, static-image, and audio varieties, are becoming more believable and, at the same time, cheaper and simpler to create. Recent research from FireEye shows that researchers often share pre-trained deepfake models on open-source repositories like GitHub, which inadvertently lowers the barrier to entry for criminals looking to adapt the technology for nefarious purposes. Some marketing companies have also started offering deepfakes as a service, with prices depending on the sophistication of the product required. And if the asking price from a legitimate provider runs too high, threat actors can turn to the dark web, where ready-made deepfake videos sell for around $50 and software to create deepfakes can be purchased for as little as $25.

Personal info and deepfakes: a dangerous combination

The abundance of personal information available online, including audio and video samples of business leaders, has already made it easier for threat actors to carry out social engineering attacks. By combining this data with deepfakes, however, cybercriminals can in theory create almost undetectable phishing attacks.

Weaponized deepfakes are not theoretical. In March 2021, the Federal Bureau of Investigation (FBI) warned that threat actors would more than likely use deepfake technology for spearphishing and social engineering crimes. As a result, the FBI forecasts the evolution of a newly defined cyberattack vector called Business Identity Compromise (BIC). An extension of Business Email Compromise (BEC), BIC uses deepfake tools to develop fake corporate personas or sophisticated emulations of existing employees. For example, threat actors using so-called "readfakes" (AI-generated text) can successfully imitate a CEO's writing style.

With more companies embracing hybrid or fully remote work, and employees consequently less familiar with their co-workers, this new threat vector has only grown. Exploiting remote working environments, attackers could even use deepfake technology on real-time video calls to exfiltrate business information or trick victims into sharing login credentials.

Unfortunately, judging by how employees respond to present-day social engineering scams, it's a question of when, not if, employees will fall for deepfake-driven ones. Although the number of businesses that experienced "traditional" phishing attacks increased dramatically in 2020, staff awareness has not improved to match. According to a recent Terranova Security report, almost 20% of employees still click on phishing links, even after undergoing security or phishing awareness training. Faced with far more advanced scams, employees will find it even harder to tell what's real from what's fake.

Researchers are working to produce tools capable of detecting deepfakes, but most current and proposed detection technologies focus on fake images and videos, and they tend to go out of date quickly because deepfake generation techniques advance so rapidly. Even though technological countermeasures lag behind the threat, however, businesses can still take a proactive approach to defense.
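To make that detection approach concrete, here is a minimal sketch of the frame-level technique many video detectors build on: sample frames from a clip, score each with a pre-trained image classifier, and average the scores. The model file ("deepfake_detector.pt") and its single "fake" logit output are assumptions for illustration only, not a real product; production detectors are considerably more sophisticated.

```python
# Minimal sketch of frame-level deepfake scoring, assuming a pre-trained
# binary classifier saved as TorchScript. "deepfake_detector.pt" is a
# hypothetical filename; real detectors are far more elaborate.
import cv2        # pip install opencv-python
import numpy as np
import torch      # pip install torch

def score_video(path: str, model_path: str = "deepfake_detector.pt",
                frames_to_sample: int = 16) -> float:
    """Return the mean 'fake' probability over sampled frames (0.0-1.0)."""
    model = torch.jit.load(model_path).eval()
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    scores = []
    for idx in np.linspace(0, total - 1, frames_to_sample, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        # Resize to the 224x224 input the assumed model expects, scale to
        # [0, 1], and reorder to PyTorch's (batch, channel, height, width).
        frame = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), (224, 224))
        tensor = torch.from_numpy(frame).float().div(255).permute(2, 0, 1).unsqueeze(0)
        with torch.no_grad():
            logit = model(tensor)  # assumed: a single "fake" logit
        scores.append(torch.sigmoid(logit).item())
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

if __name__ == "__main__":
    print(f"Estimated fake probability: {score_video('ceo_message.mp4'):.2f}")
```

A classifier like this only learns the artifacts of the generators it was trained against, which is precisely why such tools degrade so quickly as new generation techniques appear.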

To mitigate the dangers of deepfake technologies, companies should move quickly to educate all employees about the risks of deepfakes and how to spot them. Train employees on how threat actors leverage the technology, show them how easily an attacker can emulate a trusted individual, and make clear what to do if they detect a deepfake.

Additionally, if they don't have them already, companies should introduce strict verification procedures, especially for money and data transfers. It's vital that employees understand management will never call them and ask them to release funds or grant access to a critical business system. If an urgent request arrives via an unverified call, employees should initiate authentication themselves by calling the individual back directly and asking them something only that person would know.
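As an illustration of what a strict verification procedure can look like when encoded into a payment workflow, the sketch below gates any transfer request arriving over an unverified channel behind a mandatory out-of-band callback. The channel labels and the €10,000 threshold are hypothetical, not a standard; the point is that the rule gets enforced by the system rather than left to an employee's judgment mid-phone-call.

```python
# Illustrative sketch of an out-of-band verification gate for transfer
# requests. Channel names and the threshold are assumptions; adapt them
# to your own approval policy.
from dataclasses import dataclass

VERIFIED_CHANNELS = {"signed_email", "approved_portal"}  # assumed channel labels
CALLBACK_THRESHOLD_EUR = 10_000  # transfers at or above this need a callback

@dataclass
class TransferRequest:
    requester: str         # claimed identity, e.g. "CEO"
    amount_eur: float
    channel: str           # how the request arrived: "phone", "signed_email", ...
    callback_confirmed: bool = False  # set True only after calling the requester
                                      # back on a company-directory number,
                                      # never one supplied by the caller

def may_execute(req: TransferRequest) -> bool:
    """Approve only requests that arrived via a verified channel or
    were confirmed through an out-of-band callback."""
    if req.amount_eur >= CALLBACK_THRESHOLD_EUR and not req.callback_confirmed:
        return False  # large transfers always require a callback
    return req.channel in VERIFIED_CHANNELS or req.callback_confirmed

# A voice call alone, however convincing, never releases funds:
urgent_call = TransferRequest("CEO", 220_000, channel="phone")
assert may_execute(urgent_call) is False
urgent_call.callback_confirmed = True  # verified via a directory number
assert may_execute(urgent_call) is True
```

Note that the callback must go to a number from the company directory, never one the caller supplies, since a fraudster will happily "verify" themselves.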

However, for true protection, companies ultimately need to limit the amount of employee personal information available online. To craft almost undetectable deepfakes, threat actors can search publicly accessible data brokers to acquire personal information such as an employee's phone number, marital status, medical history, interests, and political affiliations. While removing personal information from data brokers takes time, plenty of subscription services can do the hard work for employees.

As deepfake technologies improve, so will their effectiveness as a tool for cybercriminals. Cybersecurity experts have already ranked deepfakes as the most serious AI threat that businesses are likely to face in the near future. To defend against a threat capable of bypassing even the most advanced cyber defenses, companies need to take a highly proactive approach to locking down their vulnerabilities.

While training employees will minimize the chances that an organization falls for a deepfake attack, businesses also need to act quickly to cut off the supply of weaponizable employee personal information available online. With AI capable of impersonating an employee's identity, personal information has become a vulnerable organizational endpoint.

Rob Shavell, co-founder and CEO, Abine

Rob Shavell

Rob Shavell is a co-founder and CEO of Abine / DeleteMe, The Online Privacy Company. He has been quoted as a privacy expert in the Wall Street Journal, New York Times, The Telegraph, NPR, ABC, NBC, and Fox News. Rob has also been a vocal proponent of privacy legislation reform, including as a public advocate of the California Privacy Rights Act (CPRA), and Abine is an early implementer of the new Global Privacy Control.

Rob brought Abine's core products to market, including Blur, which has protected the privacy of over 10 million consumers, and DeleteMe, which has completed over 30 million opt-outs from data brokers.

Prior to Abine, Rob was VP of Product at Identity Force, an identity theft protection provider; co-founder of one of the first consumer group travel portals, TravelTogether.com; and an associate at Softbank Capital Partners (Boston) and Softbank / Mobius Venture Capital (Silicon Valley). Rob has a BA from Cornell University, where he began his studies in the School of Architecture.
