
LastPass thwarts attempt to deceive employee with deepfake audio


A LastPass employee was targeted earlier this week by a deepfake audio call impersonating LastPass CEO Karim Toubba, but because the call came over WhatsApp, outside the company's normal business communication channels, the employee did not fall for the scam.

On the day of the incident, Mike Kosak, senior principal intelligence analyst at LastPass, published a blog post to educate the industry on the rise of deepfake voice calls and to urge security teams to train their staff to be more aware of them.

“An employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO,” wrote Kosak. “Due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt — such as forced urgency — our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

Kosak added that there was zero impact on LastPass. However, the company wanted to share the incident to raise awareness that deepfakes are no longer solely the purview of sophisticated nation-state threat actors and are increasingly being leveraged in executive impersonation fraud campaigns.

A McAfee blog from last year pointed out that, of 7,000 people surveyed, 1 in 4 said they had experienced an AI voice-cloning scam or knew someone who had.

“These attacks are an evolution of BECs [business email compromise attacks],” said Nick France, chief technology officer at Sectigo. “However, rather than trying to get someone to click a fake link, they add personal and direct pressure outside of email, such as voice, SMS and video, to get the employee to do something they shouldn't or wouldn't normally do.”

France said that with deepfake technology, we can no longer trust what we see and hear remotely. Perfectly written phishing emails, audio messages with the correct tone (as in the LastPass case) and now even fully faked videos can be created easily and used to socially engineer a way into an organization.

“Even with all of the stories available today of people being scammed, employees may still believe that live audio or video cannot be faked and act on requests they are given seemingly by colleagues or leaders without question — as we have seen in this recent case,” said France.

Tactics rarely change, but the tools do, noted Morgan Wright, chief security advisor at SentinelOne and an SC Media columnist.

Wright said the industry needs to think of deepfake voice as just another tool, alongside imagery and video. It’s a more sophisticated form of social engineering: the goal is to get a person to take an action they believe is trusted when, in fact, it is not.

“I don’t believe more tech is the answer,” said Wright. “We need to slow down and not respond in a knee-jerk fashion. We’re in a different threat environment than even a year ago. We need to verify first and establish trust second. Part of this involves getting employees to think critically about the event. Why would a CEO of a large company reach out, bypass chain-of-command, and go directly to them? It doesn’t make sense. And if it doesn’t make sense, trust your intuition. Sometimes low-tech can beat hi-tech.”

The proliferation of AI platforms has fueled a rise in sophisticated phishing campaigns, posing a significant challenge to cybersecurity teams, added Krishna Vishnubhotla, vice president of product strategy at Zimperium. By leveraging AI-powered tools, threat actors can automate various stages of the phishing process, allowing them to craft highly convincing messages and engage with potential victims at scale in a shorter time frame.

“The danger of audio and video deepfakes will escalate when malware on our phones accesses our contact lists to send misleading SMS messages, making it appear as though they are from a ‘known’ contact on your device,” said Vishnubhotla. “This will apply to messages and voicemails, too. When they get this right, all the visual cues on the phone will work against you to avoid any doubt. We already see malware accessing contacts from devices to spread further, so this is not that far-fetched if you think about it.”

As deepfake technology continues to improve, it’s likely that we will see an increase in the number of deepfake attacks, explained Damir Brescic, chief information security officer at Inversion6. Brescic said attackers can now use deepfake technology to bypass traditional security measures, such as two-factor authentication and knowledge-based authentication.

“Overall, employee awareness training is always a top line of defense, especially with emerging threats like deepfake attacks,” said Brescic. “The ‘see something, say something’ aspect has saved more companies from major incidents than technology alone. The training should cover topics such as how to verify the identity of callers, how to spot deepfake audio, and how to report suspicious activity.”
