
How a layered security approach can prevent AI-based phishing 


When Google autocompletes a search query, when Amazon recommends a product based on shopping preferences, or when Tesla's Autopilot makes a navigation decision – that's AI at work. Once the exclusive domain of software engineers, AI has become readily available to everyone with the advent of generative tools such as Bard, DALL-E, and ChatGPT, including, regrettably, fraudsters, scammers, hacktivists, and extortionists.

The use of AI in phishing

Scams and fraud riddle the internet today. In 2022 alone, U.S. businesses lost about $10 billion to criminals using tactics such as phishing, wire fraud, business email compromise, and ransomware, all designed to confuse, exploit, and dupe unsuspecting victims.

Here comes the scary part: growing reports suggest that scammers are increasingly using AI to impersonate people, clone voices, and launch highly targeted phishing attacks. Recently in China, a hacker used AI to create a deepfake of a victim's friend and convinced the victim to transfer money over a video call. In another example, crypto exchange Binance discovered that scammers were using deepfakes to bypass its know-your-customer (KYC) identity verification processes. Nor is AI used solely for phishing and impersonation: it is now used to create malware, identify targets and vulnerabilities, spread disinformation, and launch cyberattacks with greater sophistication, speed, and scale.

Why organizations need a multi-layered security strategy

As AI technologies mature, scams, disinformation, and cyberattacks are set to intensify. Why? Because human behavior is largely predictable, it's far easier to exploit human foibles than to defeat cybersecurity systems. Even if a hacker goes to the trouble of building potent malware, they still need a path into the organization to deploy it, and that's where phishing comes in.

How can organizations defend against the growing threat of AI? The answer lies in adopting a layered approach to security, one that goes beyond traditional cybersecurity controls to include the human element. The elements of such a strategy include:

  • A human firewall: If employees are taught to develop a security instinct, they can serve as a human firewall, a defense layer that identifies, blocks, and reports malicious activity in its early stages. To build that instinct, organizations must subject employees to regular phishing tests so they learn to recognize visual cues, such as distortions in images and video, strange head and torso movements, and syncing issues between video and audio, as well as situational cues, such as a call that appears out of the blue or an unusual request. Studies indicate that users who spend more hours in security training are better at spotting both human- and AI-generated phishing emails than those who spend fewer.
  • AI-based security technology: Think of every new piece of equipment, employee, device, software package, and application as an opportunity for cybercriminals to compromise systems. It's only a matter of time before adversaries leverage AI to increase the speed, scale, and success rate of cyberattacks and scams, and security teams can't keep up with that pace on their own. Organizations need to deploy advanced security technology that harnesses AI to inspect the content, context, and metadata of all emails, messages, and URLs. For example, security teams can use AI to detect phishing attacks that rely on visually identical (lookalike) URLs; a simplified illustration of such a check appears after this list. AI can also analyze large volumes of security alerts and signals, reducing false positives, and can be programmed to perform incident response functions such as cutting off network access, isolating infected devices, notifying security teams, gathering evidence, and restoring data from backups.
  • Stronger authentication: Companies can prevent cybercriminals from hijacking identities and impersonating employees by implementing authentication that neither an AI nor a human adversary can socially engineer. CISA recommends phishing-resistant MFA, an authentication mechanism that stores security keys and credentials in FIDO2 authenticators and hardware tokens instead of relying on traditional one-time passwords and SMS codes. Because phishing-resistant MFA removes the human from the equation, it greatly reduces the risk of AI-driven social engineering attacks; an illustrative sketch of this origin-binding idea also follows the list.
  • Policies and procedures around AI: When it comes to AI, give employees clear and transparent guidance. If the organization uses AI, employees must understand what it does, why it's being used, and what steps are taken to limit misuse. Employees who use AI regularly should never input sensitive or confidential information; Samsung reportedly suffered a data leak after an employee shared proprietary code with ChatGPT. Teach employees that if they encounter any instance of deepfake phishing, impersonation, or information manipulation, they must report it to security teams immediately.
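
As a concrete illustration of the lookalike-URL check mentioned under AI-based security technology, here is a minimal Python sketch. The trusted-domain list and the confusable-character map are assumptions made purely for demonstration; production detectors rely on the full Unicode confusables data, trained models, and threat intelligence feeds.

```python
import unicodedata
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list of the organization's trusted domains (illustrative only).
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}

# Tiny sample of visually confusable substitutions; real detectors use the full
# Unicode confusables tables plus trained models.
CONFUSABLES = {"0": "o", "1": "l", "rn": "m", "\u0430": "a", "\u0435": "e"}  # incl. Cyrillic a, e

def normalize(host: str) -> str:
    """Fold Unicode tricks and common lookalike substitutions into a canonical form."""
    folded = unicodedata.normalize("NFKC", host).lower()
    for fake, real in CONFUSABLES.items():
        folded = folded.replace(fake, real)
    return folded

def lookalike_score(url: str) -> tuple[str, float]:
    """Return the closest trusted domain and how similar the URL's host is to it."""
    host = urlparse(url).hostname or ""
    norm = normalize(host)
    best = max(TRUSTED_DOMAINS, key=lambda d: SequenceMatcher(None, norm, d).ratio())
    return best, SequenceMatcher(None, norm, best).ratio()

if __name__ == "__main__":
    suspect = "https://examp1e.com/login"  # digit '1' substituted for the letter 'l'
    target, score = lookalike_score(suspect)
    if score > 0.9 and urlparse(suspect).hostname not in TRUSTED_DOMAINS:
        print(f"Possible lookalike of {target} (similarity {score:.2f}): {suspect}")
```

Run against https://examp1e.com/login, the sketch flags the host as a near-match for example.com, the kind of signal an AI-driven email or URL filter would weigh alongside many others.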
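
For the stronger-authentication layer, the sketch below shows, in very simplified form, why origin binding makes FIDO2-style credentials phishing resistant: the authenticator's response is tied to the relying party's identity, so a response harvested on a lookalike domain fails on the real site. This is not the actual WebAuthn/FIDO2 protocol; it uses a shared HMAC secret purely for brevity, whereas real authenticators use per-site public-key credentials.

```python
import hashlib
import hmac
import os
import secrets

# Toy stand-in for a hardware authenticator's secret; in real FIDO2 the private
# key never leaves the device and the server stores only a public key.
DEVICE_SECRET = os.urandom(32)

def authenticator_sign(rp_id: str, challenge: bytes) -> bytes:
    """The authenticator mixes the relying party ID into everything it signs."""
    return hmac.new(DEVICE_SECRET,
                    hashlib.sha256(rp_id.encode()).digest() + challenge,
                    hashlib.sha256).digest()

def server_verify(rp_id: str, challenge: bytes, response: bytes) -> bool:
    """The legitimate server only accepts responses bound to its own rp_id."""
    expected = authenticator_sign(rp_id, challenge)
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
# Response produced while the user is on an attacker's lookalike domain:
phished = authenticator_sign("examp1e.com", challenge)
print(server_verify("example.com", challenge, phished))   # False: origin mismatch
print(server_verify("example.com", challenge,
                    authenticator_sign("example.com", challenge)))  # True
```

Because the site identity is baked into the cryptographic response, there is no code or password for a deepfaked caller or a phishing page to trick the user into revealing.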

As threat actors discover new ways to use AI to attack and compromise people, no one can predict what the future of cybercrime holds. It's important that organizations recognize these imminent risks, take stock of their security defenses, leverage AI where it helps, develop policies and procedures around its use, and raise security awareness so they are better prepared as AI becomes mainstream.

Stu Sjouwerman, founder and CEO, KnowBe4
