
Four ways to stay ahead of the AI fraud curve


As organizations adopt AI to minimize their attack surface and thwart fraud, cybercriminals are also using AI to automate their attacks on a massive scale. The virtual-first world driven by the COVID-19 pandemic has given bad actors the perfect opportunity to access consumer accounts, leveraging AI and bots to commit fraud like never before.

In today’s AI arms race, companies try to stay ahead of the attack curve while criminals work to overtake it. Here are four AI attack vectors every security pro should know about, and ways to combat each of them:

  • Identify and stop deepfakes.

Deepfakes use advanced, neural-network-powered AI to superimpose existing video footage or photographs of a face onto a source head and body. They are relatively easy to create and often make fraudulent video and audio content appear strikingly real. Deepfakes have also become increasingly hard to spot as criminals use more sophisticated techniques to trick their victims. In fact, Gartner predicts that by 2023, deepfakes will account for 20 percent of successful account takeover attacks, in which cybercriminals gain access to user accounts and lock out the legitimate user.

Unfortunately, bad actors will weaponize deepfake technology for fraud as biometric-based authentication solutions are widely adopted. Of greater concern, many digital identity verification products cannot detect and prevent deepfakes, bots and sophisticated spoofing attacks. Organizations must make sure any identity verification product they implement is sophisticated enough to identify and stop deepfake attacks.
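
To make that requirement concrete, here is a minimal sketch of how an identity-verification pipeline might screen a submitted video with a frame-level “real vs. synthetic” classifier. The model, weights file and 0.5 threshold are illustrative assumptions, not any specific product’s method; real systems combine many signals, including liveness checks and audio analysis.

```python
# Minimal sketch: score sampled video frames with a binary
# real-vs-synthetic classifier as one layer of identity verification.
import cv2                                   # pip install opencv-python
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

model = resnet18(num_classes=2)              # class 0: real, class 1: synthetic
# model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical trained weights
model.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def synthetic_score(video_path: str, sample_every: int = 15) -> float:
    """Return the mean probability that sampled frames are synthetic."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                prob = torch.softmax(model(preprocess(rgb).unsqueeze(0)), dim=1)
            scores.append(prob[0, 1].item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

if synthetic_score("selfie_video.mp4") > 0.5:    # threshold is illustrative
    print("Flag for manual review: possible deepfake")
```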

  • Lock down machine learning systems.

As digital transformation accelerates amid the COVID-19 pandemic, fraudsters are leveraging machine learning (ML) to speed up attacks on networks and systems, using AI to identify and exploit security gaps. While AI is increasingly used to automate repetitive tasks, improve security and identify vulnerabilities, hackers will in turn build their own ML tools to target those processes. Because cybercriminals adopt new technologies faster than security defenses can counter them, it’s critical for enterprises to secure ML systems and implement AI-powered solutions that recognize and halt attacks.
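
As one example of the defensive side of that arms race, the sketch below uses scikit-learn’s IsolationForest to flag login telemetry that deviates sharply from a learned baseline, a common unsupervised approach to spotting automated attacks. The feature set, sample values and contamination rate are assumptions for illustration.

```python
# Minimal sketch: unsupervised anomaly detection over login telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, failed_login_ratio, distinct_ips, geo_velocity_kmh]
baseline = np.array([
    [3, 0.05, 1, 0],
    [5, 0.10, 1, 40],
    [2, 0.00, 1, 0],
    [4, 0.08, 2, 60],
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A credential-stuffing burst looks nothing like the baseline:
suspect = np.array([[400, 0.92, 37, 9000]])
if detector.predict(suspect)[0] == -1:       # -1 means "anomaly"
    print("Throttle and challenge: traffic looks automated")
```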

  • Secure and manage AI to prevent malfunctions.

Gartner reports that through 2022, 30 percent of all AI cyberattacks will leverage training-data poisoning, AI model theft or adversarial samples to attack AI-powered systems. These attacks manipulate an AI system into altering its behavior, which can have widespread and damaging repercussions because AI has become a core component of critical systems across industries. Cybercriminals have found new ways to exploit inherent limitations in AI algorithms, such as changing how data gets classified and where it’s stored. These attacks on AI will ultimately make it hard to trust the technology to perform its intended function. For example, an AI attack could hinder an autonomous vehicle’s ability to recognize hazards or prevent an AI-powered content filter from removing inappropriate images. Enterprises must implement standards for how AI applications are trained, secured and managed to avoid system hacks.
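
To illustrate what an “adversarial sample” is, the sketch below implements the fast gradient sign method (FGSM), a textbook technique for perturbing an input just enough to change a model’s decision. The stand-in classifier and epsilon value are illustrative assumptions; the attacks described above target production models in the same spirit.

```python
# Minimal sketch: craft an adversarial sample with FGSM.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x: torch.Tensor, label: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Nudge x in the direction that most increases the model's loss."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), label).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)          # placeholder input image
label = torch.tensor([3])             # its true class
x_adv = fgsm(x, label)
print("max pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

A perturbation this small is typically invisible to a human reviewer, which is why defenses such as adversarial training and input validation matter.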

  • Deploy strong authentication to stop large-scale spearphishing attacks.

AI lets cybercriminals execute spearphishing attacks at scale by harvesting personal information, tracking user activity on social platforms and analyzing a victim’s tone of writing, such as how they communicate with colleagues and friends. Cybercriminals then use this data to make their emails convincing. For example, an automated targeted email may appear to come from a trusted colleague or reference an event the user expressed interest in, making the victim likely to respond or click a link that downloads malware designed to steal usernames and passwords. In addition to educating users about phishing emails, organizations must secure their networks with strong authentication so that hackers can’t use stolen credentials to pose as trusted users or bypass spam filters to reach user inboxes.
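
As one concrete form of strong authentication, the sketch below verifies a time-based one-time password (TOTP) as a second factor using the pyotp library, so a password harvested by a spearphishing email is not enough on its own. The account names and secret handling are simplified assumptions for illustration.

```python
# Minimal sketch: TOTP as a second factor (pip install pyotp).
import pyotp

# Per-user secret, generated once at enrollment and stored server-side:
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Shown to the user once, for their authenticator app:
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

def login(password_ok: bool, submitted_code: str) -> bool:
    """A correct password alone is not enough; the TOTP code must also verify."""
    return password_ok and totp.verify(submitted_code, valid_window=1)

# Even with a stolen password, an attacker without the device is rejected:
print(login(password_ok=True, submitted_code="000000"))  # almost surely False
```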

As enterprises escalate their AI strategies to succeed amid the continuing COVID-19 pandemic, they must understand that fraudsters are escalating their own strategies to outsmart new AI technologies and commit cybercrime. By implementing strong authentication and securing AI systems effectively, enterprises can combat the growing threat of AI attacks, keeping customer accounts secure and AI systems performing as intended.

Robert Prigge, chief executive officer, Jumio
