Identity, AI/ML, Generative AI

Deepfakes will hurt 30% of organizations’ trust in biometrics by 2026

AI deepfakes will cause 30% of companies to lose trust in facial biometric authentication solutions by 2026, Gartner analysts predict.

Deepfakes — AI-generated replicas of a person’s likeness — could shatter confidence in face biometric authentication solutions for 30% of companies by 2026, Gartner analysts predict.

With AI imitations becoming more realistic and easier to generate, face-based identity verification and authentication systems will struggle to keep their defenses current, according to Akif Khan, VP analyst at Gartner. The technology research and consulting company announced its prediction on Feb. 1 ahead of the Gartner Security & Risk Management Summit in Dubai.

Currently, most face biometric solutions use presentation attack detection (PAD) to determine the "liveness" of a user attempting to authenticate with their face. In a presentation attack, an attacker places an imitation of the actual user, such as a mask or a video, in front of a camera or scanner. PAD is designed to distinguish a live human face from these types of imitations.

However, attackers are increasingly turning to higher-complexity digital injection attacks using deepfakes, in which the attacker bypasses a physical camera by inputting imagery directly into the system’s data stream via tools such as virtual cameras, according to Mitek Systems.
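The layered distinction described above — PAD catching imitations held in front of a real camera, while injection attacks sidestep the camera entirely — can be sketched as a simple decision function. This is a minimal illustration, not any vendor's actual pipeline: the field names, the `PAD_THRESHOLD` operating point, and the device-attestation and frame-entropy signals are all assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class CaptureEvent:
    """Metadata accompanying one face capture (all fields illustrative)."""
    pad_liveness_score: float  # 0.0-1.0 output of a presentation attack detector
    device_attested: bool      # capture provably came from a hardware camera
    frame_entropy_ok: bool     # injected/virtual streams can show telltale artifacts

# Assumed operating point; real deployments tune this against
# false-accept/false-reject targets.
PAD_THRESHOLD = 0.9

def accept_capture(event: CaptureEvent) -> bool:
    # Layer 1: presentation attack detection — rejects masks, photos,
    # or replayed videos shown to a physical camera.
    if event.pad_liveness_score < PAD_THRESHOLD:
        return False
    # Layer 2: injection attack detection — checks PAD alone does not
    # perform, aimed at virtual cameras and tampered data streams.
    if not (event.device_attested and event.frame_entropy_ok):
        return False
    return True
```

The point of the sketch is that a high PAD score is necessary but not sufficient: a deepfake injected through a virtual camera can score as "live" at layer 1 and must be caught by provenance-style checks at layer 2.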

Injection attacks increased by 200% in 2023, although presentation attacks were still more common, according to Gartner research.

“Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” Khan said in a statement.

Deepfake fraud threat accelerating, data suggests

The use of deepfakes for fraud and biometric authentication bypass has been a concern for years, especially in the financial sector. For example, in 2021, tax fraudsters in China purchased high-definition photographs of faces online to create deepfakes that fooled China's government-run facial recognition technology, ultimately enabling them to steal the equivalent of $75 million via fake tax invoices.

Recent data suggests the security threat posed by AI-generated deepfakes is growing, with research by Onfido revealing a 3,000% increase in deepfake fraud attempts in 2023. Companies are also increasingly using biometric authentication methods, with a GetApp survey finding 79% of companies used these methods in 2022, compared with only 27% in 2019.

Companies using facial recognition for authentication and identity verification should ensure their solutions are equipped to keep up with the advancement and increased availability of deepfake generation tools, according to Gartner.

“Organizations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD [injection attack detection] coupled with image inspection,” Khan concluded.
