
Can AI rescue zero trust?

Will zero trust ever take flight? 

Years after the concept brought “trust no one, verify everything” into the mainstream, the vast majority of organizations still struggle to enforce it on their own turf.

A recent survey of IT security professionals by CyberRisk Alliance found that while 57% of organizations are receptive to adopting zero trust, only 30% have partially or fully implemented it. Those numbers aren’t exactly the picture of progress. The obstacles that respondents cited — high costs, administrative complexity, lack of guidance — make it clear why zero-trust initiatives so often sputter rather than soar.

Applications of AI in zero trust could mark a turning point, though. Here’s how. 

AI makes zero trust more dynamic, less cumbersome

Zero trust security operates on the basis of continuous verification and authentication. Trust is never extended by default, and every request for access must be vetted to confirm that the user or device requesting it is who or what it claims to be. There is no such thing as “once you’re in, you’re in.”
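To make that concrete, here is a minimal sketch of per-request authorization in Python. The function and policy names (verify_identity, check_device_posture, policy_allows) are illustrative placeholders rather than any specific product’s API; the point is simply that every request repeats the full set of checks.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    resource: str
    credential: str  # e.g., a short-lived token presented with this request

def verify_identity(credential: str) -> bool:
    # Placeholder: in practice, validate the token's signature and expiry on every call.
    return credential.startswith("valid:")

def check_device_posture(device_id: str) -> bool:
    # Placeholder: confirm the device is known, patched, and compliant.
    return device_id in {"laptop-42", "phone-7"}

def policy_allows(user_id: str, resource: str) -> bool:
    # Placeholder: least-privilege policy lookup for this user and resource.
    permissions = {"alice": {"payroll-db"}, "bob": {"wiki"}}
    return resource in permissions.get(user_id, set())

def authorize(request: AccessRequest) -> bool:
    # No implicit trust: identity, device, and policy are re-checked for
    # every request. There is no "once you're in, you're in."
    return (verify_identity(request.credential)
            and check_device_posture(request.device_id)
            and policy_allows(request.user_id, request.resource))

print(authorize(AccessRequest("alice", "laptop-42", "payroll-db", "valid:abc")))   # True
print(authorize(AccessRequest("alice", "unknown-pc", "payroll-db", "valid:abc")))  # False
```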

This policy of continuous verification lends itself well to uses of AI. The expectation is that AI could help shift security from a fixed, static operation to one that is dynamic and adaptable, informed by context and continuous monitoring. For example, AI might adjust user privileges based on real-time risk assessments, automate incident response, and develop scripted actions that evolve as it learns from user activity and threat incidents.
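As a rough illustration of that kind of context-driven decision, the sketch below scores a handful of hypothetical risk signals and maps the result to an access tier. A production system would rely on a trained model and far richer telemetry; the signal names and thresholds here are assumptions made for the example.

```python
def risk_score(signals: dict) -> float:
    # Combine a few illustrative signals into a 0.0-1.0 risk score.
    score = 0.0
    if signals.get("new_location"):
        score += 0.4
    if signals.get("unusual_hour"):
        score += 0.2
    score += min(signals.get("failed_logins", 0) * 0.15, 0.4)
    return min(score, 1.0)

def decide(signals: dict) -> str:
    score = risk_score(signals)
    if score < 0.3:
        return "allow"            # normal privileges, no extra friction
    if score < 0.7:
        return "restrict"         # reduce privileges, require step-up verification
    return "block_and_alert"      # automated response: revoke the session, open an incident

print(decide({"new_location": False, "failed_logins": 0}))                       # allow
print(decide({"new_location": True, "unusual_hour": True, "failed_logins": 3}))  # block_and_alert
```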

Smart application of AI within the zero trust framework could also help address a long-standing criticism of zero-trust initiatives: layering on additional security controls can inadvertently frustrate authorized users just trying to get from point A to point B. By adapting security controls to moment-by-moment context along with historical trends, AI could be trained to find a middle ground where zero trust is enforced and impediments to authorized users are kept to a minimum.
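One way to picture that middle ground is a per-user baseline learned over time, so routine behavior passes without extra prompts while outliers trigger step-up verification. The sketch below uses login hours as a stand-in for a behavioral profile; every name and threshold in it is an illustrative assumption.

```python
from collections import defaultdict

login_history = defaultdict(list)  # user_id -> list of past login hours (0-23)

def record_login(user_id: str, hour: int) -> None:
    login_history[user_id].append(hour)

def needs_step_up(user_id: str, hour: int) -> bool:
    # Challenge only when the request falls outside the user's learned pattern.
    history = login_history[user_id]
    if len(history) < 5:
        return True  # not enough history yet: verify more aggressively
    return not any(abs(hour - h) <= 2 for h in history)

# A user who routinely logs in around 9:00 sails through at 10:00,
# but a 3:00 a.m. request triggers additional verification.
for h in (8, 9, 9, 10, 9):
    record_login("alice", h)
print(needs_step_up("alice", 10))  # False: within the learned pattern, no added friction
print(needs_step_up("alice", 3))   # True: outside the pattern, ask for more proof
```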

AI speeds up threat detection and aids threat intel

In 2023, we saw adversaries throw everything at the wall to see what would stick. Unfortunately, many of those tactics paid off. From crippling takedowns of healthcare networks, to supply chain attacks and mass phishing campaigns, to sophisticated AI-powered social engineering, the year was a wake-up call to the industry that traditional defenses were insufficient to stave off this new generation of threats.

In the CyberRisk Alliance survey on zero trust, respondents said they need more help going forward, and they see an opportunity for AI to step in where other tools have failed. Specifically, they’re most excited to see AI assist zero-trust efforts by identifying breach attempts faster, revealing patterns in user behavior and network activity, and foiling convincing phishing attempts. 
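A simple statistical baseline hints at how the pattern-revealing piece could work: flag activity that deviates sharply from what has been observed for a given user or host. The example below applies a standard-deviation rule to daily outbound data volume; real deployments would use trained models over much richer telemetry, and the numbers here are invented for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], current_mb: float, threshold: float = 3.0) -> bool:
    # Flag the current outbound data volume if it sits more than
    # `threshold` standard deviations above the historical mean.
    mu = mean(history_mb)
    sigma = stdev(history_mb)
    return sigma > 0 and (current_mb - mu) / sigma > threshold

baseline = [120, 135, 110, 142, 128, 131, 125]  # typical daily outbound MB for one host
print(is_anomalous(baseline, 133))  # False: in line with the observed pattern
print(is_anomalous(baseline, 900))  # True: possible exfiltration, worth investigating
```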

The takeaway is that AI combines speed, precision, and depth of data to give organizations a contextually rich understanding of the threats that zero trust practices aim to root out. In the next few years, we may see a marriage of generative AI tools with zero-trust playbooks that, for the first time, brings this long-sought security philosophy within reach.

Daniel Thomas

Daniel Thomas is a technology writer, researcher, and content producer for CyberRisk Alliance. He has over a decade of experience writing on the most critical topics of interest for the cybersecurity community, including cloud computing, artificial intelligence and machine learning, data analytics, threat hunting, automation, IAM, and digital security policies. He previously served as a senior editor for Defense News and as the director of research for GovExec News in Washington, D.C.
