
How AI and DAST can mitigate security risks

Artificial intelligence is suddenly scary now that the mind-blowing abilities of ChatGPT and similar AI programs have been revealed to the world. It's clear that the availability of cheap AI will make it easier for criminals and other miscreants to launch successful cyberattacks.

But there's a silver lining: AI will also make software development more secure. It will be a boon to dynamic application security testing (DAST) and other forms of security testing. The combination of AI and DAST will help developers test and validate open-source software as well as their own code, further automating testing tasks and sifting through data and false positives faster than any human could.

Why AI is scary for cybersecurity

As SC Magazine's Derek B. Johnson reported in December 2022, many cybersecurity practitioners didn't take AI seriously until the OpenAI company made its ChatGPT interface available to the public at the end of November.

It quickly became evident that ChatGPT and related programs like OpenAI's Codex and GPT-3 could write very good phishing emails, craft convincing social-media campaigns and fake-news stories, write basic malicious code on behalf of unskilled attackers and even rapidly generate polymorphic malware.

Casey John Ellis, chief technology officer, founder and chair of Bugcrowd, told Johnson that the demonstrated malicious abilities of ChatGPT created an "oh sh*t" moment for cybersecurity researchers.

"It's frankly influenced the way that I've been thinking about the role of machine learning and AI in innovations," Ellis said. "Technology disrupts things, that's its job. I think unintended consequences are a part of that disruption."

A December 2022 research paper from Traficom, the Finnish government's transportation and communications agency, forecast widespread malicious use of AI in the near future, beginning with nation-state threat actors and trickling down through cybercrime groups to semi-skilled individual attackers.

"We predict that AI-enabled attacks will become more widespread among less skilled attackers in the next five years," the paper said. "As conventional cyberattacks will become obsolete, AI technologies, skills and tools will become more available and affordable, incentivizing attackers to make use of AI-enabled cyberattacks."

The solution, Traficom argued, would be for defenders to incorporate AI into their own tools, even though that would trigger an AI arms race.

"AI will enable completely new attack techniques which will be more challenging to cope with, and which will require the creation of new security solutions [and] completely new defense techniques," the paper said. "New security solutions will have to leverage AI advances before attackers do."

How AI can help application-security testers

Among those defense techniques will be the use of AI to check code and applications for vulnerabilities. Properly trained AIs can become a huge asset for application-security testers and bug hunters.

For example, modern applications draw an enormous amount of code from open-source software libraries that may be poorly maintained or imperfectly analyzed. The Log4Shell flaw in the open-source Log4j Java utility, one of the worst software vulnerabilities ever discovered, lay dormant for eight long years until it was revealed in December 2021.

Nearly as bad were the Heartbleed and Shellshock open-source flaws in 2014. Similarly serious vulnerabilities may be found and fixed more quickly once AI bots start regularly scanning open-source libraries.

Even ChatGPT, which is not as thorough as AI interfaces designed to work with code, can effectively check code for errors and vulnerabilities. Optimized programs like OpenAI's Codex should do an even better job. (Just don't count on AI to write perfect code: A New York University study found that 40% of AI-generated code contained vulnerabilities, a finding that was replicated by Invicti's Kadir Arslan.)
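
To see what that looks like in practice, consider the rough sketch below, which asks a general-purpose model to flag potential weaknesses in a code snippet. It assumes OpenAI's Python client (the post-1.0 interface) and an API key in the environment; the model name and prompt are placeholders, and the model's answer should be treated as a hint for a human reviewer, not a verdict.

```python
# A minimal sketch of LLM-assisted code review.
# Assumptions: openai>=1.0 client installed, OPENAI_API_KEY set in the
# environment, and the model name is only a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(cursor, username):
    # naive query built by string concatenation
    cursor.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cursor.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; substitute whatever model you use
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. List likely security flaws "
                    "in the following code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)

# The reply is advisory only; a human still decides what counts as a finding.
print(response.choices[0].message.content)
```

Run against the snippet above, a capable model should call out the SQL-injection risk of building queries by string concatenation, which is exactly the kind of quick first pass described here.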

The obvious next step is to add AI to DAST and other forms of application-security testing. Various levels of automation are already a big part of DAST, as Invicti's Meaghan McBee explained in a January 2023 blog post.

"Organizations can save time and resources by automating previously manual processes for initiating security tests," McBee wrote. "This allows security teams [to] focus on higher-value tasks, such as analyzing result trends, investigating more advanced vulnerabilities, and implementing measures to prevent the introduction of new vulnerabilities down the road."

It's not hard to see how a properly trained AI will be able to scan static code for flaws, as well as run applications and analyze their behavior and outputs. If an AI is allowed to learn from its own experiences, its performance should only get better over time.

"When security tests are automated, such as with static analysis and software composition analysis being run on every check-in, developers can find and fix issues much more efficiently," said Dan Murphy, a distinguished architect at Invicti, in the same blog post.

Pairing DAST with AI in the form of deep learning will be especially helpful for sorting out false positives, argued the authors of a paper entitled "Optimising Vulnerability Triage in DAST with Deep Learning" that was presented at the 15th ACM (Association for Computing Machinery) Workshop on Artificial Intelligence and Security in November 2022.

"Given the amount of time and cognitive effort required to constantly manually review high volumes of DAST results correctly, the addition of this deep learning capability to a rules-based scanner creates a hybrid system that enables expert analysts to rank scan results, deprioritize false positives and concentrate on likely real vulnerabilities," the paper's abstract stated. "This improves productivity and reduces remediation time, resulting in stronger security postures."

Don't leave out the human element

The paper emphasized that AI should not replace humans in application-security testing but work alongside them to achieve maximum results — a point that Invicti's McBee also stressed.

"Automation in security isn't about replacing humans entirely; it's there to make testing and detection easier and faster at the most critical decision points," she wrote in a November 2022 blog post.

"Even if it works at peak efficiency (and that’s a big if)," McBee added, "technology simply cannot replace experts in DevSecOps teams when it comes to making vital decisions and taking action. You need people with the know-how and necessary skills to make calls about serious vulnerabilities, breach attempts, and potential exploits."

Incorporating AI into DAST and other methods of application-security testing is not a question of if, but when, as indicated by trends in the overall IT security market. An IBM study found that use of AI and automation in IT security rose by 18.6% between 2020 and 2022, with 70% of organizations surveyed using AI and automation by the end of that period.

Furthermore, organizations that characterized their AI/automation tools as "fully deployed" said that their average breach-remediation costs were less than half of those incurred by organizations without security AI or automation, and that their data-breach lifecycle was 74 days shorter.

"Automation is no longer a nice-to-have but an essential part of your overall security mix, speeding up and scaling security testing to the level of modern development," McBee wrote in November 2022. "Organizations and entire nations alike can no longer afford to neglect the pressing need to marry automated technology with human experience."

Paul Wagenseil

Paul Wagenseil is custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.
