Application security, Security Program Controls/Technologies, DevSecOps

How to deploy DAST to manage AI risks

AI tips

Generative artificial-intelligence models such as ChatGPT and GitHub Copilot can help write software code, but doing so presents new challenges to developers and application-security specialists.

Organizations must use AppSec testing methods like SCA, SAST and DAST to vet AI-generated code for errors, vulnerabilities and other hidden issues. It's also essential to have a DevSecOps culture in place that can quickly spot and remediate AI-created problems and provide the skilled human supervision necessary to use AI coding safely.

The potential pitfalls of AI code generation

We've gone over the problems posed by AI-created software at length, but here's a quick recap.

AIs create error-filled and insecure code. Forty percent of programs written by GitHub Copilot included at least one of MITRE's top 25 most common vulnerabilities in a 2022 New York University study. Stack Overflow has banned ChatGPT-generated code for being too buggy.

Invicti researcher Kadir Arslan found that Copilot made rookie mistakes, including leaving web pages open to SQL injection and using easily cracked hashing algorithms.

"You have to be very careful and treat Copilot suggestions only as a starting point," Arslan wrote in an October 2022 Invicti blog post.

AIs can be tricked into revealing secrets or performing unethical tasks. I've used specially worded instructions, or "prompts," to get around ChatGPT's internal restrictions and make the AI create a phishing email and write basic malware. More ambitious "prompt injections" fool the AI into revealing other users' queries, or embed code within prompts so that the code can be executed.

Prompt injection isn't always necessary. Microsoft's Bing AI chatbot glibly told reporters that its secret code name was "Sydney." Samsung staffers fed proprietary data into ChatGPT when seeking fresh solutions to technical problems, not realizing that everything the AI ingests becomes part of its training set. (ChatGPT parent company OpenAI now lets you switch off your query history to prevent this.)

AIs may recreate proprietary code or malware. A lot of undesirable data becomes part of the AI training set. That's fine in the long run, because the AI needs to learn to tell good from bad.

Yet we've already seen examples of GitHub Copilot reproducing GPL code, and copyrighted code is just as susceptible to being "recreated" by an AI. Copyrighted code in your software might expose you to litigation and licensing fees; GPL code might force your whole project to become open-source.

There's also the risk that AI-generated code may contain malware reproduced from its training set, either ingested accidentally or deliberately fed into the system by malicious actors.

"As the adoption [of AI coding] creeps up," said Invicti Chief Technology Officer and Head of Security Research Frank Catucci in a recent SC Magazine webinar, "the focus or the bullseye, the target, if you will, will be created on perhaps poisoning the well or poisoning the code that comes from these training datasets."

AIs "hallucinate" facts and can be exploited accordingly. Large-language-model AIs will make up facts and sources to make their replies sound more authoritative, a phenomenon known as "AI hallucination."

This sounds amusing, but AI hallucinations can have real-world consequences. Invicti researchers found that when given a coding task, ChatGPT reached out to online open-source code libraries that didn't exist.

To see if other instances of ChatGPT might also be calling out to these non-existent libraries, the Invicti researchers placed garbage code in a directory using the same name and online location as one of the fake libraries — and got several hits over the next few days.

That indicates that ChatGPT coding hallucinations can be reproduced. It also creates an opportunity for malefactors to poison legitimate projects by "squatting" on code libraries that AIs believe should exist.

"We had a library recommended that did not exist," said Catucci. "We were able to create one and find hits and traffic being directed to it, obviously with benign code, but it could have very well been malicious."

How SAST, DAST and SCA help mitigate AI coding threats

As with all coding bugs and vulnerabilities, the best way to catch errors made by AI-assisted code generation is to use automated tools that apply methods such as software composition analysis (SCA), static application security testing (SAST) and dynamic application security testing (DAST) during the software development life cycle (SDLC).

SCA should flag bits of code that might be someone else's intellectual property. SAST will examine the written code itself for vulnerabilities and other mistakes, although you will need a separate SAST tool for every coding language you use.

As soon as elements of the project can be executed as software, then DAST will monitor inputs and outputs for signs of security flaws. A hybrid approach, known as interactive application security testing (IAST) combines elements of SAST and DAST to examine code while it's running, probing the application from both inside and outside.

Organizations have a variety of automated scanning tools they can use to implement DevSecOps, with each providing a different array of functions. (Invicti.com)

"There are dangerous gaps in security coverage without DAST in place," says Patrick Vandenberg, director of product marketing at Invicti, in an April blog post. "You must have SAST, SCA, and DAST working together to improve coverage and find more vulnerabilities."

Modern DAST tools do more than just watch the "black box" of a running application. They can automatically analyze potential flaws and even test them to weed out false positives, a process that Invicti calls "proof-based scanning."

They also expand the overview of the potential attack surface, looking into web and cloud assets to maximize visibility, and can integrate with continuous integration/continuous delivery (CI/CD) tools and include compliance modules. Because modern DAST tools integrate aspects of SAST, they can be used to catch errors earlier in the SDLC than legacy DAST tools.

"There might be something that's developed in OpenAI that you're not going to find with SAST or SCA," Catucci said during the SC Magazine webinar. "There might be a vulnerability in there that's only present when you're essentially having this application spin up ... You would never know that with more static tests, whereas you would know that with the dynamic test."

Creating a DevSecOps culture

More essential than using the right tools, however, is creating the right culture. Just as an organization might merge the missions of software developers and IT operations staffers were merged to create DevOps, security practitioners need to be added to the mix to create DevSecOps.

Developers may be reluctant to work with security staffers who want to pick over code and (purportedly) slow down projects. That's why it helps to designate one member of each development team as a "security champion" who can liaise between the two groups — and ease developers into adopting security best practices.

"A security champion isn't someone who wins hacking contests (though that's certainly a plus) but one who champions the security message," wrote Invicti's Meaghan McBee in an August 2022 blog post. "They work daily to relay essential updates, surface and resolve common pain points, lean in on threat and vulnerability management, and provide more clarity on security needs to everyone from leadership down."

Automated tools that provide continuous testing and scanning will take the load off both developers and security personnel, minimizing friction between the teams and letting them focus on their jobs.

"When security tests are automated and run on every check-in, developers can find and fix issues much more efficiently," said Invicti Distinguished Architect Dan Murphy in a February 2022 blog post. "The goal is to treat the introduction of a critical security vulnerability just like a code change that causes unit tests to fail — something that is fixed quickly, without requiring the overhead of meetings and internal triage."

Last of all, but perhaps most importantly, you don't want to automate too much, especially if you're using AIs to help your developers code. Let humans supervise both the coding process and the testing process. You want to know what the AI is up to, not least of all because the AI itself may not know.

"AI-based code generators are likely to become a permanent part of the software world," wrote Catucci recently. "In terms of application security, though, this is yet another source of potentially vulnerable code that needs to pass rigorous security testing before being allowed into production."

Paul Wagenseil

Paul Wagenseil is custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms and Conditions and Privacy Policy.