
AI coding tools make software more vulnerable, but there’s reason for hope


While artificial intelligence (AI) has been around for quite some time, the technology has exploded in popularity and use cases over the past year, largely because of the widespread adoption of ChatGPT, which made powerful AI technology available to the general public. Soon after, a variety of GPT-based coding tools were released, with the potential to multiply developer productivity.

As our world grows ever more digitally reliant and connected, the integrity and security of software become much more critical. In the face of today's growing cyber threats, what are the implications of leveraging AI technology to assist developers in writing code? Research has already uncovered some interesting findings.

AI’s impact on secure coding

Stanford University recently published research titled "Do Users Write More Insecure Code with AI Assistants?" The study delivered a few key takeaways that I'd like to dig into:

  • Developers who use AI assistants to code produce less secure code than those who don't.
  • Developers using AI assistants to write code believe their code is more secure than if it were manually written.

Need for speed

On one hand, the use of AI assistants for coding has undoubtedly lightened the load for developers. Much as AI reduces manual work in other industries, the technology helps developers write and ship code quickly. This development speed lets organizations work more efficiently and increase developer productivity.

Over the years, technology and organizational design alike have shifted with increasing development speed in mind. Cloud native technology, DevOps methodology, and continuous integration/continuous delivery (CI/CD) pipelines have evolved as software development has modernized to build and deploy software faster. Now, AI helps developers write new code faster than ever before.

With speed comes risk

Adding AI to the mix to help reduce the workload of developers and improve development speed sounds great, but it doesn't come without added security risk. Security testing has become a critical part of software development, but it's often overshadowed and deprioritized to stay on track with release cycles. This takes a toll.

According to recent ESG research, 45% of software gets released without going through security checks or tests, and 32% of developers skip security processes altogether. The question now becomes: How will AI impact software security?

AI makes code less secure

Stanford University's research found that AI coding assistants are having exactly the impact security professionals worried about. Developers using AI assistants produce less secure code than developers who don't. Meanwhile, developers using AI assistants tend to think they produce more secure code, leading to a false sense of security.

These findings are not too surprising. AI coding assistants are driven by prompts and operate with little contextual or project-specific understanding. The industry hopes these tools will improve over time. Either way, the findings highlight the critical need to make sure code gets properly tested before it's shipped.

With the use of AI coding assistants, the software development landscape has evolved yet again. As AI-written code becomes more common, and as malicious actors leverage AI to identify vulnerabilities more efficiently, the need for scalable, powerful software testing tools only grows.

When coding methods evolve, so too must testing methods. Modern software security methods should be highly automated and efficient at generating test cases. By enhancing existing testing methods with self-learning AI, we can generate test cases automatically, using information about the system under test, so that testing improves with each run.

By leveraging self-learning AI during testing, we can reduce the manual workload while creating intelligent test cases that humans would never have thought of. By integrating this form of testing into CI/CD, a scalable testing approach emerges that can keep pace with the volume of code AI coding tools produce.
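
The article doesn't prescribe a specific tool, but to make the idea concrete, here is a minimal sketch of a coverage-guided fuzz target written with Google's open-source Atheris fuzzer for Python. The parse_config() function is a hypothetical stand-in for whatever code, AI-written or not, you want to exercise; the feedback loop, in which new code coverage guides the next round of generated inputs, is one way "getting better with each test run" looks in practice.

    # Minimal coverage-guided fuzz target using Google's open-source Atheris
    # fuzzer (pip install atheris). parse_config() is a hypothetical stand-in
    # for the code under test.
    import sys

    import atheris

    with atheris.instrument_imports():
        import json  # instrumented so coverage feedback can guide input generation


    def parse_config(raw: str) -> dict:
        """Hypothetical function under test; swap in your own (AI-written) code."""
        cfg = json.loads(raw)
        if not isinstance(cfg, dict):
            raise ValueError("config must be a JSON object")
        return cfg


    def test_one_input(data: bytes) -> None:
        # Turn the raw fuzzer-generated bytes into structured input for the target.
        fdp = atheris.FuzzedDataProvider(data)
        text = fdp.ConsumeUnicodeNoSurrogates(4096)
        try:
            parse_config(text)
        except ValueError:
            pass  # documented, expected failure; anything else is a finding


    if __name__ == "__main__":
        atheris.Setup(sys.argv, test_one_input)
        atheris.Fuzz()  # mutates inputs; new coverage feeds back into generation

In a CI/CD pipeline, a target like this would typically run for a fixed time budget on every commit or pull request, with the corpus of interesting inputs persisted between runs so each execution builds on what earlier runs discovered.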

Despite the sobering findings of the Stanford study, this approach gives reason for hope: By applying AI to both coding and testing, it's possible to reap the benefits of AI coding assistants without making concessions to security or efficiency.

Coding assistants are here to stay, and hopefully they will improve over time. Either way, we must evolve and adapt our testing methods for the sake of security. Only then can we truly multiply development output in positive ways. Of course, the most important element in this equation will always be keeping humans deeply involved in the process.

Sergej Dechand, co-founder and CEO, Code Intelligence
