AI has gone mainstream – so let’s innovate and regulate

About two months ago, heavyweights in the tech industry such as Elon Musk and Steve Wozniak signed an open letter calling for a pause on AI research and development. The letter drew signatures from more than 1,000 tech leaders and researchers.

Despite the calls for a pause, the letter clearly has not stopped AI’s progression. In fact, it hasn’t even stopped some of its signatories: Elon Musk has since announced the development of TruthGPT, his own ChatGPT rival. Yet with many researchers, tech experts, and consumers concerned, the question remains: how can we ensure that generative AI does not pose a risk to society, humanity, and security?

Can we actually pause the development of generative AI? The short answer is no. In May, OpenAI CEO Sam Altman testified before Congress to discuss just that, calling for widespread regulation. AI has been around for many years and already powers much of what we do, but until ChatGPT was released late last year, few people really grasped the transformative power of generative AI. Now that companies and individuals are beginning to understand its potential and the barriers to access have fallen, AI adoption and research have accelerated, along with talks about governing its use. Analyzing data and offering rule-based responses were the training wheels of the AI industry; now that we have advanced beyond them, there is no going back on that progress.

Furthermore, attempting to pause AI progress carries consequences of its own, especially from a security perspective. If overseas companies continue generative AI development while U.S. work stalls, American users would eventually turn to foreign versions as those tools pull ahead. In turn, this could put potentially sensitive information in the hands of foreign companies subject to more lenient security regulations, opening up additional privacy and identity risks in our country.

What are the risks of AI?

To truly understand why some experts are calling for a pause on development, it’s important to first understand the risks that generative AI presents. From a cyber perspective, hackers are already using generative AI tools to create more realistic phishing attacks that mimic a brand’s voice and tone, and to translate copy into several languages more easily, making the attacks harder to identify and connecting hackers with global audiences.

Attackers can also use AI technology to create deepfakes. Telling the difference between real and artificial, from information to videos to music and beyond, will only get harder for consumers. From a macro view, this technology can further erode trust on the internet and contribute to the spread of misinformation. Internet users will constantly have to ask how “real” the image, video, or information they find actually is. Recently, an AI-generated image of the Pope in a white puffer coat and bejeweled crucifix went viral, tricking thousands. That example was fairly benign, but attackers can use the same technology to create false images, video, or audio clips that ruin the reputations of high-profile individuals.

What benefits does AI offer?

While there are risks, generative AI has game-changing potential. Personal tasks are already moving to AI, with some people using generative AI to create meal plans, travel itineraries, and business schedules, freeing them to focus on higher-priority work.

Generative AI can also act as a low-code or no-code tool. Organizations can use it to build completely new products that are described by humans rather than written by developers, dramatically lowering the cost and technical barriers to entry. Some would argue that this also lets threat actors quickly generate malware. However, the same capabilities can make applications more secure and harder for threat actors to breach. We need to leverage AI to fight AI.

Generative AI can analyze code for security vulnerabilities more quickly and efficiently, freeing security experts from tedious, time-consuming review work. Technology companies such as Microsoft are already building AI assistants into security applications to help product, development, and security teams detect, identify, and prioritize threats. Ideally, this technology creates fewer vulnerabilities, not more, because AI catches and fixes issues before threat actors can exploit them. And because generative AI learns as humans feed it data, security teams can fold it into their development process to produce code that grows increasingly secure.
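To make the idea concrete, here is a minimal sketch of how a team might wire a generative AI model into a code review step to flag potential vulnerabilities. The model name, prompt, and vulnerable snippet are illustrative assumptions, not any particular vendor’s product or workflow.

```python
# Minimal sketch: asking a generative AI model to review a code snippet
# for security issues. Model name and prompt are illustrative assumptions;
# any comparable LLM API could fill the same role.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical snippet under review: string-concatenated SQL (injection risk).
SNIPPET = '''
def get_user(conn, username):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List likely vulnerabilities "
                    "in the submitted code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)

# The model should flag the SQL injection risk and recommend parameterized queries.
print(response.choices[0].message.content)
```

A step like this could run in a CI pipeline alongside traditional static analysis, with human reviewers handling the findings it surfaces.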

We are on the cutting edge of this technology, and we still do not fully know what it can do. Like most technology, whether AI proves a benefit or a detriment to society depends on how it gets used.

We can’t take generative AI back, and threat actors and cybercriminals won’t stop using AI because of a ban. We must continue to develop the technology and use it for positive purposes. While AI poses challenges to society, such as potential job losses and the disruption they could cause, we can pair these technical advances with sensible oversight: we’ve done it many times before.

Will LaSala, Field CTO, OneSpan
