Why we must hit ‘pause’ on generative AI experiments

The non-profit Future of Life Institute recently published an open letter calling for a six-month pause to study the effects of generative AI and how we can innovate more responsibly. This comes as AI technologies such as ChatGPT and DALL-E have provoked global fascination and fear regarding the capabilities of AI on its current trajectory of development. Thousands of tech experts and leaders have signed the open letter. I have joined them.

Generative AI will disrupt the world in profound ways. Unlike traditional AI systems, which are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, and audio. Its potential societal and economic impact is enormous: it may cause hundreds of millions of people to lose their jobs. OpenAI CEO Sam Altman shared this fear on a recent Lex Fridman podcast, where he discussed job losses among programmers and customer service workers. We can compare the generative AI revolution to what the Industrial Revolution did to blue-collar jobs, except this time white-collar professions are more at risk.

Many professions are potentially impacted: art, teaching, journalism, legal, real estate, and software development. In its recent GPT-4 technical report, OpenAI reported that GPT-4 demonstrates human-level performance on a broad range of academic and professional benchmarks, including the Uniform Bar Exam, SAT Math, AP Art History, and AP Biology. It's even more impressive considering that GPT-4 was not fine-tuned for these specific tasks.

In journalism, the effects of generative AI are already underway, as we see job cuts at major newspapers and digital outlets. It's not all bad, though: the technology could foster more investigative journalism and the production of original content, as Mathias Döpfner, CEO of the Axel Springer media group, recently noted. In the software industry, as in cybersecurity, AI will help build software and products much more quickly (think GitHub Copilot). This may put some tech workers at risk, as fewer people will be required to do the job.

The potential societal impact goes far beyond jobs. Generative AI also presents big risks in terms of cybersecurity. In a recent, and very interesting, technical report, Microsoft Research documented the capabilities of GPT-4 through a series of experiments. In one of them, researchers instructed an early version of GPT-4 to plan and execute a cyberattack that consisted of hacking a computer on a local network. It's a use case for the technology that is both fascinating and frightening.

As shown in the report, the main cybersecurity risk will arise when we connect generative AI to tools, so that the AI can interact with the world, learn from the feedback, and adapt. A Linux system is just one example: it bundles many tools for interacting with the world, such as network commands, and its compiler toolchain lets the AI write its own programs. Since generative AI typically excels at Linux and programming, from there the possibilities are theoretically infinite.
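To make the risk concrete, here is a minimal sketch of such a tool connection, assuming a hypothetical generate() function as a stand-in for any LLM completion API (it replays a canned command sequence here so the sketch is self-contained): a loop that executes whatever shell command the model proposes and feeds the output back into the prompt.

```python
import subprocess

def generate(transcript: str) -> str:
    # Hypothetical stand-in for an LLM completion API. It replays a
    # canned command sequence so the sketch runs end to end; a real
    # agent would send the transcript to a model and return its reply.
    scripted = ["uname -a", "ls /", "DONE"]
    step = transcript.count("$ ")
    return scripted[min(step, len(scripted) - 1)]

def tool_loop(goal: str, max_steps: int = 10) -> str:
    # The model proposes a shell command, the loop runs it, and the
    # output is appended to the transcript so the next proposal can
    # adapt to the feedback. Nothing here constrains what may be run.
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        command = generate(transcript).strip()
        if command == "DONE":
            break
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=30
        )
        transcript += f"$ {command}\n{result.stdout}{result.stderr}"
    return transcript

print(tool_loop("inventory this machine"))
```

The danger lies not in this trivial loop but in what a capable model can accomplish through it; the safety burden falls entirely on the model and its guardrails.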

In terms of cybersecurity, the current AI safety protocols are not sufficient. OpenAI has reported that for the development of GPT-4 it worked with 50 experts from domains such as cybersecurity, biorisk, and international security. These experts tested the AI adversarially to ensure that it does not generate harmful content. The process is not bulletproof, however: it remains quite easy to generate harmful content with ChatGPT. And beyond the risk to cybersecurity, in light of the recent pandemic, we have to consider biorisk a major concern.

That's why we need to take time to understand the technology. AI research and development should focus on AI safety, alignment, and interpretability. It's also important that the AI community build robust watermarking systems, so that machine-generated content can be reliably identified.
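As one illustration, here is a minimal sketch of how a statistical text watermark might be detected, loosely in the spirit of the green-list schemes proposed in the research literature; the secret key, the keyed-hash partition, and the green_fraction() helper are illustrative assumptions, not any deployed system's design.

```python
import hashlib

def is_green(prev_token: str, token: str, key: str) -> bool:
    # A token is "green" in a given context if a keyed hash of the
    # (previous token, token) pair lands in the even half. This carves
    # out a pseudorandom half of the vocabulary per context.
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str], key: str = "watermark-key") -> float:
    # Unwatermarked text should hover near 0.5; text from a generator
    # that was biased toward green tokens scores significantly higher.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, c, key) for p, c in pairs) / len(pairs)

score = green_fraction("an example passage to score".split())
```

The generator would bias its sampling toward green tokens at generation time; the point is that detection requires only the key, not access to the model.

AI will impact many industries, and it's difficult to predict whether its long-term effect will be negative or positive, especially from a social and economic point of view. That's a key purpose of the open letter: to give us more runway to make the right decisions, choices that will hopefully protect our societal and economic well-being.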

Sébastien Goutal, chief science officer, Vade
