
We need to refine and secure AI, not turn our backs on the technology 

Perils of AI

As Baldur Bjarnason's new book eloquently explains, the concept of "poisoning" AI models brings to light the rising challenges in the realm of AI ethics and security. This complex conversation has become both daunting and intriguing, stirring the murky waters of technological evolution and its subsequent ethical conundrums.

In a broader sense, the prospect of ChatGPT or any AI being compromised or "broken" by someone may sound like a cyber thriller's plot. However, it’s essential to treat this not as a hypothetical Armageddon, but as a powerful call to action to refine and secure an evolving technology.

I’d like to think of Tuesday’s call by OpenAI and others that “mitigating the risk of extinction from AI should be a global priority” as a sign that the industry seeks to work with all stakeholders to make AI’s promise a reality – and as a recognition that AI risk belongs in the same conversation as nuclear war and pandemics.

The crux of these conversations revolves around a core understanding: AI models, much like any software, are susceptible to exploitation. The current predicament brings to mind the early days of the internet, when cyber vulnerabilities were rampant and security strategies were in their infancy. Over the years, the tech community has built robust cybersecurity frameworks to combat threats to traditional software, and there's no reason we can't accomplish the same for AI.

Cybersecurity and AI, while distinct, share many parallels. Both realms face similar challenges, from malicious actors aiming to exploit vulnerabilities to the need to maintain privacy and data integrity. Both demand a multifaceted approach to security, combining technological solutions with ethical and legislative ones.

OpenAI's inclination towards secrecy mirrors similar tendencies in the early days of cybersecurity. In the past, organizations often shrouded their security practices, vulnerabilities, and breaches in secrecy, fearing competitive disadvantage and reputation damage. However, this strategy often proved counterproductive. Cybersecurity thrives on transparency, shared threat intelligence, and community-wide collaboration. Similarly, OpenAI and other AI institutions must embrace openness and proactively collaborate with the broader research community.

Opening up AI models to the community of security researchers is akin to inviting ethical hackers to test the fortitude of cyber defenses. Many organizations, even the most fortified ones, have benefited significantly from the discoveries of "white hat" hackers. More eyes, more perspectives, and more potential solutions: this collective wisdom can be a powerful catalyst for a more secure AI.

Bjarnason's analogy comparing AI models to the "market for lemons" highlights an underappreciated aspect of the AI industry – transparency in training processes and data sources. Much like how we expect transparency from car manufacturers about their components and production lines, we should demand similar openness from AI developers. We need standards akin to food labeling, where AI models come with “ingredient lists” – documentation of data sources, training methodologies, and the parameters that guide their learning.
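
To make the idea concrete, here is one way such an “ingredient list” might be expressed in code – a minimal sketch loosely inspired by model cards and dataset datasheets. The field names and example values are purely illustrative assumptions, not any established standard:

    from dataclasses import dataclass, asdict
    import json

    # Hypothetical "ingredient list" for an AI model. Field names are
    # illustrative assumptions, not an established labeling standard.

    @dataclass
    class DataSource:
        name: str               # e.g., a public corpus or licensed dataset
        license: str            # terms under which the data was used
        collection_window: str  # when the data was gathered, or its cutoff

    @dataclass
    class ModelIngredientList:
        model_name: str
        version: str
        data_sources: list        # list of DataSource entries
        training_methodology: str # e.g., pretraining plus fine-tuning steps
        parameter_count: str      # disclosed scale, even if approximate
        known_limitations: list   # caveats a downstream user should know

        def to_json(self) -> str:
            # Serialize the label so it can ship alongside the model.
            return json.dumps(asdict(self), indent=2)

    # Example label for a fictional model.
    label = ModelIngredientList(
        model_name="example-llm",
        version="1.0",
        data_sources=[DataSource("web crawl snapshot", "mixed/web", "through 2023-04")],
        training_methodology="self-supervised pretraining, instruction fine-tuning",
        parameter_count="~7B",
        known_limitations=["may reproduce biases present in web text"],
    )
    print(label.to_json())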

However, it's crucial to remember that, unlike used cars, AI isn't a static product. It's an entity that evolves and learns over time. Like the evolving threat landscape in cybersecurity, where new vulnerabilities and attack vectors appear continuously, AI's potential defects are not static. They may emerge or evolve based on the data it interacts with and the environments it navigates.

The threat of AI models being poisoned is a reality, much like traditional software or systems being infected with malware or targeted by ransomware. We must tackle these problems head-on with robust countermeasures, such as refined training data sanitization practices, rigorous fine-tuning procedures, and increased transparency in AI operations. As cybersecurity evolves to thwart new threats, so must our approach to AI security.
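
For a sense of what training data sanitization can involve, the sketch below shows two of the cheapest defenses: an allow-list of trusted sources and exact-duplicate removal. It assumes a simplified record format and illustrative source names; a production pipeline would layer provenance checks, outlier detection, and human review on top of this:

    import hashlib

    # Minimal sanitization sketch under simplifying assumptions: each
    # record is a dict with "text" and "source" keys, and only an explicit
    # allow-list of sources is trusted. Source names are illustrative.

    TRUSTED_SOURCES = {"internal-curated", "licensed-corpus"}

    def sanitize(records):
        seen_hashes = set()
        clean = []
        for record in records:
            # Drop records from sources outside the allow-list.
            if record.get("source") not in TRUSTED_SOURCES:
                continue
            text = record.get("text", "").strip()
            if not text:
                continue
            # Drop exact duplicates, a cheap defense against an attacker
            # flooding the corpus with repeated poisoned samples.
            digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
            if digest in seen_hashes:
                continue
            seen_hashes.add(digest)
            clean.append(record)
        return clean

    sample = [
        {"text": "benign example", "source": "licensed-corpus"},
        {"text": "benign example", "source": "licensed-corpus"},  # duplicate
        {"text": "planted trigger phrase", "source": "unknown-scrape"},
    ]
    print(sanitize(sample))  # keeps only the first record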

In essence, AI models are merely tools. They are as beneficial or harmful as humans allow. We, the creators and users of AI, must wield these tools responsibly and ethically.

Regulatory bodies, too, have a significant role to play here. As robust cybersecurity regulations have been instrumental in shaping safer digital environments, intelligent and effective AI regulations are crucial. They should not stifle innovation, but foster a climate of transparency, fairness, and security. The European Union and the United States should spearhead a coordinated effort to create a harmonized AI governance framework akin to their strides in cybersecurity cooperation.

It's an extreme view that we should discard OpenAI's products as inherently defective and scout for alternatives. Like any technology in its nascent stage, AI has been grappling with its share of challenges. However, the solution isn't to discard AI altogether. Instead, we should channel our efforts toward enhancing transparency, refining practices, and engineering ethically sound AI systems.

AI's potential significantly outweighs its teething problems. We must approach it with critical optimism, not fear. This crucial juncture in the AI industry should rally all stakeholders – AI companies, developers, end-users, and regulators – to collaboratively build a safer, more transparent, and accountable AI ecosystem. As we've seen with cybersecurity, it's an arduous journey, but the destination – a world made better through AI – is well worth the effort.

Ani Chaudhuri, chief executive officer, Dasera
