
The new voluntary AI ‘commitments’ need to strike a balance between regulation and innovation


Late last week, the White House released new voluntary artificial intelligence (AI) guidelines with firm commitments from seven leading tech companies. As part of those commitments, the companies said they would report both appropriate and inappropriate uses of AI technology. The announcement covered policies around bias and also signaled a strong push on cybersecurity.

Here are five points security pros need to know about the newly released voluntary commitments agreed to by the seven tech companies:

  • Publicly reported AI-enabled cyberattacks have not increased significantly: There has been a lot of discussion about adversaries using AI technologies to help develop malware or conduct more sophisticated attacks. Despite this discourse, few such cyberattacks have actually been reported publicly. There has been some reporting of “deepfake” technologies being used for misinformation and scams, and these pose a more immediate threat than AI-based malware or cyberattacks. With this in mind, any specific cybersecurity concerns behind these policies are likely speculative at this point and focused on future readiness.
  • AI companies will have to strike a balance between regulation and innovation: Companies working in the AI market will need to ensure that they can meet the cybersecurity standards outlined by the government in these recently released guidelines. On the other hand, the government must acknowledge that these guidelines may seem stifling to some organizations, particularly those less likely to have the cybersecurity controls that banks and other highly regulated sectors already adhere to. These organizational considerations need to be prioritized so that the guidelines do not have a negative impact on how development teams work.
  • Safeguarding AI development must become a top priority: The “testing” methodology the government refers to in the release likely covers both the internal security of AI developers and the technology’s broader societal impact. There’s a lot of potential for privacy issues arising from the use of AI technologies, especially large language models (LLMs) such as ChatGPT. On May 23, OpenAI disclosed a vulnerability in ChatGPT that inadvertently exposed other users’ conversation titles, which has profound data security implications for users of these LLMs. More generally, the government may ask AI companies to conduct a risk assessment of the societal impact of AI-enabled technologies before releasing them.
  • The industry can’t really guarantee that AI technologies will only be used for defensive purposes: With LLMs, guardrails around the content of prompts appear to have been in place since the early stages of development, as anyone who has asked a public LLM for information that attackers could use nefariously will have noticed. However, much like with any computing system, there’s always a way for hackers to circumvent these protections, and the typical cat-and-mouse game of identifying and remediating weaknesses follows. Ensuring exclusive defensive use of such technologies remains impossible.
  • AI companies must foster public awareness around validating information sourced from the internet: LLMs often deliver incorrect information in an authoritative manner, a phenomenon described as hallucination. As a result, users of these LLMs can believe they have intimate knowledge of a subject area even when they have been misled. Users need to approach the results of their prompting with a hefty dose of skepticism and validate them against an alternative source, and government guidance to users should emphasize this until the results are more reliable.

For the voluntary system to work, we need to create an incentive for companies to sign up. It may take the form of a certification proving adherence to the safeguards and cybersecurity measures outlined by the government. We also need a way for the public to know which companies adhere and are viewed as “trusted.” At the same time, any guidelines must stay proportionate to the risk and size of the business: smaller companies will likely not have the same resources as larger ones, and holding them to identical requirements could stifle innovation and competition. Any system has to walk a tightrope; we don’t want well-intentioned companies that participate to have to compromise on innovation.

That’s why last Friday’s meeting at the White House was such a good first step. There’s a lot of work ahead on the part of the government and the industry to iron out the details.

James Campbell, co-founder and CEO, Cado Security
