
Four priorities to set for generative AI in the year ahead


Generative AI has evolved at a breakneck pace, leaving organizations to navigate its complex landscape without many guardrails. But recently released survey data from ExtraHop offers valuable insights into the security concerns and strategies of IT and security leaders that could help in the year ahead. 

We surveyed 1,250 IT and security leaders worldwide to understand their plans for securing and governing the use of generative AI tools inside their organizations. Technology and security leaders find themselves at a pivotal point: many still have catching up to do to ensure that their implementations are secure and that risks are mitigated effectively.

Based on the data, here are four priorities security teams should set for generative AI in the coming year:

  • Understand generative AI risks: Interestingly, the survey found that security isn't the primary concern for IT and security leaders when it comes to generative AI. The data shows that 40% are more worried about receiving inaccurate or nonsensical responses than about issues directly related to security. Still, 36% cited exposure of customer and employee personally identifiable information (PII), 33% cited exposure of trade secrets, and 25% cited financial losses. Despite heavy adoption over the past year, it’s possible that IT and security leaders aren’t prioritizing security threats at the moment. As adoption and AI advancements continue, we expect this to change and new products to emerge that give enterprises the visibility they need into generative AI use. Resources permitting, companies may also consider building generative AI tools in-house to sidestep the privacy concerns of using a public tool.
  • Look beyond bans: OpenAI's ChatGPT was made available to the public in November 2022, and within four days of its release it had already garnered more than 1 million users. It was not surprising, then, that 73% of respondents reported that employees in their organization use generative AI tools or LLMs sometimes or frequently, and that number will almost certainly continue to rise. Nearly one-third of organizations opted for outright bans on generative AI tools, yet only 5% of respondents report that their employees never use them, which suggests bans are far less effective than initially thought. The practical potential of this emerging technology class is too great to ignore, and organizations will become more open to generative AI as the tools grow more useful over time.
  • Rely on authoritative government and industry sources: The survey reveals that a large majority (90%) of respondents want government involvement in addressing generative AI security. Sixty percent advocate for mandatory regulations, while 30% support government standards that businesses can adopt at their discretion. These responses point to the need for a modular framework that ensures the responsible use of generative AI going forward. The establishment of the National AI Advisory Committee (NAIAC) in the U.S. and the Biden administration’s landmark AI Executive Order are clear first steps toward what such a framework may look like, but we should also look to companies such as OpenAI and other enterprises to develop guiding principles around AI. As adoption spreads to more applications in the coming year, more businesses will call for some form of guidance, whether out of safety concerns or simple caution.
  • Get basic cyber hygiene in check: The final takeaway highlights a disconcerting disparity between respondents' confidence in their ability to defend against AI threats and the actual state of their security practices. Nearly 82% express confidence in their organization's defense capabilities, and 74% plan to invest in generative AI security measures in the near future, yet fewer than half have any technology in place to monitor generative AI tool usage. Furthermore, only 46% report having policies governing acceptable use of generative AI tools, and a mere 42% train users on safe use.

These findings point to a security gap that leaves organizations vulnerable because they cannot monitor compliance with policies or ensure responsible AI use. Most organizations will incorporate generative AI into their processes, and just as security education and hygiene have improved over the last few years, we’ll see the same happen here.

While we need technology to monitor generative AI use in some capacity, it’s equally important for employees to fully understand how these tools work, the risks they pose, and how to avoid damaging misuse. Organizations should also consider creating a cross-functional task force, with representatives from IT, security, HR, legal, risk management, compliance, and other functions, to explore use cases for the technology and serve as the source of training.

Organizations must grapple with the dynamic nature of generative AI, the shortcomings of bans, the desire for government guidance, and the necessity for improved basic security measures. As generative AI continues to shape the future of technology, these takeaways offer essential insights for organizations seeking to navigate this challenging terrain and protect their interests in an increasingly AI-driven world.

Raja Mukerji, co-founder and chief scientist, ExtraHop
