Note to CISOs: How to stop ‘shadow’ AI

Embrace AI

As with any new innovation, generative AI has stirred both fear and optimism among experts and the public alike.

The main fears about generative AI start with the potential for erroneous results, which could harm particular groups of people or fuel massive disinformation campaigns. Tech experts and scientists also fear a future in which AI automates jobs and processes that have historically been performed by humans. Others fear that generative AI will lead to the demise of jobs altogether, and eventually of humans, too.

For the enterprise, C-suites across every industry are afraid that employees and stakeholders will feed private data to public large language models (LLMs), exposing the organization’s most sensitive information for the entire world to see.

With the rise of tools such as ChatGPT, it’s quickly becoming second nature to “ask AI” for help accomplishing goals. But LLMs are only as good as the data they train on, and today that’s usually publicly available data. As a result, to get company-specific answers, users include company-specific, and potentially private, context in their questions. That private context can leak the moment it gets sent to a public LLM. Samsung has already experienced this, and the incident forced the company to take precautionary steps and implement new security policies to control generative AI usage within the organization.
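To make the leak pattern concrete, here is a minimal Python sketch. The file name, model, and endpoint are all hypothetical; the point is that the proprietary document travels verbatim to a third-party service.

```python
# A hypothetical example of the leak pattern described above: an employee
# pastes an internal document into a prompt bound for a public LLM API.
import requests

internal_doc = open("q3_roadmap_confidential.txt").read()  # hypothetical file

payload = {
    "model": "some-public-llm",  # illustrative model name
    "messages": [
        # Company-specific context pasted straight into the prompt:
        {"role": "user",
         "content": f"Summarize the risks in our product roadmap:\n{internal_doc}"},
    ],
}

# Once this request leaves the network, the organization no longer controls
# how the provider stores, logs, or reuses the contents.
requests.post("https://api.example-llm.com/v1/chat/completions",
              json=payload, timeout=30)
```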

CISOs and CIOs must now figure out how to embrace emerging AI innovations and put their readily available proprietary data to work, while maintaining control and security over that data so these tools deliver as much benefit to their organizations as possible. How we answer this question could very well determine the future of the tech industry as a whole.

Saying ‘No’ to AI

In the race among today’s leading tech innovators for generative AI dominance, will the cybersecurity champions stay ahead of those who would use generative AI to steal and abuse private data? That depends largely on an organization’s willingness to embrace the technology: when a company rejects AI adoption outright, users find ways around the prohibition, and the result is shadow AI.

When innovation hits the market in almost any industry, people will go to great lengths to get their hands on it. In the early 2000s, many organizations were reluctant to adopt Wi-Fi for fear that it would undermine their security efforts. But users wanted the convenience of wireless devices, and they often deployed wireless access points without the IT department's knowledge or consent, putting the entire organization at risk.

The rise of shadow IT taught us that users will find a way to leverage new technology, with or without IT's approval. Thus, companies that deny their teams the opportunity to interact with AI – with proper controls – will similarly find their staff exploring covert avenues to do so, leaving ample room for threat actors to manipulate and compromise sensitive data in ways the enterprise can’t control or predict.

There’s even more value in giving employees access to generative AI innovations: democratizing cybersecurity, code development, and other deep-tech practices through knowledge sharing, so teams can do more, faster; taking on menial tasks to free up humans for more productive and creative work; and personalizing customer, employee, and end-user experiences. But to harness the full power of both proprietary data and public LLMs, we must put strong principles in place to ensure generative AI tools are deployed as effectively and securely as possible.

How to stay optimistic

In the world of B2B tech, implementing AI within the organization begins with one important step: being meticulous about which LLMs to employ, and ultimately choosing ones that offer the level of protection, accuracy, and control needed to meet business standards.

Businesses should choose an LLM based on how much the model itself and its provider prioritize the security and protection of customer data. This is the first and most important consideration, as security and data privacy should be top of mind for stakeholders and every team across the organization. Being clear on an LLM’s security measures helps ensure that the data sent to it, whether as questions, prompts, or context, will not be shared publicly or used in future pre-training or fine-tuning of new models.
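One way to make these procurement criteria actionable is to encode them as a checklist that security teams apply to every candidate provider. The sketch below is a hypothetical Python example; the field names and the 30-day retention threshold are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

# Hypothetical vendor-assessment checklist encoding the criteria above.
# Field names and thresholds are illustrative, not an industry schema.
@dataclass
class LLMProviderAssessment:
    name: str
    excludes_prompts_from_training: bool   # contractual, not just a default
    encrypts_data_in_transit_and_at_rest: bool
    offers_private_or_vpc_deployment: bool
    retention_window_days: int             # how long prompts are stored

    def meets_baseline(self) -> bool:
        """True when the provider satisfies this organization's floor."""
        return (self.excludes_prompts_from_training
                and self.encrypts_data_in_transit_and_at_rest
                and self.retention_window_days <= 30)  # assumed policy ceiling

candidate = LLMProviderAssessment(
    name="example-provider",
    excludes_prompts_from_training=True,
    encrypts_data_in_transit_and_at_rest=True,
    offers_private_or_vpc_deployment=False,
    retention_window_days=30,
)
print(candidate.meets_baseline())  # True under these illustrative values
```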

Organizations should also consider whether the data powering the LLM is accurate, reliable, trustworthy, and unbiased. Since generative AI and LLMs hit the market, one of the biggest criticisms has been the potential bias and overall inaccuracy of the data they draw from. Understanding where the data that powers an LLM comes from helps teams assess whether it can meet the enterprise’s goals. A model that lacks the right context or pulls from untrustworthy sources can deliver answers that are inaccurate or misleading.

Once a company selects the right LLM for its use case and security requirements, it will need a solution for controlling who has access to the generative AI capabilities. This product should offer three key capabilities. Transparency ensures that all data sent to an LLM gets shown to the user, preventing inadvertent data loss. Exclusion and anonymization let admins control what data is sent outside the organization, even allowing them to anonymize important data. Auditing and compliance controls ensure that information sent to the LLM and the answers retrieved are governed in accordance with the company’s policies.
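Here is a minimal sketch of how those three capabilities might fit together in a gateway that sits between users and a public LLM. The exclusion patterns, helper names, and the send_to_llm stub are all hypothetical; real deployments would use far richer detectors and policy engines.

```python
import json
import re
import time

# Hypothetical exclusion rules: patterns an admin never allows to leave
# the organization unmasked. Illustrative only.
EXCLUSION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(sk|key)-[A-Za-z0-9]{16,}\b"),
}

def anonymize(prompt: str) -> str:
    """Exclusion/anonymization: mask flagged data before it leaves the org."""
    for name, pattern in EXCLUSION_PATTERNS.items():
        prompt = pattern.sub(f"<{name}-redacted>", prompt)
    return prompt

def audit(user: str, sent: str, received: str) -> None:
    """Auditing: append-only record of what was sent and what came back."""
    with open("llm_audit.jsonl", "a") as log:
        log.write(json.dumps({"ts": time.time(), "user": user,
                              "sent": sent, "received": received}) + "\n")

def gateway(user: str, prompt: str, send_to_llm) -> str:
    sanitized = anonymize(prompt)
    # Transparency: the user sees exactly what will be sent externally.
    print(f"Sending to LLM on behalf of {user}:\n{sanitized}")
    answer = send_to_llm(sanitized)  # stub for the actual provider call
    audit(user, sanitized, answer)
    return answer

# Usage with a stubbed provider call:
gateway("alice", "Draft a reply to jane@example.com about key-abc123",
        send_to_llm=lambda p: "stubbed answer")
```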

Ironically, to quell the fear that generative AI will replace the need for humans, we need humans to work collaboratively with and for AI technology, not against it. By embracing AI adoption within an organization (through a model that safeguards data; with data that’s reliable, trustworthy, and able to live within the company’s own ecosystem; and with adequate controls over who can access the data used for AI), security teams can minimize the risks to the organization while empowering the workforce with all the advanced capabilities and deeper insights that generative AI offers.

AI will become ubiquitous in our daily lives. Think about cloud computing: thanks to a continuous focus on cloud security, we no longer worry about where the applications we use host our information. In the same way, a continuous focus on AI data privacy and quality will help us become as comfortable with AI as we are with the cloud.

Mike Nichols, vice president, product management, security, Elastic
