
Mitigate the potential risks of generative AI with these five tips


The speed of technology innovation continues to put defenders in a reactive mode. For example, ChatGPT, the popular generative artificial intelligence (AI) chatbot, reached 100 million users just two months after launching in late 2022.

While news reports warn about attackers “weaponizing” generative AI for disinformation campaigns and other cybercrime, most enterprises’ chief concern is that users will accidentally leak company secrets or other confidential data.

Generative AI technology lets users “generate” content on the fly. It offers great productivity potential and has become seductive to today’s users. Yet without adequate training, users can easily share confidential information that puts their company at risk.

Enterprises have two choices: let employees use these tools, or block them. We know that blocking corporate access will not work and can contribute to shadow IT, since employees can still reach the tools from their own devices and networks. The other, more viable option is to offer open access to the tools while retaining the ability to monitor, control, and enforce their use on a daily basis.

Here’s what companies can do today

Most organizations are still in the early stages of deploying AI. When putting together a plan, start by asking who will use these tools. For example, a security team might allow access for HR and marketing groups because they’re involved in content creation, but block engineers because of the potential for data leaks. It’s also important to identify which tools company workers will use. Will they use ChatGPT, EinsteinGPT, or something else? What information can and cannot be shared with these tools? Answering these questions and putting the proper policies and tools in place provides the guardrails for safe use in a given environment.
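As a rough illustration of what such guardrails could look like in practice, the sketch below maps user groups to the generative AI tools they may reach. The group names, tool names, and the decide_access helper are hypothetical examples, not features of any particular product.

```python
# Hypothetical sketch: map user groups to the generative AI tools they may use.
# Group names, tool names, and decision values are illustrative only.
ALLOWED_TOOLS = {
    "hr": {"chatgpt"},
    "marketing": {"chatgpt", "einsteingpt"},
    "engineering": set(),  # blocked by policy due to source-code leak risk
}

def decide_access(group: str, tool: str) -> str:
    """Return 'allow', 'coach', or 'block' for a user's request to reach a tool."""
    allowed = ALLOWED_TOOLS.get(group)
    if allowed is None:
        return "coach"  # unknown group: show a coaching page before deciding
    return "allow" if tool in allowed else "block"

if __name__ == "__main__":
    print(decide_access("marketing", "chatgpt"))    # allow
    print(decide_access("engineering", "chatgpt"))  # block
```

In practice, this kind of mapping would typically live in an identity-aware proxy or secure web gateway rather than in application code, but the decision logic is the same.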

Here are five steps to help mitigate potential risks:

Align with stakeholders: Company stakeholders and line-of-business leaders need to communicate so everyone understands where and why their respective users will access these types of tools. They also need to agree on what information should not be exposed to the tools. The security team should build policies with contributions from these stakeholders, capturing how they intend to use the tools and why that access matters to them.

Guide the users: Show staff how to use these tools when they browse to them by presenting coaching pages, which the team can customize with guidance such as: read the Terms of Use before using these tools.

Create enforceable security policies: Define clear security policies for users. For example, don’t let users upload source code to these tools. On the flip side, don’t let them download code from the tools either, because that code may be copyrighted or belong to someone else; if someone in the company uses it, the company can open itself up to liability.

Adopt a zero-trust framework: While generative AI can offer benefits, the team will want to prevent sensitive data from leaving the network. Many organizations are already on their way to adopting a zero-trust framework, and zero-trust guidelines and tools, such as data loss prevention (DLP), can play a critical role in enforcing the company’s generative AI policies. For example, write a DLP policy to capture everything users send to these tools, as sketched in the example after this list.

Invest in training: User misuse of generative AI has become a primary concern for today’s enterprises. Just as corporate security awareness training has helped reduce the risk of phishing and other targeted cyberattacks, it’s important to train users on the risks associated with generative AI.
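To make the DLP idea in step four concrete, here is a minimal sketch, assuming a simple pattern-based policy, of how outbound prompts to a generative AI tool might be logged and checked before they leave the network. The patterns, log format, and inspect_prompt helper are illustrative assumptions, not the interface of any real DLP product.

```python
import json
import re
import time

# Hypothetical patterns a DLP policy might flag; real policies would be far richer.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    "source_code": re.compile(r"(\bdef |\bclass |\bimport |#include )"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect_prompt(user: str, tool: str, prompt: str) -> bool:
    """Log the outbound prompt and return True only if no sensitive pattern matches."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    record = {
        "timestamp": time.time(),
        "user": user,
        "tool": tool,
        "prompt_length": len(prompt),
        "violations": hits,
    }
    # Capture everything sent to the tool, per the DLP policy described above.
    print(json.dumps(record))
    return not hits

if __name__ == "__main__":
    ok = inspect_prompt("alice", "chatgpt", "Summarize this: def connect(): api_key = 'abc123'")
    print("allowed" if ok else "blocked")
```

A production DLP engine would rely on much richer detectors (exact data matching, document fingerprinting, machine-learning classifiers), but the capture-then-decide flow is the same.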

Despite the risks, the benefits of generative AI, when used safely, outweigh the potential downsides. Can the tool write code to exploit a vulnerability faster than a human could? Yes, it can. But here’s the question: who knew about the vulnerability first? The tool itself doesn’t have anything novel to offer. It won’t generate information it wasn’t trained on. It can only do what humans tell it to do. It can’t think for itself; it’s not sentient. The human adversary still needs to do the research, find the vulnerable system, and discover how to exploit it. ChatGPT and other AI-branded tools can’t do that yet.

Generative AI has the potential to accelerate productivity across a wide range of work activities. By putting the proper security and governance guardrails in place, organizations can empower users to safely explore what’s possible.

Manoj Sharma, global head of security strategy, Symantec Enterprise Division, Broadcom
