
Guardrails on AI tools like ChatGPT needed to protect secrets, CISOs say


LAS VEGAS — Walmart, Amazon and Microsoft have all reportedly warned employees against sharing corporate secrets or proprietary code when querying generative artificial intelligence tools such as ChatGPT — and a CISO panel held May 30 at CyberRisk Alliance’s Identiverse conference in Las Vegas suggested that many other companies are considering the same.

A large contingent of the audience — roughly half, by quick visual count — raised their hands when moderator Parham Eftekhari, executive vice president of collaboration at the CyberRisk Alliance, asked how many attendees’ organizations have introduced policies around AI usage.

In that same session, Ed Harris, chief information security officer (CISO) of Mauser Packaging, volunteered that his company has issued an edict similar to the ones Walmart and others have instituted: Do not plug sensitive company information into external AI tools. Harris imagined a scenario in which an overzealous employee asks an AI tool for help sharpening the company’s marketing strategy and, in doing so, enters corporate information that the AI retains and later passes along to other users — perhaps even a competitor.

“I worry that someone’s going to [ask the AI engine]: ‘Hey, can you tell me what Mauser is looking at from a marketing perspective?’ and have AI just spit out our roadmap,” Harris explained. That’s why “we actually have a policy that it’s OK to go and ask AI questions to help start the creative juices flowing, but not to share any detailed plans.”
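Harris’s rule (generic questions are fine, detailed plans are off-limits) lends itself to light automation. The following sketch is purely illustrative, not something described at the panel: it assumes a hypothetical screen_prompt function and keyword list that an organization might run against outbound prompts before they reach an external AI service. In practice, the patterns would come from a company’s own data-classification policy or an existing data loss prevention product.

import re

# Hypothetical examples of terms a company might treat as sensitive;
# a real deployment would pull these from its own data-classification standard.
SENSITIVE_PATTERNS = [
    r"\broadmap\b",
    r"\bmarketing plan\b",
    r"\bcustomer list\b",
    r"\bsource code\b",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, flagged_terms) for a prompt bound for an external AI tool."""
    flagged = [p for p in SENSITIVE_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    return len(flagged) == 0, flagged

# A generic brainstorming question passes; a prompt referencing internal plans is flagged.
print(screen_prompt("Give me ideas to get the creative juices flowing on packaging trends."))
print(screen_prompt("Summarize our 2024 marketing plan and product roadmap."))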

Bezawit Sumner, CISO of the nonprofit health-care tech support organization CRISP Shared Services, agreed that as AI further inserts itself into employees’ daily routines, it will be necessary to establish usage parameters.

“We are going to have people who are going to use it because, A, they’re curious, or B … they think they have to because they have to outdo someone else,” said Sumner. Regardless of the reason, the key will be “ensuring that people are doing it the right way, and knowing that we can provide [them with] the guardrails of the do’s and don’ts of what that AI is capable of.”

Clear, simple rules needed for staff on AI usage

AI policies will differ based on individual companies’ needs and concerns. They might include a broad or precise definition of what constitutes sensitive information that should never be shared with an AI tool. Or perhaps they will include instructions on how to identify instances when the response from an AI tool seems malicious or anomalous, suggested Sumner.

But whatever rules or guidelines are used, “policies need to be clear and understandable,” stated Sean Zadig, vice president and CISO at Yahoo. “Use simple language.”

Zadig said it’s important that security leaders move quickly when developing these policies, both to keep pace with the rapid rise in AI adoption and experimentation and to be seen by employees as enablers rather than impediments to progress.

“Everybody in this room — all of your companies are probably running as fast as they can toward getting AI stuff integrated and pushed out,” said Zadig. “And you don’t want to be in the way saying ‘Stop,’ because [users are] just going to go around you, and then you’re going to lose the visibility that you need to help them make the right decisions.”

To that end, it’s also a good idea to seek input and feedback from employees such as engineers, developers and analysts, whose jobs will be affected by any AI usage policies. That way, “we’re not just telling people ‘This is what you have to do…’ but rather getting their buy-in early on,” Sumner explained.

From left: Identiverse moderator Parham Eftekhari of the CyberRisk Alliance, and CISO panelists Bezawit Sumner, of CRISP Shared Services; Ed Harris, of Mauser Packaging; and Yahoo's Sean Zadig. (Bradley Barth/CyberRisk Alliance)

Indeed, CISOs would be wise to remember that their own teams will likely rely on AI as well to help combat future digital threats. Certainly, they don’t want their own policies curtailing such efforts while adversaries abuse AI for their own illicit gain.

“We need to make sure the battle is at least symmetrical and that good AI efforts aren’t hamstrung,” noted Andre Durand, CEO and founder of Ping Identity, who preceded the panel session with his own solo opening keynote. “The risks to do this right and protect IP are real for legitimate companies, but these same reservations and considerations don’t exist for the other side.”

Ultimately, whatever your AI usage policy looks like, it can only do so much. The end result still hinges on your company’s ability to enforce those rules and, ideally, to hire trustworthy employees who will faithfully follow them.

After all, said Harris, “if somebody in my company is malicious, and they go and they dump something in the AI, I don't know that I have a way to go and find out how much they put in.”

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts, and video/multimedia projects — often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, a staff writer at New York Sportscene and a freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
