
RSAC 2024: How to use AI without getting in trouble

Behnam Dayanim speaks Monday during a presentation on artificial intelligence and the law at the RSA Conference in San Francisco. (Laura French / SC Media)

SAN FRANCISCO — In one of the first of a long line of AI-themed sessions hosted at the RSA Conference 2024, Behnam Dayanim, a partner and global head of digital commerce and gaming at Orrick Herrington & Sutcliffe LLP, presented “AI: Law, Policy, and Common Sense Suggestions to Stay Out of Trouble” Monday morning.

Dayanim discussed the rapid evolution of AI technology and how it has progressed faster than law and policy can reasonably catch up. And while Dayanim has given talks about the implications of AI for law and policy at RSAC in previous years, he said the conversation has changed significantly since the last time he presented on the topic.


At the same time, uncertainty persists around how to regulate AI — or even how to define it. So far, only a handful of states have created a “patchwork” of enforceable AI regulations in the United States, Dayanim said, with the most significant new AI rule being the European Union’s AI Act, passed this year.

One of the challenges in regulating AI can be summarized by a quote from Ryan Calo of the University of Washington School of Law, which Dayanim includes in his presentation: “AI isn’t a thing, like a train, but rather a set of techniques aimed at approximating some aspect of cognition.”

The EU’s AI Act, Dayanim said, is akin to “treating AI like a train” by explicitly categorizing different uses of “AI systems” as prohibited, high risk or low risk, with specific requirements applied to each category.

More AI regulations — including from the U.S. federal government, which has introduced more than 100 AI regulations but has yet to pass a comprehensive enforceable law — are expected in the coming years. But with only a “patchwork” currently in place, how can organizations stay out of trouble in the midst of an AI revolution?

Getting ahead of AI policy by addressing risks

“Don’t wait for regulations,” said Dayanim, who recommends organizations prepare for, rather than react to, AI law. Many aspects of AI risk should be considered, as a wide range of regulatory categories intersect with AI, including security, data and consumer protection, anti-discrimination and intellectual property rights.

Dayanim also noted that even companies that aren’t currently using, or don’t plan to use, AI should begin putting policies in place, because “your employees are using it whether you know it or not.”

When developing their own AI policy, organizations should consider a combination of existing law (the EU AI Act, state laws, etc.), their own organizational values and ethics, and the approaches being taken by peers in their industry.

Policy should also cover every area of risk — cybersecurity and data confidentiality, bias and discrimination in hiring or employee evaluation, intellectual property rights and authorship for assets co-created with AI, and more — and involve cross-disciplinary, cross-departmental voices, as AI may now touch every part of an organization, from CEO, to CISO, to software developer, to secretary.

Organizations should be considering three areas: awareness, through employee training; governance of AI tool use and development; and AI risk mitigation. Establishing an AI responsible use policy is one way organizations can get ahead of potential privacy, confidentiality and ethics concerns surrounding employees’ use of AI.

Use of AI by third-party vendors or contractors should also be factored into risk mitigation, as should the maintenance of intellectual property rights and patents when using AI to help develop code and other assets. Cybersecurity leaders should recognize and be prepared to respond to the ways AI can become the source of a data leak or a new entryway for attackers to breach their systems.

Organizational leaders will need to ask whether their current policies already cover certain AI concerns and in which cases AI needs to be addressed specifically. Finally, flexibility should be built into AI policies so companies can smoothly adapt them to new capabilities, new risks and new government regulations, as the pace of change in the AI sphere is unlikely to slow down anytime soon.
