
Three proactive ways to prepare for the coming regulatory climate around AI

Vice President Kamala Harris looks on as President Biden signs a new executive order on artificial intelligence at the White House on October 30. Today’s columnist, Rick Holland of ReliaQuest, offers three proactive steps security teams can take to prepare for coming government mandates around AI. (Photo by Chip Somodevilla/Getty Images)

When the Biden administration released its long-awaited executive order (EO) on artificial intelligence (AI) on Oct. 30, the U.S. sought to set the global agenda on AI, timing the announcement just ahead of the United Kingdom’s AI Safety Summit at historic Bletchley Park.

President Biden's plan focuses on eight important areas ranging from safety, security, innovation, and privacy to AI rights for consumers, patients, students, and workers. While the EO represents a helpful, ambitious first step, it can’t really move forward without congressional funding.

The executive branch has only so much power, and given the current political landscape, securing that funding could prove a challenge. As with Biden’s May 2021 EO on Improving the Nation’s Cybersecurity, this AI EO lays out a blueprint and a timeline for action. As the many deadlines arrive, we will be better positioned to assess the resulting guidance and activities.

Rather than offer a detailed analysis of the EO’s pros and cons, here are three suggestions for incorporating its components into a blueprint for securely adopting AI and preparing for potential AI regulation:

Establish an AI risk assessment and governance model

At a minimum, treat AI like any other emerging technology, although it’s safe to say that AI will disrupt the market in unpredictable ways we have not seen for many decades. Companies should start with AI risk assessments: How will they use AI? What are the risks? They need to decide whether to block AI applications outright, allowlist specific approved ones, or openly permit access. Companies also need to incorporate AI into their existing governance models – and do so quickly, as the risk of “shadow AI” looms.

The EO establishes an AI Council to formulate, develop, communicate, and implement AI policies. Does the organization have this function? An AI Council could make an excellent cross-company subcommittee. The EO also requires the establishment of Chief AI Officers (CAIO) and includes high-level job responsibilities for the role: the CAIO will promote AI innovation in their organization, manage risks, and carry out several other tasks. Many organizations won’t have a CAIO. However, look at the responsibilities in the EO and evaluate which ones make sense for the company to adopt. In organizations without a CAIO, these responsibilities could roll up to a CTO or similar role. An AI Council and a CAIO could both play pivotal roles in AI governance.
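To make the block/allowlist decision concrete, here is a minimal sketch of how a security team might encode an AI application access policy in code. The application names and decision labels are hypothetical placeholders, not any real product’s configuration; the point is the default-deny posture, which treats anything unreviewed as shadow AI:

```python
# Sketch of an AI-application access policy: explicit allowlist and blocklist,
# with default-deny for anything that hasn't been through a risk assessment.
# All names below are illustrative, not a real product's configuration.

from dataclasses import dataclass


@dataclass(frozen=True)
class AIAppPolicy:
    allowed: frozenset  # apps vetted by the AI Council / CAIO function
    blocked: frozenset  # apps explicitly denied after risk assessment

    def decide(self, app: str) -> str:
        if app in self.blocked:
            return "block"
        if app in self.allowed:
            return "allow"
        # Anything unreviewed counts as "shadow AI" until it's assessed.
        return "block-pending-review"


policy = AIAppPolicy(
    allowed=frozenset({"approved-chat-assistant", "approved-code-assistant"}),
    blocked=frozenset({"unvetted-translation-tool"}),
)

print(policy.decide("approved-chat-assistant"))    # allow
print(policy.decide("unvetted-translation-tool"))  # block
print(policy.decide("new-image-generator"))        # block-pending-review
```

In practice this logic would live in a web proxy, CASB, or endpoint control rather than a script, but the governance question it encodes — who approves an app, and what happens to everything else — is the one the AI Council and CAIO should own.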

Stay in close contact with NIST, and leverage the agency’s expertise

Over the coming year, many documents will be produced that companies can incorporate into their AI adoption blueprints; phrases like “within X days” feature heavily in the EO. As expected, the National Institute of Standards and Technology (NIST) will lead several important initiatives. NIST will produce AI red-team testing guidance, a Secure Software Development Framework incorporating AI, an AI Risk Management Framework (NIST AI 100-1), and guidelines for evaluating the efficacy of differential privacy guarantees for AI. The AI Risk Management Framework could become one of the most valuable documents to come out of this exercise. Most organizations don’t have the subject matter expertise to generate this level of content, so take advantage of it. Don’t wait to establish AI adoption policies: mark the calendar for when these publications are released, review them, and update the organization’s blueprint and strategy accordingly.

Brace for potential regulatory impact

Make assessing regulatory implications a part of the AI blueprint. This EO will impact the federal government and all its agencies, but its influence will reverberate beyond Washington to private businesses. As the EO states, companies developing “foundation models” with the “most powerful AI systems” will face new testing and reporting requirements for their cybersecurity teams to address. IaaS providers will have to apply “Know Your Customer” controls to ensure that foreign malicious cyber actors don’t leverage their AI models for nefarious purposes. Because of the privacy components of the EO, healthcare and financial services companies should also get ready. Companies operating in the 19 “Critical and Emerging Technologies” areas need to follow future guidance closely for potential regulation, but also for innovation and recruiting purposes. Beyond this, the federal government is one of the largest buyers in the world, and federal procurement requirements often flow down to the private sector, casting a much wider net. Make ongoing monitoring a foundation of the AI blueprint; if AI guidance or regulation affects the organization, it needs to prepare.

Although nascent, AI can potentially revolutionize the way companies do business. CISOs who have a well-thought-out blueprint and stay on the leading edge can help their businesses differentiate and succeed in the age of AI.

Rick Holland, vice president and CISO, ReliaQuest
