
The Biden EO on AI: A stepping stone to the cybersecurity benefits of AI


While the Biden administration’s executive order (EO) on artificial intelligence (AI) governs policy areas within the direct control of the U.S. government’s executive branch, it is broadly important because it informs industry best practices and subsequent laws and regulations in the U.S. and abroad.

Accelerating developments in AI, particularly generative AI, over the past year or so have captured policymakers’ attention. And calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) have further heightened attention in Washington. In that context, we should view the EO as an early and significant step in addressing AI policy rather than a final word.

Given our extensive experience with AI since the company’s founding in 2011, we want to highlight a few important issues that relate to innovation, public policy and cybersecurity.

The EO in context

Like the technology it seeks to influence, the EO itself has many parameters. Its 13 sections cover a broad cross-section of administrative and policy imperatives. These range from policing and biosecurity to consumer protection and the AI workforce. Appropriately, there’s significant attention to the nexus between AI and cybersecurity, and that’s covered at some length in Section 4.

Before diving into specific cybersecurity provisions, it’s important to highlight a few observations on the document’s overall scope and approach. Fundamentally, the document strikes a reasonable balance between exercising caution about potential risks and enabling innovation, experimentation and adoption of potentially transformational technologies. In complex policy areas, some stakeholders will always disagree about how to strike that balance, but we’re encouraged by several attributes of the document.

First, in numerous areas of the EO, agencies are designated as “owners” of specific next steps. This clarifies for stakeholders how to offer feedback and reduces the odds of gaps or duplicative efforts.

Second, the EO outlines several opportunities for stakeholder consultation and feedback. These will likely materialize through request-for-comment (RFC) opportunities issued by individual agencies. Further, in several areas the EO tasks existing advisory panels, or establishes new ones, to integrate structured stakeholder feedback on AI policy issues.

Third, the EO mandates a brisk progression for next steps. Many EOs require agencies to finish tasks within 30- or 60-day windows, which are difficult to meet at all, let alone in deliberate fashion. This document in many instances spells out 240-day deadlines, which should allow for 30- and 60-day engagement periods through the RFCs.

Finally, the EO states plainly: “as generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.” This should help ensure that government agencies explore positive use cases for leveraging AI in their own mission areas. If history is any guide, it’s easy to imagine a talented junior staffer at a given agency identifying a valuable way to leverage AI next year that no one could easily forecast this year. It’s unwise to foreclose that possibility; we should encourage innovation inside and outside of government.

The EO’s cybersecurity provisions

On cybersecurity, the EO touches on a number of important areas. It’s good to see specific callouts to agencies like the National Institute of Standards and Technology (NIST), Cybersecurity and Infrastructure Security Agency (CISA) and Office of the National Cyber Director (ONCD) that have significant applied cyber expertise.

One section of the EO attempts to reduce the risks of synthetic content: generative audio, imagery and text. The measures cited here are clearly exploratory rather than rigidly prescriptive. As a community, we’ll need to develop innovative solutions to this problem. And with elections around the corner, we hope to see rapid advancement in this area.

It’s clear the EO’s authors paid close attention to advancing AI policy through established mechanisms, some of which are closely related to ongoing cybersecurity efforts. This includes the direction to align with the AI Risk Management Framework (NIST AI 100-1), the Secure Software Development Framework and the Blueprint for an AI Bill of Rights. This approach will reduce the risks associated with establishing new processes, while allowing for more coherent frameworks in areas where there are only subtle distinctions or boundaries between, for example, software, security and AI.

The document also attempts to leverage sector risk management agencies (SRMAs) to drive better preparedness within critical infrastructure sectors. It mandates the following:

“Within 90 days of the date of this order, and at least annually thereafter… relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks, and shall consider ways to mitigate these vulnerabilities.”

While this is important language, we also encourage these working groups to consider benefits alongside risks. There are many areas where AI can drive better protection of critical assets. When done correctly, AI can rapidly surface hidden threats, accelerate the decision-making of less experienced security analysts and simplify a multitude of complex tasks.

This EO represents an important step in the evolution of U.S. AI policy, and it’s also very timely. As we described in our recent testimony to the House Judiciary Committee, AI will drive better cybersecurity outcomes, but it’s also of increasing interest to cyber threat actors. As a community, we’ll need to continue working together to ensure defenders realize the leverage AI can deliver, while mitigating whatever harms might come from the abuse of AI systems by threat actors.

Drew Bagley, vice president of cyber policy, CrowdStrike; Robert Sheldon, senior director, public policy and strategy, CrowdStrike
