Under construction: Biden’s AI executive order needs a solid foundation

Although cybersecurity professionals have praised the Biden administration’s executive order (EO) on artificial intelligence (AI) for its measured approach and its commitment to the safe, secure, and trustworthy development of AI, this widely anticipated 100-plus-page EO is about as “groundbreaking” as a ceremonial shovel.

This EO offers the scaffolding, with many federal departments and agencies given 90 to 270 days to complete various tasks. Federal AI policy is officially under construction, and just as any construction project requires a strong foundation, so does Biden’s landmark AI EO.

While it has its shortcomings, Biden’s AI EO represents a significant step forward, one that requires stakeholders across government, the private sector, academia, and the public to pitch in. Just as the Obama administration’s 2016 cybersecurity EO took years to foster substantive change, the impact of Biden’s AI EO will take time to materialize. That’s not necessarily bad: organizations will need time to assess the challenges and opportunities of an evolving AI/large-language model (LLM) landscape while protecting their interests, even as state governments look to flex their own regulatory powers.

The Biden EO seeks to cultivate AI's innovation and automation benefits while mitigating its risks, such as unethical outcomes and mass unemployment. AI regulation has drawn bipartisan interest in the U.S. House of Representatives and the Senate, with a flurry of hearings dedicated to exploring how to balance innovation, duty of care, civil rights, and national security priorities.

Biden’s EO effectively benchmarks the responsible use of AI through red-team testing, the development of NIST standards for those tests, the development of AI to find and fix software vulnerabilities, privacy-by-design principles, and the avoidance of implicit bias. Despite the ambitious scope of Biden’s long-awaited EO, there’s little in the way of implementation guidance, and until the NIST standards are developed, it lacks a stable technical framework.

The technology industry should actively work with the government to develop technical standards and outcome-driven policies at home and abroad. Still, cybersecurity professionals in particular should pay close attention to how the Department of Homeland Security (DHS) will develop and test methods that leverage AI technologies to assist “in the discovery and remediation of vulnerabilities in critical U.S. government software, systems, and networks.”

Managing the risk of dual-use technologies

The promise of AI-enabled vulnerability scanning may seem ideal, but first impressions are deceiving. Attackers already have a propensity to repurpose vulnerability scanning and penetration testing tools (such as Mimikatz and BloodHound) to conduct attacks. Now that an AI-enabled vulnerability scanner sits on the DHS roadmap, it stands to reason that other nation-states and APT groups could develop malicious AI tools that do the same.

Furthermore, discovering software vulnerabilities represents only half the battle, even without any AI-enabled threat. It tends to take organizations months to patch critical vulnerabilities, assuming they have visibility into which devices are vulnerable in the first place. And it’s often challenging, or even impossible, to patch unmanaged IoT and ICS/OT devices.

As it relates to the government, a recent Center for Strategic and International Studies report found that “visibility and assessment tools can only be effective if they communicate with each other and can collectively offer an accurate, robust, and up-to-date picture of existing vulnerabilities.”

The Biden administration’s continued focus on software vulnerabilities stems from the emergence of supply chain risks, most visibly the SolarWinds compromise. In 2021, Biden’s EO on cybersecurity called for the creation of standards for a software bill of materials (SBOM), essentially an ingredient list of software components. Two years later, CISA is still seeking comment on software identifiers, so it’s unlikely that an AI-enabled vulnerability scanner will arrive anytime soon.
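To make the SBOM idea concrete, the sketch below is a minimal, hypothetical illustration (not CISA’s or NTIA’s format) that builds an ingredient list of the Python packages installed in an environment; a real SBOM would follow a standard such as SPDX or CycloneDX and carry far richer metadata.

```python
# A minimal, hypothetical sketch of the SBOM idea (not a CISA/NTIA format):
# enumerate installed Python packages as a rough "ingredient list."
from importlib.metadata import distributions


def build_ingredient_list() -> list[dict]:
    """Return {name, version} entries for every installed Python package."""
    return [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in distributions()
    ]


if __name__ == "__main__":
    for component in sorted(build_ingredient_list(), key=lambda c: c["name"].lower()):
        print(f"{component['name']}=={component['version']}")
```

Running it prints one pinned name==version line per package: the raw material that an SBOM standard then enriches with hashes, suppliers, and license data.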

Pragmatically speaking, organizations should concern themselves more with the risks they face today before dedicating resources to any AI-enabled boogeyman. Malicious actors are far more likely to target low-hanging fruit, such as vulnerable internet-facing devices, exposed credentials, unmanaged admin accounts, and misconfigured devices, than they are to weaponize AI. Gaining visibility into these risks is like knowing where the gas lines are buried before the digging starts, and it begins with a fundamental understanding of what actually sits on the network (see the sketch below).
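As a rough illustration of that visibility-first mindset, here is a minimal sketch; the host inventory and port list are illustrative assumptions, not a recommended toolset. It simply checks which commonly abused services a handful of hosts expose.

```python
# A minimal, hypothetical sketch of "know what's exposed" before worrying about
# AI-enabled threats: probe a small host inventory for commonly abused open ports.
import socket

# Illustrative choices only; a real program would draw on a maintained asset inventory.
COMMON_PORTS = {22: "SSH", 23: "Telnet", 445: "SMB", 3389: "RDP"}


def exposed_services(host: str, timeout: float = 1.0) -> list[str]:
    """Return the names of commonly targeted services reachable on a host."""
    findings = []
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                findings.append(name)
    return findings


if __name__ == "__main__":
    for host in ["192.0.2.10", "192.0.2.11"]:  # placeholder addresses
        print(host, exposed_services(host) or "no common ports open")
```

In practice, this kind of check belongs inside a continuous asset-inventory and exposure-management program rather than a one-off script.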

A foundation built on the basics

AI has become the shiny object in the room, but organizations cannot allow it to distract them from the basic cybersecurity principles of data confidentiality, integrity, and availability. AI does not negate the need for organizations to focus on the fundamentals of cybersecurity hygiene, such as continuous device visibility, risk and exposure management, vulnerability management, and network access control. Most organizations continue to struggle with these basics, so AI will not magically make those problems disappear. In the future, Biden’s EO could stand as a monument to responsible AI, but today, it’s just a blueprint that requires a solid foundation.

Alison King, vice president of government affairs, Forescout Technologies

Shawn Taylor, regional technology officer, Forescout Technologies, contributed to this column.
