Artificial intelligence holds tremendous promise for technological innovation, but also presents grave privacy and security risks that necessitate government action, according to a new white paper issued today by two U.S. legislators.
Rep. Will Hurd, R-Tex., and Rep. Robin Kelly, D-Ill., respectively the chairman and ranking member of the House Oversight and Government Reform Committee's Subcommittee on Information Technology, released the document, which contains lessons learned from recent government hearings and published research reports by AI experts.
Specifically, the paper recommends that federal agencies "review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction," and then update regulatory frameworks as needed. Moreover, it suggests the government "consider the ways [AI] could be used to harm individuals and society and prepare for how to mitigate these harms." Corrective actions could include supporting a standard for measuring the security of AI products and applications, as well as developing a common taxonomy, perhaps through the contributions of the U.S. Commerce Department's National Institute of Standards and Technology (NIST).