Artificial intelligence holds tremendous promise for technological innovation, but also presents grave privacy and security risks that necessitate government action, according to a new white paper issued today by two U.S. legislators.

Rep. Will Hurd, R-Tex., and Rep. Robin Kelly, D-Ill., respectively the chairman and ranking member of the House Oversight and Government Reform Committee's Subcommittee on Information Technology, released the document, which contains lessons learned from recent government hearings and published research reports by AI experts.

Specifically, the paper recommends that federal agencies "review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction," and then update regulatory frameworks as needed. Moreover, it suggests the government "consider the ways [AI] could be used to harm individuals and society and prepare for how to mitigate these harms." Corrective actions could include supporting a standard for measuring the security of AI products and applications, as well as developing a common taxonomy, perhaps through the contributions of the U.S. Commerce Department's National Institute of Standards and Technology (NIST).

A section of the paper dedicated to the malicious use of AI warns that bad actors could leverage the technology to improve their targeting and make cyberattack attribution more difficult. Advanced persistent threat (APT) groups could also abuse AI to generate convincing fake news stories or deepfake videos designed to destabilize democracies, and to identify the online users most susceptible to such disinformation, the report continues. On the privacy front, the report expresses concern that machine learning engines must be fed vast troves of user data, which increases the potential damage if that data is misused or breached.

Moreover, the white paper urges the U.S. to continue to act as a leader in the AI space, especially as authoritarian countries like China invest heavily in machine learning. "AI is likely to have a significant impact in cybersecurity, and American competitiveness in AI will be critical to ensuring the United States does not lose any decisive cybersecurity advantage to other nation-states," the report declares.

The report relies heavily on the previous testimony and research of experts and organizations such as Dr. Ben Buchanan, postdoctoral fellow at Harvard University's Belfer Center; Dr. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence; Gary Shapiro, president of the Consumer Technology Association; and the non-profit AI research company OpenAI.

Beyond security and privacy issues, the paper addresses concerns over AI's potential biases and inaccuracies, as well as job losses resulting from increased automation. It also outlines strategies for stoking greater AI innovation.