
Expect AI to play a major role in this year’s election cycle

AI and Elections

Artificial intelligence (AI) will play multiple roles in this election cycle, and it’s critical to understand how those roles could affect the democratic process. AI offers tools that can enhance understanding for everyone involved in elections. However, bad actors can use the same tools to spread false narratives, sway public opinion, and influence the outcome of an election.

Tools built on large language models (LLMs) and deepfake techniques have changed and enhanced the way we create and disseminate information. They allow the rapid production of text, realistic images, and videos that look authentic but are deceptive, misleading, or inaccurate, making it difficult for individuals to decipher what’s true and what isn’t.

Bad actors can use these AI capabilities for misinformation (the spread of false or inaccurate information) and disinformation (the spread of false or inaccurate information with the intent to mislead), presenting a major challenge to election integrity. And the problem extends beyond elections to many other aspects of society, as evidenced by recent incidents in which AI-generated deepfakes were used to defeat identity verification systems.

Detection and identification strategies

AI detection technology has become essential to tackling misinformation and disinformation campaigns, and it’s crucial for detecting deepfake content. It analyzes audio and visual data to determine whether content was generated by AI. Examples of this technology include Resemble Detect, Intel’s FakeCatcher, and Microsoft’s Video Authenticator.
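
As a concrete illustration, here’s a minimal sketch of scoring a single image with an open-source classifier via the Hugging Face transformers library. The model name below is a hypothetical placeholder; the commercial tools named above expose their own interfaces, and this is not how any of them works internally.

    # A minimal sketch: score an image as real vs. AI-generated with an
    # off-the-shelf image classifier. The model name is a hypothetical
    # placeholder, not a specific vendor's detector.
    from transformers import pipeline

    detector = pipeline(
        "image-classification",
        model="example-org/deepfake-image-detector",  # hypothetical model
    )

    # Returns labels with confidence scores, e.g., "real" vs. "fake".
    for result in detector("suspect_frame.jpg"):
        print(f"{result['label']}: {result['score']:.3f}")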

In addition, before, during, and after elections, AI can play a role in detecting misinformation and disinformation campaigns by performing these actions:

  • Automated Content Analysis: AI can process large volumes of information quickly, identifying potential misinformation in articles, social media posts, and videos.
  • Pattern Recognition: AI algorithms excel at recognizing patterns indicative of false narratives or fake content, such as deepfakes.
  • Real-time Monitoring: AI tools allow for continuous surveillance of digital platforms, detecting and flagging misinformation as it emerges.
  • Assisting Fact-Checkers: AI can assist human fact-checkers by pre-screening content and highlighting dubious claims for further review (see the sketch after this list).
  • Public Awareness and Education: AI-generated data can inform the public about current misinformation tactics, helping build resilience against misleading content.
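
As a sketch of the pre-screening idea, the snippet below trains a simple text classifier on posts that human fact-checkers have already labeled and uses it to rank new content for review. The tiny inline dataset and the 0.5 threshold are illustrative assumptions; production systems train on large labeled corpora.

    # A minimal sketch of pre-screening: learn from human-labeled posts
    # (1 = misinformation, 0 = legitimate), then rank new posts for review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    labeled_posts = [
        ("Polls close at 8 p.m. local time, officials confirm", 0),
        ("BREAKING: voting machines secretly change ballots overnight", 1),
        ("County releases certified turnout figures", 0),
        ("Share before they delete this: mail ballots are being burned", 1),
    ]
    texts, labels = zip(*labeled_posts)

    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)
    classifier = LogisticRegression().fit(features, labels)

    # Score incoming content; high scores get routed to a human reviewer.
    new_posts = [
        "Officials confirm polling place hours for Tuesday",
        "They are HIDING the real vote count, spread the word!!!",
    ]
    scores = classifier.predict_proba(vectorizer.transform(new_posts))[:, 1]
    for post, score in zip(new_posts, scores):
        status = "REVIEW" if score > 0.5 else "ok"
        print(f"[{status}] {score:.2f} {post}")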

Why humans are still important

There are challenges and limitations in current detection technologies, and human judgment is still often essential to identifying misinformation and disinformation. First, current AI models are often limited in their ability to detect new disinformation and misinformation. Typically, human fact-checkers identify a new type of misinformation, and examples are fed into an AI tool to find similar content at scale. This approach uses AI as a force multiplier: it works in conjunction with humans, rather than replacing them, to find new types of unwanted content.
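
One way to picture that workflow: embed narratives that human fact-checkers have flagged, then surface new content that is semantically similar. The sketch below assumes the open-source sentence-transformers library; the model name and 0.6 threshold are illustrative choices, not any particular vendor’s method.

    # A minimal sketch of the force-multiplier workflow: humans flag new
    # narratives, and the model finds semantically similar content at scale.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Narratives already identified by human fact-checkers.
    flagged = [
        "Ballot drop boxes are being emptied into the river at night",
        "Non-citizens were bused in to vote in three counties",
    ]
    # Newly collected posts to screen.
    incoming = [
        "Video shows drop box contents dumped in a river",
        "Election officials certify the county's results",
    ]

    flagged_emb = model.encode(flagged, convert_to_tensor=True)
    incoming_emb = model.encode(incoming, convert_to_tensor=True)

    # Cosine similarity between each incoming post and each flagged narrative.
    similarity = util.cos_sim(incoming_emb, flagged_emb)
    for i, post in enumerate(incoming):
        best = similarity[i].max().item()
        if best > 0.6:  # threshold chosen for illustration
            print(f"Possible match ({best:.2f}): {post}")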

AI models used for disinformation and misinformation detection are often “black box” solutions that offer little transparency into how they reach a decision. Users of the technology may be concerned about possible biases in the training data, which can surface as gender or racial bias and cause false positives and other misleading, unwanted findings.
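
One transparency check that doesn’t require opening the black box is an audit of outcomes: compare the detector’s false-positive rate across subgroups of a labeled evaluation set. The groups and records in the sketch below are illustrative assumptions.

    # A minimal sketch of a bias audit: per-group false-positive rates on
    # labeled evaluation data. Groups and records here are illustrative.
    from collections import defaultdict

    # (group, true_label, predicted_label); 1 = flagged as misinformation.
    evaluation = [
        ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
        ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
    ]

    false_positives = defaultdict(int)
    genuine_items = defaultdict(int)
    for group, truth, predicted in evaluation:
        if truth == 0:  # only genuine (non-misinformation) items count
            genuine_items[group] += 1
            false_positives[group] += predicted

    for group in sorted(genuine_items):
        rate = false_positives[group] / genuine_items[group]
        print(f"{group}: false-positive rate {rate:.2f}")
    # A persistent gap between groups is a signal to re-examine training data.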

Additionally, organizations may not have the expertise and resources to manage AI models, and users of the technology must also understand organizational and government policies. This lack of expertise may undermine counter-disinformation efforts.

Society cannot combat election misinformation and disinformation with technology alone; it’s a shared responsibility involving individuals, governments, and organizations. Individuals need to think critically about the media sources they encounter, separating reliable news sources from questionable ones. Information should be fact-checked before it’s shared on social media, and any disinformation or misinformation should be reported to the platform where it’s found. Individuals should also continue to educate themselves and others on the tactics used to spread this type of content. Lastly, they should seek diverse perspectives to avoid being trapped in an echo chamber.

Governments and organizations should establish and enforce laws that prevent the spread of misinformation and disinformation without infringing on free speech. Media outlets of all types must uphold journalistic standards, ensure their reporting is accurate, and offer balanced coverage. Technology companies must combat the spread of misinformation and disinformation through both technological and human moderation. Academic and media organizations should research misinformation trends and organize public awareness campaigns. Finally, local public-private partnerships should support independent fact-checking organizations.

One final point to keep in mind: AI can pose future challenges for security teams on a scale well beyond elections, targeting individuals and organizations across the U.S. and beyond. And just as with disinformation and misinformation, security teams can and should pair the same detection and identification tools with human analysts to identify and thwart potential threats.

Taylor Banks, director of program development, GroupSense
