Critical Infrastructure Security

Russia’s 2016 election interference was highly organized, but fixes for 2020 are possible: reports

The campaign by Russia's Internet Research Agency to interfere with the 2016 U.S. presidential election using fake Twitter accounts was even more organized than many people realize, according to a new report from Symantec Corporation. But another new report from scholars at Stanford University prescribes more than 45 policy recommendations for how the U.S. can prevent a repeat performance of Russian meddling in 2020.

The latter report, titled "Securing American Elections," represents the culmination of a study conducted by a team of scholars with expertise in areas such as cybersecurity, social media, election regulations and Russia.

The recommendations are subdivided into seven categories: bolstering election infrastructure, regulating online political ads from foreign entities, countering election manipulation by foreign media, fighting state-sanctioned disinformation campaigns, improving transparency of foreign involvement in U.S. elections, establishing norms, and deterring future attacks.

The report's suggestions to improve election infrastructure include requiring voter-verified paper audit trails and risk-limiting audits, conducting risk evaluations of election systems from an adversarial point of view, establishing norms for campaign officials' digital behavior, regularly funding efforts to strengthen election cyber posture, retaining U.S. electoral systems' status as critical infrastructure, and allowing political parties to offer cyber assistance to state parties and individuals running for federal office.

Among the report's key recommendations for deterring foreign election interference is for the U.S. to adjust risk tolerance levels for actions in cyberspace. "To be more successful in deterring adversary behavior, the United States must begin by increasing its willingness to accept greater risk of adversary retaliation, including retaliation in the cyber domain," the report states. "Today, U.S. opponents are counting on American aversion to cyber risk in order to deter U.S. responses."

The report also alludes to Russian APT groups hacking the Democratic National Committee as well as Hillary Clinton campaign chairman John Podesta and subsequently leaking the documents. To prevent similar actions from succeeding in the future, the report's scholars are encouraging democracies around the world to establish norms that deter candidates and their parties from capitalizing on disinformation and hacked materials for their own gain.

Looking ahead, the report predicts that future election interference could involve altering vote counts and voter records, as well as the intentional sabotage of voting machines and operations. New and improved technologies such as deepfakes, AI text-generation engines, and more sophisticated bot networks will further complicate matters, the paper continues.

Moreover, "Additional actors also should be expected to join Russia in future attempts to influence political discourse online," the report states, "as the barriers to mounting disinformation campaigns will depend less on available computing power and technical skill, and more on the ability to quickly iterate among strategies, produce text in naturalistic English or another targeted native language, and identify psychological vulnerabilities in a target segment of the electorate."

"We know more than ever before about what happened in the 2016 election," said the report's editor and co-author Michael McFaul, former U.S. Ambassador to Russia and an international studies professor and senior fellow at the Freeman Spogli Institute and Hoover Institution.

"Now we need to pivot to what needs to be done to prevent it in the future – from concrete legislative acts as well as steps that online platforms can take even without legislation," he said in a university press release.

Had some of the Stanford report's recommended practices been in place in 2016, perhaps the U.S. could have muted Russia's misuse of Twitter and other social media accounts as it sought to sway the election through a mix of disinformation, propaganda and inflammatory content designed to sow discord among Americans.

In a new report titled "Twitterbots: Anatomy of a Propaganda Campaign," researchers at Symantec Corporation provide new insights into Russia's ongoing influence campaign by analyzing a dataset of fake Twitter accounts and posts generated by the country's Internet Research Agency.

Publicly released in October 2018 by Twitter, the massive dataset was composed of nearly 10 million tweets and 3,836 Twitter accounts that collectively accrued nearly 6.4 million followers and followed roughly 3.2 million accounts from 2013 through 2018.

Many of these accounts were created long before they published their first malicious post, which suggests the Russian actors carefully plotted and coordinated their operations in advance, Symantec reported. The average time between account creation and first post was 177 days, while the average span of time an account remained active was 429 days.
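For readers who want to check those figures, here is a minimal sketch of how the timing statistics could be computed from the dataset Twitter released, assuming pandas and the column names used in the public CSV (userid, account_creation_date, tweet_time); the file name is a placeholder for a local copy.

```python
# Sketch: reproducing the account-timing statistics from the dataset
# Twitter released. Column names (userid, account_creation_date,
# tweet_time) are assumptions based on that public CSV, not details
# confirmed by the Symantec report.
import pandas as pd

tweets = pd.read_csv(
    "ira_tweets_csv_hashed.csv",  # hypothetical local copy of the dataset
    usecols=["userid", "account_creation_date", "tweet_time"],
    parse_dates=["account_creation_date", "tweet_time"],
)

per_account = tweets.groupby("userid").agg(
    created=("account_creation_date", "first"),
    first_post=("tweet_time", "min"),
    last_post=("tweet_time", "max"),
)

# Days between account creation and first post (report average: 177).
dormancy = (per_account["first_post"] - per_account["created"]).dt.days
# Days an account remained active (report average: 429).
active_span = (per_account["last_post"] - per_account["first_post"]).dt.days

print(f"avg days before first post: {dormancy.mean():.0f}")
print(f"avg days active: {active_span.mean():.0f}")
```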

The fake accounts were essentially split into two groups: main accounts and auxiliary accounts.

A small, core group of 123 "main accounts" – the majority of which were created between May and August 2014 – were tasked with pushing out new posts and amassing followers who would retweet their malicious content. These accounts often posed as regional news outlets or political organizations with phony names such as New Orleans Online and San Jose Daily.

"The vast majority (96 percent) of these fake news accounts were fully automated, using services to monitor blog activity and automatically push new blog posts to Twitter," the report, authored by Gillian Cleary, Symantec senior software engineer, explained. However, Symantec found that Russian actors at times would also make manual changes, as needed, to look more authentic and reduce odds of detection.

The most retweeted account, TEN_GOP, was crafted to look like it was created by a group of Republicans in Tennessee. The account posted 10,794 times and prompted over 6 million retweets. Only 1,850 of these retweets were traced to other accounts observed in the IRA dataset, "meaning many could have been real Twitter users," the report says.
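That attribution can be sketched against the same public dataset. The snippet below assumes pandas and the CSV's user_screen_name and retweet_userid columns, which are assumptions about the released file rather than anything documented in the Symantec report.

```python
# Sketch: tracing retweets of one account (TEN_GOP, per the report's
# example) back to other accounts inside the IRA dataset. Column names
# are assumptions based on the CSV Twitter released.
import pandas as pd

df = pd.read_csv(
    "ira_tweets_csv_hashed.csv",  # hypothetical local copy
    usecols=["userid", "user_screen_name", "retweet_userid"],
)

# Resolve the target account's user id(s) from its screen name.
target_ids = set(df.loc[df["user_screen_name"] == "TEN_GOP", "userid"])

# Every row in the file belongs to an IRA account, so any retweet of the
# target found here was, by construction, made by another dataset account.
internal_rts = df[df["retweet_userid"].isin(target_ids)]
print(f"retweets traceable to IRA accounts: {len(internal_rts)}")  # report: 1,850
```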

Far outnumbering the main accounts were 3,713 auxiliary accounts, typically disguised to look like ordinary, individual people. Instead of generating content and gaining followers, these accounts would retweet other accounts in order to amplify the Russian actors' message.

The tweets observed in the dataset aimed to influence voters on both sides of the political spectrum, but were designed to target those who felt "disaffected" by politics, the report continues.

One example of a fake right-leaning account used the name "TheFoundingSon," with a profile description reading "Business Owner, Proud Father, Conservative, Christian, Patriot, Gun rights, Politically Incorrect. Love my country and my family #2A #GOP #tcot #WakeUpAmerica". In contrast, a fake left-leaning account featured the user name KaniJJackson and a profile that read "Follow the example set by Mrs Obama; peace, love, acceptance & vigilance #Impeach45 #Resist #GunReformNow."

Russian activity was especially heavy in the lead-up to the last presidential election. Symantec says the fake accounts generated a total of 771,954 English-language tweets between January and November 2016, with a marked increase from September through November.
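That ramp-up is straightforward to chart from the same dataset; here is a minimal sketch, again assuming the public CSV's tweet_time and tweet_language columns.

```python
# Sketch: monthly volume of English-language tweets in 2016 (report
# total: 771,954 from January through November, rising in the fall).
import pandas as pd

df = pd.read_csv(
    "ira_tweets_csv_hashed.csv",  # hypothetical local copy
    usecols=["tweet_time", "tweet_language"],
    parse_dates=["tweet_time"],
)

en_2016 = df[
    (df["tweet_language"] == "en")
    & (df["tweet_time"].dt.year == 2016)
    & (df["tweet_time"].dt.month <= 11)  # January through November
]

monthly = en_2016.groupby(en_2016["tweet_time"].dt.month).size()
print(monthly)
print(f"total: {monthly.sum():,}")
```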

"While this propaganda campaign has often been referred to as the work of trolls, the release of the dataset makes it obvious that it was far more than that," Cleary concluded in the report. "It was a highly professional campaign. Aside from the sheer volume of tweets generated over a period of years, its orchestrators developed a streamlined operation that automated the publication of new content and leveraged a network of auxiliary accounts to amplify its impact."

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts and video/multimedia projects, often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, a staff writer at New York Sportscene and a freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
