Disinformation and misinformation campaigns are cheap, effective, hard to detect at speed, and easy for adversaries to run at scale. But there are still limits: at present, meaningful manipulation can only be generated as fast as human workers can type it. The future, many believe, will bring machine learning algorithms that generate the text, dramatically expanding the scope of such attacks.

At their Black Hat talk Wednesday, Georgetown Center for Security and Emerging Technology researchers Andrew Lohn and Micah Musser will discuss preliminary research into the viability of the best-known ML writing algorithm in disinformation campaigns.

GPT-3, currently the top of the line in automated text generation, is able to carry out a disinformation campaign at the level of current human-run campaigns, Lohn said, based on their testing.

"There's not a lot of hope for picking out what is a GPT-3-written thing versus what is a human-written thing. If you look at what humans write in the disinformation space and what is just generally on the internet, [that's not] always the highest bar to exceed," said Lohn. "GPT-3 might not win [the] Nobel Prize in literature, but it can probably write disinformation tweets that are indistinguishable."

OpenAI's Generative Pre-trained Transformer 3 (GPT-3) is the successor to GPT-2, a 2019 breakthrough in the automated production of text. For short spurts of text, it could jump the uncanny valley and create prose that appeared to come from a human. The writing could be rough, and it was prone to veer off topic within a few paragraphs. Yet OpenAI was sufficiently worried about malicious use of the system to withhold it from open release.

GPT-3, released in 2020, is trained on 100 times as much data and is vastly more powerful. It can still veer off topic, but its output is much smoother in practice.

In their testing, the researchers found GPT-3 was well-suited for Twitter campaigns. Given a short sample of tweets on a theme, it is capable of producing a torrent of new tweets from the same viewpoint.

"Usually, to the extent you can still tell it's GPT-3 is because if it writes for a long period, it still has a tendency to drift off topic," said Musser. "But that is mitigated if you're just hoping to flood Twitter with tweet-length items."
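The workflow the researchers describe — seed the model with a handful of on-theme tweets and let it continue the pattern — can be sketched roughly as follows. The seed tweets and the `build_prompt` helper here are illustrative assumptions, not the researchers' actual prompt or data:

```python
# Sketch of a few-shot prompt for tweet-style generation.
# The seed tweets below are invented placeholders, not from the study.
seed_tweets = [
    "Local parks are the heart of any neighborhood.",
    "More green space means healthier, happier communities.",
    "Every city budget should put parks first.",
]

def build_prompt(examples):
    """Concatenate example tweets so a language model continues the pattern."""
    lines = [f"Tweet: {t}" for t in examples]
    # The trailing "Tweet:" cue invites the model to produce the next item.
    lines.append("Tweet:")
    return "\n".join(lines)

prompt = build_prompt(seed_tweets)
print(prompt)
# A text-completion API would then be asked to continue this prompt;
# each generated line becomes a new tweet in the same voice.
```

Because each completion is only tweet-length, the model's tendency to drift off topic over longer passages rarely comes into play.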

In fact, in early work, GPT-3 tweets on topics that do not generally inflame partisan passions appeared to sway opinions when shown to respondents before a poll on the topic. Lohn and Musser caution, however, that they cannot yet say whether GPT-3 was as convincing as a human author, or whether people are similarly swayed by seeing any tweet that makes an argument.

Lohn and Musser will detail their current state of research on GPT-3 at their talk, as well as some of the possible limits that still exist in using machine learning in campaigns.

GPT-3 is off-limits to unvetted researchers, but commercial and open English-language alternatives are being developed. Huawei has developed an ML system of similar scope that operates in Chinese.

While any of those systems can work at a scale impossible for human writers to match, they are still limited by the cost of compute power.

"If you write a single tweet, it maybe cost, like, a couple cents, max. But if you're going to try to write a billion of them, like what you might need in order to be a sizable fraction on one of these services, then you're talking about some pretty expensive operations. It could be tens of millions [of dollars], maybe up to [the] $100 million range," said Lohn.
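Lohn's figure follows directly from the per-tweet cost: at a few cents per generated tweet, a billion tweets lands in the tens of millions of dollars. A back-of-envelope sketch, where the two-cent figure is an assumption taken from the upper end of his "couple cents, max" estimate:

```python
# Back-of-envelope cost of flooding a platform with generated tweets.
cost_per_tweet = 0.02    # dollars; assumed upper end of "a couple cents"
tweets = 1_000_000_000   # "a billion of them"

total = cost_per_tweet * tweets
print(f"${total:,.0f}")  # on the order of $20,000,000
```

Pushing the per-tweet cost toward a dime — or generating several billion items — reaches the $100 million range Lohn mentions.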

The compute power issue could price out many potential actors, including commercial groups targeting a competitor or a political candidate without state backing. But nations are well resourced for these operations.

Russia or China, he said, could very much afford it.