Although deepfakes have only been around for a few years, they are quickly becoming a key element in a new era of digital threats. While some land on the more comical side – think Will Smith turned into Cardi B on The Tonight Show – others sow disillusionment and propel the era of fake news and misinformation forward.

The rise of deepfakes and the risks they pose are raising questions about who should be held responsible for the spread of these videos. These questions have become so frequent that earlier this month, the House of Representatives held its first hearing focused specifically on national security threats posed by deepfake technology. As a result of the hearing, the House proposed amending Section 230 of the Communications Decency Act to hold social and digital platforms responsible for the content posted on their sites.

With some states – like Texas and New York – already taking action against deepfakes, it’s important to understand what constitutes a deepfake, the challenges these videos present, and how to combat them effectively. Let’s explore these topics in more detail.

Defining Deepfakes

The term ‘deepfake’ has been used broadly to describe nearly any type of edited video online – from a mash-up of Steve Buscemi and Jennifer Lawrence to Nancy Pelosi’s slowed speech. However, the term is much more specific: it refers to AI-based technology used to produce or alter video content so that it presents something that didn’t actually occur.

By this definition, the famous video of Nancy Pelosi slurring her words does not actually qualify as a deepfake, as it is a slowed-down version of a speech she actually gave. A video like the Pelosi clip is simply an altered video and is sometimes referred to as a “shallow fake.” It is important to note that although the video is not technically a deepfake, it is still problematic: manipulated, altered, or entirely false videos represent the same core risk – disinformation.

Understanding the Challenges

As the volume of deepfake videos and the risks they pose continue to grow, social and digital platforms can no longer ignore the pressing issues they present. Beyond simple falsehoods, the implications of deepfakes go much further because of what they represent.

In an era of fake news and misinformation, these videos foster a disillusionment in which even what can be seen right in front of you may still be false. With the upcoming 2020 elections, deepfakes serve as a new method for spreading misinformation, generating false influence, and targeting individual candidates and parties. While deepfakes have real implications for political discourse globally, candidates and political parties are not the only ones who should be concerned. As deepfakes become cheaper and easier to produce, they pose impersonation-related risks for companies across industries, geographies, and sizes.

Today, many social and digital platforms are wrestling with how to tackle deepfakes. Already there are Reddit threads, Tumblr accounts, and code-sharing platforms like GitHub dedicated to sharing deepfakes. There are also AI apps designed specifically to help users create their own deepfakes quickly and inexpensively. What’s more, social media platforms have been hesitant to remove manipulated footage and deepfakes, and many have differed in their approach to handling these videos. In the case of the doctored Nancy Pelosi video, YouTube quickly removed it while it remained up on other social media platforms. With no clear protocol for handling deepfakes, and the tools to create these videos readily available, organizations and social media platforms are grappling with what action to take.

Combating Deepfakes

Because platform algorithms favor visual, dynamic content, these videos tend to go viral quickly. As organizations and social and digital platforms look to combat these new threats, they should start by taking the following steps:

  • Protect social media accounts against account takeover to prevent malicious posting on your behalf – whether it be a phishing link or a deepfake video.
  • Monitor social channels for mentions of your brand and top executives.
  • Take action as soon as a threat appears, before posts have a chance to go viral.
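The monitoring step above can be sketched in a few lines of code. The example below is a minimal illustration, not a production tool: the watchlist terms and post data are hypothetical, and a real deployment would pull posts from each platform’s API and add fuzzy matching to catch misspellings and obfuscated names.

```python
import re

# Hypothetical watchlist: brand name plus top executives to monitor.
WATCHLIST = ["acme corp", "jane doe"]

def find_mentions(posts):
    """Return (post_index, term) pairs for posts that mention a watchlist term."""
    hits = []
    for i, text in enumerate(posts):
        for term in WATCHLIST:
            # Whole-word, case-insensitive match so "Acme Corp!" is caught
            # but unrelated substrings are not.
            if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                hits.append((i, term))
    return hits
```

Flagged posts would then feed a triage queue for the “take action” step, so a suspicious video can be reviewed before it spreads.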