I wanted to start my refreshed blog with a post on Deepfakes, but probably not one highlighting the threat you expect. For the past couple of years, I’ve said the real threat from Deepfakes is different from the one discussed most of the time. There’s a lot of handwaving and hype focused on one specific threat, and it distracts from some profound and lasting issues. Let’s look at a couple of other threats posed by Deepfakes and examine why they have a more lasting impact.
When you think of the danger from Deepfakes, you are probably thinking of their ability to convince people that something happened when it didn’t. I call this threat narrative evidence, because the content is used as evidence in support of some larger story. It’s this issue that steals all of the oxygen on the topic. The threat’s stated impact is that it tears at the fabric of reality: people will believe things simply because they can see and hear them. Although this impact isn’t false, it doesn’t take certain realities into account.
The fabric of reality is already torn. If anything proves this, it should be the events of 2020. We’ve seen people burn down 5G towers and believe that a major company was shipping children inside its furniture. At this moment in the United States, millions of people believe something happened that didn’t, with no evidence and no proof. These falsehoods are all perpetuated without the benefit of Deepfakes.
Let’s consider an example. In 2019, an altered video of Speaker Nancy Pelosi made the rounds on social media. The video was slowed down, making her seem as though she was slurring her speech and intoxicated. No high-tech tools were used. Now, how would a Deepfake have changed this? In reality, it probably would have made little difference. People who wanted it to be true would share it, while others would not.
Thankfully, the creators of fake content are rarely subtle. Someone generating content of Speaker Pelosi would have her saying something about how she enjoys the nourishing effects of child blood, or something equally ridiculous. This ridiculousness is an indicator of future use. In the future, Deepfakes won’t be a tool to convince people an event happened, but instead a tool to excite a particular group’s existing biases, in much the same way fake content and memes do today. Provenance and reality simply don’t matter in this context.
As resources become more available and tools get easier to use, Deepfake technology will remove the friction in creating fake content, but this also has a downside for its purveyors. Increased availability and simplification will generate a deluge of fake content, and that deluge will normalize the content and make people tune it out. So, fake content won’t be a tool to expand a particular viewpoint to new audiences; it will mostly keep the current crop of believers engaged. While the technology catches up, it’s a good bet we’ll see an expansion of services offering Deepfakes as a Service (DFaaS).
The fact of the matter is, we underestimate people’s biases when they evaluate content, and people have gotten pretty good at pwning themselves.
Deepfakes in Attacks
What about Deepfakes used in attacks? It’s true there are a few instances of Deepfakes being used in attacks, but these are exceptions, not the rule. In general, humans aren’t good at envisioning threats that haven’t happened yet, but once they happen, they adapt. The same will be true for these attacks. They only succeed while their novelty is high, and the novelty wears off quickly.
What About Evidence of a Crime?
I mentioned the word evidence, so what about Deepfakes being used in a court of law? It’s unlikely that this would become a real issue in criminal court, because there’s usually more than a single piece of evidence in a case, so a Deepfake would lack corroborating details. Also, detection techniques are improving, and manipulations wouldn’t survive the scrutiny they would face in a court of law. Is it impossible? Certainly not, depending on the situation, but it is doubtful this would become a widespread issue.
Still a Threat?
In the short term, narrative evidence attacks still pose a threat and are something we should be conscious of, so I’m not suggesting we write this threat off; the novelty value is still relatively high. However, I consider narrative evidence attacks a short-term threat, not the most impactful and long-lasting effect of Deepfakes. In short, the risk is overhyped, not nonexistent, and my goal is to get people to focus on some of the longer-lasting problems.
There are several threats from Deepfakes, but two of the most lasting and impactful are reality denial and harassment.

Reality Denial
Reality denial is the opposite of the threat most people describe. The mere existence of Deepfakes is enough for people to question legitimate content. Anytime someone sees evidence of something they don’t like, they can simply claim it’s a Deepfake. This situation can have massive ripple effects. How do you get a fair trial by jury if the jury is willing to mentally throw out legitimate evidence?
Weaponizing backlash against legitimate content is also much easier to engineer because it takes no effort at all. All of it can be done with no technology, no construction, and no time. Anyone, from friends to nation-states, can merely raise the question of the content’s provenance, and for many who are biased in that direction, it will be enough. This is the threat that should scare people, but it’s not the only one. There’s another that can affect you personally.
Harassment
Deepfakes can also cause harm in situations where provenance and reality aren’t important. Here’s a question to ponder: does it matter whether the fake nudes of you shared online are real or fake? Deepfakes can take bullying and harassment to the next level, since anyone can steal someone’s likeness and place them in all manner of situations, across pictures, audio, and video. In most cases, it doesn’t matter whether the content is real or not. The impact is the same.
In October of 2020, I reviewed and provided pre-publication feedback on a report about Automating Image Abuse. The report detailed a Telegram channel where you could strip the clothes off images of individuals. The original incarnation of this software was called DeepNude, and that term has stuck to all manner of clothing-removal technology.
Harassment will be the real legacy of Deepfakes. Consider how the ease of use and availability of these tools makes harassment and bullying much easier. In the near future, anyone who wants to generate this kind of content will have an outlet for doing so.
This is an area where the legal system can help, and we are starting to see some anti-Deepfake laws, but unfortunately, they focus on issues of narrative evidence rather than harassment. I think this will change over the next few years, but the legal system moves slowly. Online platforms and social media companies can help as well by building tools and punishing users who spread harmful content. Unfortunately, short of legal assistance and cooperation from social media companies, harassment may be one of those cultural issues we have to learn to live with for quite some time.
The Entertainment Industry
The entertainment industry is the one that should be worried about the technology powering Deepfakes. The disruption will be particularly impactful for actors and actresses, who may find themselves out of a job in the future. It would be a mistake to think that the generated content of the future will resemble the CGI of the past.
As an example, the creators of South Park made a Deepfakes television show called Sassy Justice, which can be viewed on YouTube. The show features a cast of celebrities (all fake) and, like most things the South Park creators do, is entertaining and educational, performed in an over-the-top fashion.
In the future, availability and advancements will make it easier for regular people to generate their own worlds, people, monsters, and more. It may very well be that in the not-too-distant future, people are begging you to watch their feature film the way many artists beg you to listen to their songs today. So it’s not all doom and gloom, depending on your perspective.
In a post-COVID-19 world where social distancing and other environmental concerns impact real film shoots, a generated alternative could prove lucrative and allow movie studios and amateurs alike to increase their output.
Genies rarely fit back into bottles, and we need to come to grips with the fact that this technology is here to stay. Focusing only on the narrative evidence aspect of Deepfakes takes attention away from the longer-lasting threats. This lack of awareness is apparent in the anti-Deepfake laws being drafted. We need to highlight the other threats, such as harassment, so they get more attention from lawmakers and social media companies.