Even though the 2024 US election is less than two weeks away, we are still being bombarded with news reports and experts blaring on about the dangers of deepfakes for this election. When I was recently traveling abroad, I watched the BBC host self-proclaimed “experts” from the US predicting the catastrophic impact deepfakes would have on the 2024 US election, and yet, nothing. It seems deepfakes are having a bad time in the information manipulation arena.
There also seems to be massive confusion between deepfakes, slop, and memes. Reporting on the topic lumps all of these into the same category just because AI was used to generate them, but this misses the point entirely: they are not the same thing, and they have different goals.
Since this will likely be the last piece I write about deepfakes before the 2024 US election, let’s break some of this down. I’ve covered this territory multiple times since 2020, and everything I’ve written, including my predictions, has held up. However, this article isn’t a victory lap. These previous posts remain valid, but we have some new terms now. Let’s take them piece by piece.
My Position
My position has always been that deepfakes won’t affect the outcome of elections or change a person’s mind on polarizing topics. This should be relatively obvious because the truth doesn’t change people’s minds in similar situations. Deepfakes do fool people when there are no stakes, when people don’t care about the topic: think the Pope in a puffer jacket or Katy Perry’s dress at the Met Gala.
For more information on my position, I’ve written multiple pieces going back to 2020. I won’t retread the same ground in this post.
- Illusion of Influence: the AI-Generated Misinformation Apocalypse That Wasn’t
  https://perilous.tech/2024/07/18/illusion-of-influence-the-ai-generated-misinformation-apocalypse-that-wasnt/
- Generative AI Deepfakes and Elections: Apocalypse or Dud
  https://perilous.tech/2023/08/28/generative-ai-deepfakes-and-elections-apocalypse-or-dud/
- Deepfakes: a Different Threat Than You Thought
  https://perilous.tech/2020/12/01/deepfakes-a-different-threat-than-you-thought/
We’ve had multiple major elections across the globe, and deepfakes have played no role in the outcomes. The 2024 US election has been dubbed the “deepfake election,” and yet it’s been a big nothing burger. Despite all of this, the threat of deepfakes related to elections remains wildly overhyped, ultimately for a perverse reason: fear gets clicks.
For some recent evidence supporting this point, look at the report OpenAI released just this month on influence and cyber operations. None of the operations they tracked had any meaningful engagement. Although OpenAI isn’t the only game in town, the report does seem to track with other observations.
Deepfakes, Slop, and Meme Confusion
Let’s start by defining the difference between deepfakes, slop, and memes. People can be forgiven for the confusion since all of it is generated by AI, and the definitions of these terms can be murky. First up: deepfakes.
Deepfake
Webster’s dictionary defines a deepfake as:
an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said
Well, that’s not very helpful. Wikipedia’s definition is a bit better, but it’s still not good.
Deepfakes (a portmanteau of 'deep learning' and 'fake') are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Although technically accurate, no definition of deepfake can be considered without intent. The mental picture of deepfakes in the public eye is that they are generated to support a larger narrative or to trick someone into believing something. They either provide the evidence for a narrative or are used in a social engineering attack. Back in 2020, I referred to these as Narrative Evidence attacks. Simply put, people consider deepfakes to be evidence used to convince people that something happened that didn’t.
Let’s look at a simple example. I may have a narrative that you stole a car, but I have no evidence to convince people that you stole it. To support my narrative, I create a deepfake video of you stealing the car as the “proof” supporting my narrative. This usage makes the scenario a deepfake and not purely slop or a meme.
AI Slop
AI slop has filled the internet. It’s unavoidable, slathered across every corner of the web, and is one of the lasting legacies of generative AI. If you’ve ever heard of things like Shrimp Jesus, that’s what we are talking about. It’s a sort of low-quality AI-generated content that fills a digital placeholder.
Here is a great comment from Twitter user @nearcyan summing up YouTube Shorts filled with AI slop.

There is no confusion about the truthfulness of slop. Nobody thinks The Rock is actually part of any of these YouTube videos. Slop’s purpose is merely to fill a content placeholder and chase clicks. In the context of elections, people may leverage slop with a twist of propaganda, using generated counterfactuals to elicit an emotional response.
Take this image, for instance.

After Hurricane Helene, a flood of AI-generated images hit social media. People shared these images to provoke an emotional response. This is more like propaganda. No doubt some people believed the image was real, but the vast majority of people who shared it did so because it aligned with their biases and supported their message. The image was merely an emotional placeholder. Oddly enough, they would have shared the message regardless of the image.
Memes
Memes need no introduction and far predate the generative AI era. However, generative AI has spawned memes of its own. Take, for instance, the videos of Will Smith eating spaghetti. The poor quality of the video made it a meme in its own right. So when the AI-generated video of Will Smith eating spaghetti with Donald Trump surfaced, everyone should have gotten the joke.

Just like AI slop, with memes, there is no confusion about the truthfulness of the content.
In the context of elections, deepfakes are AI-generated content meant to fool people into believing something happened, while AI slop and memes are more akin to propaganda.
Spot The Deepfake is Pointless
By now, you’ve undoubtedly run across many of the spot-the-deepfake challenges where you cycle through a series of images or videos and try to determine which is real and which is AI-generated. Other than creating some very basic awareness of the capability of the technology, these exercises are pointless.
Here is a recent example I saw posted online of someone telling people what to look for. It’s the same kind of inane advice parroted repeatedly that won’t hold up.

The reality is that asking which image is real or fake is the wrong question, and it produces an irrelevant answer. These spot-the-deepfake challenges are misleading because it doesn’t matter which one is real or fake. Worse, training people to “spot” characteristics in images and videos that are disappearing or changing rapidly doesn’t set them up for future success.
Deepfakes Aren’t In Isolation
Deepfakes and other fake content meant to fool people aren’t encountered in isolation. They are provided as evidence to support a larger narrative. This means you’ll never just have the fake content with which to make your decision.
Consider a doctor making a diagnosis and prescribing a treatment. It would be rare for a doctor to look at an image, make a diagnosis, and prescribe a treatment. The doctor will use additional context in diagnosis and treatment. First, they may order additional tests for verification. Also, they consider other contextual information such as medical history, family history, allergies, and a slew of other information before moving forward.
The real question has little to do with the deepfake itself. What we are evaluating is the message. So, given the source and surrounding context, can the overall message be believed? With misinformation and disinformation, deepfakes aren’t the message; they are the proof.
Below are questions you can ask to mentally focus on the message. This is not an all-encompassing list of questions, but it can get you thinking.
- Who is sharing?
- Are they credible?
- What’s their motive?
- What have they said in the past?
- Are there conflicting or contradictory accounts?
- Have claims been fact-checked?
The fact of the matter is that getting to reality takes work. It’s work that, unfortunately, many won’t put in. It’s far easier to like and share.
Bad Reporting
The lousy reporting on deepfakes is constant, but let’s look at a couple of recent examples.
Take the article What Happened to the Deep Fake Election. Given the title, it would seem to be a step in the right direction. However, the author draws all the wrong conclusions.
Then there’s the article Welcome to the AI Election, which is a whole bunch of fear-based nonsense. Hilariously, the title declares this the AI election, yet the article claims the real AI election will be the 2026 midterms.
By the time we get to the 2026 midterms, AI will be so much more advanced that in the hands of the right (or wrong) people, it’ll be able to generate hyper-realistic video content, which could be used to create personalized political narratives tailored to each voter’s psychological profile.
This is just straight-up nonsense. This person has no idea how any of this works. First, the cost of generating individualized content for each voter would be astronomical: you’d have the expense of the data collection and personalization components plus the cost of generating video clips for each person. Second, this would require massive collusion, with tech companies and social media platforms playing a part in the data collection and dissemination of the content.
For instance, they might be used to target us individually based on our biomarkers. Sorry, I forgot to mention AIs will soon have more information about us on a biological level, including our health and behavior. Why? you might ask. Because you’ll give it to them through apps and programs you’ll engage with, or are already engaging with.
What planet does this guy live on? Certainly not Earth. In all seriousness, I think I agree with the underlying point he’s trying to make, odd as it is. The privacy implications here are indeed very concerning, and we do risk being manipulated by these systems. So, I’m on board with that. However, shoehorning it into election manipulation, especially by 2026, is patently ridiculous. Who is the big bad person pulling the strings with access to all of that data? In 2026, we’ll still have disparate systems and individual collections of data. Even if this were possible, it would still require extensive collusion between providers. This is a conspiracy theory dressed up as a technology prediction.
Just Not That Stupid
It’s difficult to grasp that other people aren’t that stupid. I think this is one factor that keeps the fear of deepfakes stoked. Ultimately, people believe what they want to believe, real or not. It’s been this way since the dawn of civilization.
As I mentioned in a previous article, we don’t see ourselves in other people. We see ourselves as outliers instead of the mean. This often gets warped by algorithms that keep us in a bubble and promote the most outlandish content. As someone who lives in Florida, this hits close to home with all of the nonsense about people thinking Hurricane Milton was not only manufactured but controlled. This resulted in meteorologists getting death threats.
If you tried to put this in a book on stupidity, nobody would believe it, but then again, many conspiracy theories turn out this way. Conspiracy theorizing has morphed into a cult, or even a religion.
Back in 2021, I wrote, “Conspiracy theorists are like cult members, only worse. Worse, because a cult has a leader, but conspiracy theories make you the leader.” This everyone-is-a-leader concept is incredibly empowering and addictive, making people both the hero and the victim. Despite this, most people don’t hold deep conspiracy beliefs, yet such beliefs receive outsized attention in social media and traditional media alike. Don’t fall for it.
How Are They Being Used
So, how are deepfakes being used in the 2024 US election? They aren’t. Well, to be more precise, not with any relevance. Primarily, you see memes and slop, precisely as I predicted in previous articles. There was a boatload of bad reporting on the topic, but it doesn’t match reality. An obvious example was the supposed Taylor Swift AI endorsement of Trump. These images were reported as though they were deepfakes, but the whole incident was silly. If you review them, one is an AI-generated image of Taylor Swift dressed as Uncle Sam. The rest are “Swifties for Trump.” Comically enough, one of them is literally labeled as “Satire.” Where is the facepalm emoji when you need it?

Campaigns and political action committees (PACs) have used AI to generate counterfactual content for campaign ads. I think this is disgusting, but such ads are hardly deepfakes. They are slop with a propaganda twist.
There was the fake Joe Biden voice call in New Hampshire, which was a legitimate deepfake. However, nobody believed it, and it had no impact. This isn’t unlike every other deepfake this election cycle.
Interestingly, Trump claimed that Kamala Harris’ crowd in Detroit was AI-generated. I also predicted this type of accusation behavior back in 2020 in my initial deepfakes article. Okay, so maybe now I’m taking a small victory lap.
Conclusion
It’s time to take a deep breath. We are less than two weeks away from Election Day in the United States. Undoubtedly, the AI election is here, and it’s more silly (and sometimes pathetic) than terrifying. Despite this, it will not be the last we hear of the fear-mongering. Fear sells, and fear gets clicks. Now, get out and vote.