We are about to be inundated with stories of misinformation and deepfakes, all focused on the 2024 US election. I know the last thing most people in the United States want to consider is the 2024 election. Election cycles are tiring, but even before we get into full swing, there are already grumblings about AI. I mean, why wouldn’t there be? It’s been all AI all the time. Generative AI is here, in case that’s something you’ve somehow failed to notice. Methods for generating text and images keep getting better and better, and they are far more accessible than they’ve ever been.
I’ve pulled no punches in saying that I think the capabilities of LLMs are overhyped, but they excel in the areas useful for generating misinformation. I’ve even said that this would be the year generative AI starts replacing jobs, something that appears to be happening already. So, with a looming election, highly capable systems, and a low cost of generation, what effect will generative AI have on the 2024 US election?
So here’s my claim: misinformation and deepfakes won’t affect the outcome of the 2024 US election. More accurately, they will have a “statistically insignificant” effect on the 2024 US election.
Note: For this post, I’m using the term misinformation to cover both misinformation and disinformation.
Generative AI and Wide Availability
Due to the recent boom of generative AI, the 2024 US election will be the first major US election where these tools are widely accessible. This accessibility extends to everyone involved, including campaigns, nation-states, malicious actors, and even the general public.
Accessibility goes a step further: all of this can be done very cheaply. People don’t have to use the models hosted by providers like OpenAI, Stability AI, Midjourney, etc. Models for generating text, images, and audio can be run on consumer machines, or at least on machines that aren’t much bigger than consumer machines. These models are also available without the typical guardrails. With all of this availability and ease of access, this raises the question: won’t this lead to a misinformation apocalypse?
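To give a sense of how low the bar is, here’s a minimal sketch of local text generation using the Hugging Face transformers library. The model name is just a small, freely available stand-in, and the prompt is a placeholder; the point is that volume costs almost nothing.

```python
# A minimal sketch of local text generation with an open model.
# "gpt2" is a small stand-in here; the same few lines work for many
# open models that fit on a consumer machine.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small model, runs on CPU
results = generator(
    "Breaking news from the campaign trail:",  # placeholder prompt
    max_new_tokens=40,
    num_return_sequences=3,  # multiple variants per prompt, nearly free
    do_sample=True,          # sampling is required for multiple sequences
)
for r in results:
    print(r["generated_text"])
```

The output quality isn’t the point; the point is that generating volume takes a few lines of code and no special hardware.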
2024 Misinformation Apocalypse? Not So Fast
Misinformation in the context of generative AI means the purposeful manufacturing of false information in photo, video, text, or audio formats with a particular goal. This content is then used to push a message about events and activities that didn’t happen, or to reframe events that did. I refer to this as “narrative evidence,” a concept I wrote about back in 2020. You are manufacturing false content as evidence to support a larger narrative. That narrative is meant to support a position or demonize someone else, and in the case of an election, it has a specific goal. Fortunately for us, this tactic only remains highly effective while the novelty factor is high, and that novelty is dropping quickly.
In the context of an election, misinformation is meant to sway opinion and affect voters. Take, for example, the ludicrous claims that high-profile figures in the Democratic Party are actually on house arrest, along with the associated and laughable “proof.” No AI is necessary in this case. Spreading content like this is meant to convince people that voting for the Democratic Party is a bad idea and that they should vote the other way (or stay home), but it doesn’t work that way in practice.
Misinformation at scale has both logistical and social challenges, so let’s look at the Generative Misinformation Cycle.
Generative Misinformation Cycle
Let’s break the generative misinformation cycle into a few steps. Doing so helps highlight what’s easy and what really matters.

Generation – This step is the creation of the content, and it’s easy and mostly friction-free, even without generative AI. What generative AI brings to the table is an increase in velocity, not precision. You can generate misinformation much faster and in greater volume, but there’s no guarantee that the misinformation will be better, and quite often it’s worse than human-generated misinformation. For example, try getting an LLM to explain why the Distracted Boyfriend meme caught on. To be fair, it’s difficult for humans to explain why certain things catch on as well.

There are quite a few cultural currents that LLMs don’t understand, but there’s no doubt you can create massive amounts of content with generative AI. Sure, once a cultural moment has been identified, a bad actor can try to latch on to it by automatically generating misinformation, but that slows the process down and makes it less effective.
Amplification – A piece of misinformation does no good if nobody sees it. Amplification is getting that content in front of as many eyes as possible, preferably the people most likely to engage with it, since more engagement leads to more amplification and increases the potential success of the misinformation’s intended outcome.
Amplification also isn’t as hard as some would have you believe. Nation-states have armies of people who amplify content, and if you strike the right chord with people’s biases, they’ll amplify the content for you.
Engagement – Engagement is getting people to interact with the content by liking, sharing, or commenting on it. The more engagement, the more false consensus is built around the content. Engagement can feed back into the amplification phase through algorithmic amplification on social media or simply by exposing others to the content. But it would be a mistake to assume that engagement leads to an outcome; people routinely share things they haven’t read because the title agrees with their biases.
Outcome – This is the action the misinformation is intended to produce, such as increasing votes for a party or candidate or getting people to believe something. This is where misinformation really matters. It’s not as cut and dried as a call to action; it could be a change of mind on a topic.
For any piece of misinformation to be effective, there needs to be a successful outcome, and this is much harder than it seems. Amplification and engagement look like the goal, but they aren’t. Many people discussing AI-generated misinformation talk about how well it can structure articles and provide references, but we know that many people sharing content don’t read what they share.
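To make the drop-off between steps concrete, here’s a hedged back-of-envelope sketch of the cycle as a funnel. Every number below is an illustrative assumption, not a measurement; the point is how quickly volume evaporates between generation and outcome.

```python
# A funnel view of the generative misinformation cycle.
# All rates below are made-up illustrative assumptions, not measurements.
pieces_generated = 100_000  # generation is cheap, so volume is enormous
reach_per_piece = 50        # amplification: most pieces are barely seen
engagement_rate = 0.02      # engagement: fraction of views that interact
outcome_rate = 0.001        # outcome: fraction of engagers who change behavior

views = pieces_generated * reach_per_piece
engagements = views * engagement_rate
outcomes = engagements * outcome_rate

print(f"{pieces_generated:,} pieces -> {views:,.0f} views "
      f"-> {engagements:,.0f} engagements -> {outcomes:,.0f} outcomes")
# 100,000 pieces -> 5,000,000 views -> 100,000 engagements -> 100 outcomes
```

Under these assumptions, a hundred thousand generated pieces yield about a hundred actual outcomes. Generation is the cheap step; outcome is the bottleneck.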
Mental Cement

People have turned politics (and many other things) into religions. We’ve had a pandemic and lockdowns during which people spent an inordinate amount of time online, cementing their biases. We apply those biases to every bit of content we encounter: if it’s something we like, we assume it’s true; if it’s something we don’t like, it must be a deepfake. I mentioned the concept of claiming something is a deepfake in my 2020 post, and it seems even Elon Musk has made this a reality.
Almost no amount of misinformation will get people to change their minds about something they believe in. That’s why it’s so hard to get people to leave cults, change religions, or even switch political parties.
Getting people to change these fundamental things once the cement sets takes a massive effort. My dad was one of the few who did change religions, but only because of my mom. People occasionally switch political parties too, but it’s rare. It’s much more likely for people to become unaffiliated. People don’t switch religions; they leave religions. People don’t switch political parties; they become independents. This may be a silver lining when it comes to misinformation; I’ll get to that later.
Convincing someone to believe misinformation only works under two fundamental conditions: the topic isn’t politically charged, and the content doesn’t go against the strong biases of the person encountering it. It’s certainly not impossible, but the climb is significant.
Instances Don’t Equal Impact
You’ll see the press and pundits point to instances of misinformation as proof that it’s having an effect. This isn’t the case. We’ll certainly see more content, AI-generated or otherwise, focused on the 2024 election, but an increase in content doesn’t equal an increase in influence or effects at a significant scale. That’s the “Outcome” step in the Generative Misinformation Cycle.
In the context of the election, misinformation and deepfakes will not be used to change people’s minds but to excite the base and poke fun at the opposing candidate. In 2024, people will wage meme warfare, and generative image models will be their weapons.
CounterCloud
CounterCloud is an experiment in fully autonomous disinformation, and it’s terrifying to some people.

It’s a neat experiment in what’s possible, and the approach to creating counter-narratives is interesting. You can read more about it here. However, once again, this overlooks the fact that many people don’t read the articles; they share based on the headlines. It also has other, more fatal flaws: it works by driving people to a single site, even if it can use social media to drive attention there, so it would be identified pretty quickly. And yes, the lessons learned here could make future attempts stealthier, but those attempts would still face the same issues I’ve covered in this post.
But, Deepfakes Tho
Nowhere does the misinformation debate get spicier than in arguments about deepfakes. When I relaunched this blog back in 2020, deepfakes were the first topic I tackled. I mostly focused on how their threats were poorly framed and overhyped. Imagine that. I felt the real legacy of deepfakes lies in their use for harassment rather than in convincing people that something happened, and I still feel that way. Fooling people only works while the novelty factor is high; after that, there’s a steep drop-off.
Let’s look at Pope in a Puffer Jacket, also known as Balenciaga Pope. I know this image fooled many people, which seems to go against my point in the post, but not so fast.

The Pope in a puffer jacket image fooled people because nobody cared about the Pope or his jacket. If this were a politically charged topic or a topic that people were highly biased toward, it would have received much more scrutiny.
Meme Wars
Generative AI will most likely be used to create memes and caricatures during the election cycle. This won’t all be malicious. Some of it will be downright hilarious (depending on which side of the political spectrum you’re on), such as the RuPublicans images.

Although some memes and content will be good fun, much of it will be malicious. If generative image tools restrict the ability to generate political figures, that could slow this meme war down a bit, but some of these models are open source and can be run on systems without those guardrails. We’ll see once the election cycle starts heating up.
Misinformation and Deepfakes: Still a Problem
Just because I don’t think misinformation and deepfakes will affect the 2024 US election, and don’t think they always work in high-stakes situations, doesn’t mean I don’t think they’re a problem. In my previous post, I wrote that I felt the real legacy of deepfakes would be their use in harassment. Mocking people and creating non-consensual porn are two examples of this.
Also, there are plenty of non-politically charged situations where it’s easy to fool people. Where the stakes are low, nonsense will proliferate. Ted Cruz, for example, recently fell for the old shark-in-a-waterway hoax.

This brings up another issue: we are creating an internet of junk. Even if it’s not malicious or directly harmful to anyone, it still has the potential to affect people. There are some fundamental issues with creating a world where you never really know whether any content you encounter is real. That’s the near future we’re headed for, and I need to give the full impacts at scale some more thought.
A Silver Lining
Will the deluge of nonsense have a positive effect? It’s possible. Consuming misinformation and other nonsense is like eating mental junk food: it feels good, but there’s no substance. Eating cake and ice cream for every meal sounds fun, but it isn’t in practice.
When you’re bombarded with things, you tend to check out. The mental junk food becomes less fun, and you stop interacting with it, block it, or just leave social media for a while. So, the deluge could have a positive impact. I realize I may be too hopeful, but it’s possible. I’m also aware of the argument that making people tune out is the point, but even granting that argument, I don’t think tuning out is all bad.
This is also precisely why legitimate news outlets shouldn’t use generative AI to curate and write articles. Doing so makes these news sources look like part of the problem when the rest of the internet is filled with nonsense. The stakes are too high, and the value is too low.
Conclusion
This post contained some food for thought, possibly running opposite to much of what you’ll see reported. I could be completely wrong about all of this, and the tide of the election could very well turn on AI-generated misinformation, but I don’t think so. Usually, I’d be happy to be wrong, but not in this case, for obvious reasons.
There isn’t much we can do for the time being except employ critical thinking skills and evaluate content accordingly. The hype of 2024 is right around the corner. I do feel there are a couple of fundamental things we can be doing to prepare for a world in which reality is merely a suggestion. This involves teaching data literacy as well as probability and statistics in the K-12 curriculum. Making room for these subjects is vital to prepare students for not just the future but what we now have in the present.