Illusion of Influence: The AI-Generated Misinformation Apocalypse That Wasn’t

As predicted last year, there is increased trumpeting of risks focused on deepfakes and AI-generated misinformation surrounding the 2024 US election. Living up to the mantra of never letting a tragedy, or potential tragedy, go to waste, media organizations have latched on to the doomsday nature of AI-generated misinformation and its potential effects on global politics. I’ve maintained that the risks deepfakes and AI-generated misinformation pose to elections are overblown, and I’ve seen nothing so far that makes me believe the contrary. On highly polarized topics, applying AI to misinformation does nothing to change people’s minds.

Over the past couple of years, this has been a relatively lonely position; it seems (or at least it seems to me) that I’ve been alone on an island, about to draw a face on a volleyball. Even experts I agree with on other aspects of AI seem to be sounding the AI-generated misinformation alarm the loudest. But it seems the position isn’t so lonely anymore, as more people add their voices to the discourse. However, I’m still puzzled as to why so many people believe otherwise, and I think I’ve gotten closer to the core of why this remains a powerful illusion.

Why The Powerful Illusion Remains

There is certainly no shortage of perverse incentives around inflating the risks of AI-generated misinformation. For AI companies, this threat demonstrates the power of their technology; for some, it’s the perfect topic to illustrate the need for increased regulation; for others, it’s a potential business opportunity. Many of these situations are easy to identify, but that still leaves plenty of people holding this belief. I think I’ve gotten closer to the core of why this is such a powerful illusion, and it comes down to something simple: we aren’t seeing ourselves in other people.

People can’t fathom how a highly convincing image of something won’t fool people. The problem is that we don’t see ourselves in other people. I mean, are we dolts who believe absolutely everything we see? Of course not. But we assume everyone else is a bunch of idiots who instantly believe everything they see. We see ourselves as outliers instead of the mean. This perspective also ignores the fact that fake information has been around us the whole time.

We see ourselves as outliers instead of the mean.

When people point to outlandish beliefs like QAnon as proof that this content would fool people, they are also fooling themselves. A fair number of QAnon adherents don’t actually believe the things they share. They share them to troll the other side because it irritates people.

Government Perspective

You may be thinking, didn’t the US Justice Department make a big deal about disrupting a Russian AI-powered propaganda campaign? That alone must disprove your argument. Well…

A press release

The statement from the Attorney General is pretty strong, but the details matter. You can read the full press release here.

So, let’s read through. Hmm… okay, okay, okay… spits out coffee… WTF? You’re trying to convince people of the impact of these operations, and that’s the best you can come up with? Some sock puppet account with 23 followers?

A social media image

Okay, well, maybe this account with this few followers shared a viral video that had a major impact. So, how many thousands of views did this video get?

A social media image

Five? Wait, do you mean five… hundred thousand? No, five. I hate to point this out, but you have to assume at least one of those views came from the person monitoring the account. Maybe I’m being too harsh, and this was the setup for the big reveal, so show me the next one.

A social media image

Seven. Huh. Well, that won’t achieve any kind of virality.

Sorry for the low-quality images. Apparently, the government doesn’t train people to take proper screenshots. It conjures images of some analyst pecking the keyboard with their index fingers. However, these examples fail to prove any point supporting the existential threat of AI-generated misinformation.

There is nothing here a human couldn’t do, so if GenAI didn’t exist, Russia would just put asses in seats. That’s what the Internet Research Agency (IRA) did in the past.

AI-Generated Misinformation’s Rough Time

AI-generated misinformation has actually had a rough time for a while. Last year, it was reported that Russia’s Doppelgänger group was struggling to find an audience. Ouch.

We are already in the era of generative AI and deepfakes, and we’ve had multiple high-visibility elections around the world. Still, the misinformation aspects of generative AI haven’t affected these elections. As a matter of fact, it unfolded like I said it would, with generative AI used for memes, not misinformation. You’d think that, with some evidence now in hand, this narrative would let up, but quite the opposite. Many are doubling down.

This disconnect is most obvious with government types. Recently, the director of CISA warned of the risk of US adversaries causing “unimaginable harm to populations across the globe.” This was in reference to adversaries affecting elections. Really? Unimaginable harm, despite the fact that we have evidence to the contrary?

I believe, to some extent, this comes back to incentives. The misinformation topic seems like the perfect example to push for more regulation, and many refuse to take their foot off the gas.

The Reality

Influencing people through AI-generated misinformation is a much harder problem than people want to acknowledge. In a post I wrote last year called Generative AI, Deepfakes, and Elections: Apocalypse or Dud, I introduced something called the Generative Misinformation Cycle to demonstrate the phases and challenges.

A diagram demonstrating the generative misinformation cycle

With misinformation, your goal is to influence an outcome, whether that’s changing people’s minds or getting them to take action. All of the other phases, such as generating misinformation and amplifying it on social media, only serve to earn a shot at influencing that outcome. Yet it’s these relatively inconsequential activities that so many people focus on, and this is where the confusion sets in. To people’s credit, these are the tangible things they can see and measure; it’s much harder to measure a changed mind. However, pointing to these relatively easy activities and claiming that their presence means an extremely difficult thing (influencing the outcome) will happen doesn’t match reality.

What AI brings to the table is assistance with the generation of content and some automation activities. That’s it. So sure, you can create a lot more of it and try to have your bots amplify it, but if a misinformation tree falls in the social media woods and only bots hear it, does it really make a sound?

If a misinformation tree falls in the social media woods and only bots hear it, does it really make a sound?

Sowing Confusion

This is about the time when people will mention using generative AI to sow confusion, but sowing confusion is a far cry from sowing influence. It’s not like when people mentally check out due to confusion that their brains somehow revert to an initialized state where they don’t have an opinion.

It’s not like when people mentally check out due to confusion that their brains somehow revert to an initialized state where they don’t have an opinion. 

Even when the technique does work, it’s only effective during unfolding current events and on topics where people aren’t emotionally invested, such as an emerging global pandemic or a geopolitical situation far from home. Sure, this can cause some negative impacts, but if GenAI weren’t around, bad actors would do this with humans. And, of course, once people’s strongly held beliefs get involved, the trenches are dug too deep to dislodge them.

Poor Research

Poor research also plays a role here. There is so much junk research on AI-generated misinformation. This research often pairs model capabilities with the wrong questions, so in the end, you wind up answering the wrong question in the affirmative. Take the example below, which I shared in February.

A social media image

Apparently, in his “extensive research,” he missed the fact that concentration camps aren’t typically associated with US political parties. He basically confirmed that LLMs can say mean things. At this point, it should be well known that LLMs can be made to say mean things, and that is a reality we already have today, not in some future state. But this paper does nothing to answer the real question: does the fact that LLMs can say mean things have an impact on human political polarization? After reading this far, you should already know my perspective.

You should also recognize another common theme this research misses: instances and capabilities don’t equal impact. I’ve covered this in my previous posts.

Instances don’t equal impact.

Since the Durably Reducing Conspiracy Beliefs Through Dialogues With AI paper is making the rounds again, I’ll point out that I wrote a whole article addressing the issues with that paper.

The Real Risks

There is no shortage of real risks surrounding generative AI. I’ve talked about this at length. I’m far more concerned with the Internet turning into a junkyard or how tech companies are shoving generative AI into every technological crevice imaginable than I am about a theoretical misinformation apocalypse. These activities have far more impact than any usage of generative AI to try and manipulate an election. However, there is also a risk of overly oppressive regulation.

Pretty much everything on the Internet is manipulated. You could also say that it is technically misinformation. For example, applying a photo filter adds information to a photo that wasn’t there, and cropping a photo removes information. You could say the same about grammar checkers rephrasing sentences, document summarization, and a whole host of other tools people use on a daily basis. It gets blurry if you are only focused on the information manipulation aspects.

I’ve said all along that regulating the underlying technology is a losing proposition; what should be regulated are use cases. AI is a dual-use technology, so the harm surfaces in the use case, not in the technology itself. It’s a tough problem, and I don’t envy the people trying to address it.

On the other hand, inadvertent misinformation and junk content cluttering the Internet is a real problem. For example, which of the two photos below is the real photo of the Matterhorn?

A fake photo of the Matterhorn
A fake photo of the Matterhorn

Surprise, neither of them. Now, if we take into account everyone’s AI-generated blog posts, news articles that don’t get checked, and a whole host of other content that doesn’t rise to the level of existential threat, we have a world cluttered with garbage.

Garbage like Popes in puffer jackets, fake dresses, AI-written nonsense books, and much more. It’s like taking a stroll through a junkyard instead of a pristine forest. Okay, bad analogy. The Internet has always been a sort of junkyard, but now, instead of strolling through rows of junked cars stacked on top of one another, there are junked cars mixed with heaps of household trash strung about, littering the walkway and stinking up the place. We haven’t reckoned with this yet.

Conclusion

This post has remained a semi-written draft since November 2023 because I always feel like I’ve said what I needed to say on the topic. However, I keep getting pulled back in. As a bonus, there are recent updates and more evidence, so my procrastination seems to have paid off.

Unfortunately, the claims of a coming misinformation apocalypse will be with us long after there is more proof to the contrary. Proponents think they’ve found their ultimate talking point to push regulation. Ultimately, this will be the story of the AI-generated misinformation apocalypse that wasn’t.

Travolta confused meme
