Democratizing AI Psychosis: Why Smart People Are Captured By AI Hype

Paper cutout of a human

For months now, I’ve been fascinated by seeing smart people completely captured by AI hype. The very people who should be pushing back against the hype are the most swept up in its rapture. But it’s starting to make sense to me. I believe I’ve pinpointed a few key features driving this phenomenon. As is the case whenever smart people get caught up in things, they don’t do things halfway.

In a larger context, we may be witnessing a glimpse of what critical thinking’s death might look like as the impacts of cognitive offloading become more widespread, with the technology’s numbing effects defying our ability to recognize them. These effects can create a democratization of AI psychosis. Time to get the shades, because it’s all vibes now.

Note: In this post, I admittedly do a bad job of defining “smart” people (I don’t even try) and the attributes that differentiate excitement from being captured by hype (I try). I realize that this makes things very subjective, but the goal here isn’t to apply definitions to specific people. It’s about highlighting the attributes that contribute to the condition.

Democratizing AI Psychosis

By now, most people have heard of AI psychosis. This is a term we typically associate with extreme cases, but the same features that create more extreme instances of psychosis are present in the regular usage of AI tools. Although it may not trigger extreme psychosis in most people, it does induce lesser delusions in some, leading to a warped worldview. It’s these lesser delusions we cover in this post.

AI hype, when manufactured by continuous AI usage, becomes an artifact of AI psychosis. This isn’t as extreme as the cases you’ve read about in news articles, but it creates delusions nonetheless and is fueled by some of the very same attributes.

AI hype, when manufactured by continuous AI usage, becomes an artifact of AI psychosis.

Years ago, I sat through a presentation on human manipulation by an expert in cults. He mentioned that when smart people got caught up in cults, they were the most effective members. They’d fully committed and had a way of rationalizing misgivings. They also made the best cases to attract new members through their devotion. It was also damn near impossible to get them to change their minds. This always stuck with me.

Smart people certainly have more faculties to resist being sucked into cults or, more broadly, to resist hype. I believe many smart people were caught off guard because of the erosion of our cognitive defenses, as well as the packaging of AI as “just another tool.”

I noticed the phenomenon of smart people and AI hype ramping up in late 2025, with full acceleration in 2026. When someone laid out a scenario for using AI for a task or use case, I often found myself saying, “I can’t tell if you’re joking or serious.” The reply would be awkward laughter or an “lol,” depending on the communication medium. But my favorite is when people would lay out scenarios where they had a task to do, then brag that the AI outperformed them.

Side eye meme image demonstrating the democratizing AI psychosis

I’ve been writing about the cognitive effects of AI for a few years now, and the speed at which these impacts arrived caught me off guard. I didn’t expect we’d see these effects so soon. There’s something about the generalized nature of generative AI and its increased use that has accelerated negative cognitive effects. I’m certainly not the only one who’s noticing this.

So, what’s the difference between finding AI useful and being captured by AI hype? Many people (including myself) are finding today’s AI useful and even believe it can be disruptive in certain areas more than others, but disruptive nonetheless. Believing this doesn’t necessarily mean someone is captured by AI hype. For anyone confused about my perspective or who thinks I’m an AI hater, please see my post here.

A few characteristics of being captured by AI hype include:

  • Starting with AI and working backward to find problems
  • Perpetually believing the next version will unlock the true value
  • Jumping immediately to catastrophizing
  • Describing AI in terms of revolution rather than specific task outcomes
  • Being blown away by outputs despite their issues
  • Discounting complexity
  • Treating outputs as authoritative and delegating judgment to AI
  • Extrapolating to futuristic predictions

Admittedly, I haven’t done a great job of distinguishing between excitement about AI and being captured by hype. Mainly because it’s something that you know when you see it. And trust me, someone who’s captured by AI hype is more than happy to tell you about it.

Two groups of users are the most likely to be caught up in AI hype. These are AI power users and people with little AI experience. People with little AI experience are the ones who merely parrot others’ opinions, and we won’t focus on them here. Power users, on the other hand, are the most susceptible due to the amount of cognitive offloading and constant interactions with AI tools. They assume they are “witnessing” a revolution that others simply don’t see.

The captured-by-hype scenario is clearly evident in the wake of the Claude Mythos announcement and project GlassWing. I wrote this article before the announcement, but it provides a solid example.

GlassWing Example

The number of cybersecurity people genuinely depressed over the Mythos and GlassWing announcement is strange. Everywhere I turn, speculation is rampant, with countless people claiming this is the end of cybersecurity.

Article saying the beginning of the end of cybersecurity

The other claim is that we are on the verge of a Vulnpocalypse, where Heartbleed-style vulnerabilities occur every week. This is speculation devoid of critical thinking. The realities are far more mundane.

I remember when cybersecurity people were more skeptical. We used to make vendors prove their claims before we took them at face value. We would do our own evaluations and see the results for ourselves, but that’s no longer the environment we’re in. Now people are falling all over themselves to be the marketing arm for these companies.

For more details on this topic, see my reasoned take on Mythos and GlassWing for the ModernCISO.

I admit, I may be totally wrong, and all the speculation may be true. Maybe cybersecurity is about to be solved. It’s certainly not impossible, nor is the prospect of a Vulnpocalypse, but it’s not likely. The advancements are more likely a step improvement than an exponential one. There are plenty of problems to go around, and the world is a complex place. So, let me make a prediction: cybersecurity isn’t about to be solved. At least, not anytime soon.

Erosion of Defenses

We need to reclaim the ability to keep two thoughts in our heads at once: that AI can be incredibly useful and simultaneously overhyped. This is difficult in the current era, which has deteriorated our defenses.

We need to reclaim the ability to keep two thoughts in our heads at once. The fact that AI can be incredibly useful and simultaneously overhyped.

I believe that three things have contributed to the erosion of our cognitive defenses.

  • The shift to a post-literate culture
  • The effects of modern communication technologies
  • The destruction of our attention

These cultural changes are leaving us defenseless in the age of hype and doom. The craziest thing is that people don’t realize this is happening to them. Marshall McLuhan stated that every augmentation is a self-amputation, creating a numbing effect that eludes recognition. We are witnessing this play out in real-time.

McLuhan quote about self-amputation forbidding self-recognition.

I break down what’s causing this capture into a few categories. Some may affect certain people more than others, but a combination of all of these factors is what’s driving smart people to be captured by AI hype.

  • Local Bias
  • Information Bubbles
  • Dark Flow
  • Overconfidence
  • Playing Around
  • Warped Rewards

Local Bias and Blowing Yourself Away

This is one of the earliest factors of AI hype. Back in 2023, at various conferences and events, I described the massive uptick in hype more broadly as people being bad at constructing tests and good at filling in the blanks. This leads people to blow themselves away with their experiments. So, when people asked ChatGPT for a recipe in the style of Shakespeare and received it, they were so blown away that they claimed LLMs will be more impactful on humanity than the printing press. What we are seeing today is just a more advanced version of this.

In my completely unscientific observation, I seem to have isolated the rise in smart people getting captured by AI hype to the uptick in Claude Code usage. Many underestimate the extent to which people are losing their minds over Claude Code. In some cases, their usage is fueling delusions. There are people publishing markdown files, thinking that they are changing the world or revolutionizing business. We are led to believe that markdown files will create the first billion-dollar solopreneur.

People are also blown away by other people being blown away. Every day, it seems people are happy to share that a family member, significant other, parent, or anyone without technical skills was able to generate something. Mind blown. 🤯

These people then carry this perspective forward into all sorts of predictions about business and the world. This ends up in perspectives like the SaaSpocalypse, the SOCpocalypse, and the idea that AI is eating, destroying, and reducing to rubble “x” industry. All of this demonstrates a lack of awareness of how the world actually works, as well as a discounting of the massive complexity involved. These are aspects smart people used to recognize, but the results of their experiments have led them to suspend disbelief in much the same way we do while watching Matt Damon survive on Mars.

We’ve completely lost our ability to reflect because anyone who reflects on these topics would see these obvious issues. For a further breakdown, I’ve covered these issues in relation to the SaaSpocalypse in The Death of Software is Greatly Exaggerated.

I can already hear the response now, “But the software works!” Of course, the software works. If it didn’t work, it wouldn’t warp perspectives. Functional software in small experiments isn’t the point. Even the fact that people find the applications they built useful isn’t the point. The point is the lack of awareness of what this actually means in the grand scheme of things, and of how insignificant an individual’s experiments and one-off applications are to the world as a whole.

To extrapolate a tiny experiment out into the perspectives that companies in the future won’t buy software because they’ll just build it themselves on the fly, or to think that companies won’t have employees in the near future, is where the delusion enters.

We have something that resembles a software self-esteem movement. Everyone is told that an idea and some vibe coding are all they need to make millions of dollars. Is it impossible? Of course not. However, is this something likely to scale? Absolutely not. Remember, exceptions will always be pointed to as the rule. People win the lottery, too.

We have something that resembles a software self-esteem movement.

We have smart people who now believe that code is the only thing that matters at a company. Or even that code is the hard part at the company, and if the code is right, the rest will fall into place, never mind the use case or problem to be solved in the first place.

This condition reminds me of people who stated that global warming couldn’t be real because it was cold where they were at that moment. Regardless of anyone’s perspective on climate, the reasoning behind the response is silly. First, it mistakes weather for climate. Second, it assumes the effects are equal and stable across geographic locations. Reality wouldn’t change the very real perception of the person who was cold that day, just as it won’t change the perception of the person with the successful Claude Code experiment. In many ways, vibe coding and major economic predictions completely align with our attention-poor environment.

This condition reminds me of people who stated that global warming couldn’t be real because it was cold where they were at that moment.

People are outsourcing their entire thought process and even memories to these tools. Only someone laboring under a delusion would think this would end well for them. To a certain extent, this may come down to the feeling of productivity. I’ve written about this illusion of productivity before, both here and here.

Being productive means more than just doing stuff or doing more stuff. Someone can vibe code for an entire weekend and write more code than they’ve ever written, but it doesn’t mean they were productive. But somehow, the feeling of doing more has counteracted the critical ability to evaluate productivity.

Information Bubbles

Information bubbles are an effect of modern communication technologies. These can be traditional filter bubbles from social media, as well as bubbles that people create themselves in private chat groups on platforms like Discord.

People are encasing themselves in these bubbles, planning to burst forth like butterflies from cocoons as billion-dollar solopreneurs. Except they burst forth into a complex world that doesn’t resemble the simplistic one they created. Social media filter bubbles certainly play a role, but the bubbles people proactively choose to enter may have a greater effect.

There are private chat groups where members jazz each other up. Quite often, they aren’t exposed to contradictory information and perspectives. When contradictory evidence makes it into the bubble, they explain it away as a group.

Being in an information bubble doesn’t automatically make someone wrong, but it significantly increases the likelihood that they are. People in bubbles are often surprised when things they believed turn out to be wrong, but they often reframe the evidence and their perspective to claim they were right all along. I know, welcome to the Internet.

Overconfidence

Overconfidence is a foregone conclusion in the age of AI. It doesn’t matter how smart you are. Overconfidence is one of the inevitable byproducts of the cognitive illusions created by the personas of personal AI.

I remember reading a paper a couple of years ago in which researchers showed participants a trivially informative video of a pilot landing a plane, inflating participants’ confidence that they could do the same. This is Dunning-Kruger in full effect.

Frank Landymore had a great line in one of his articles: AI is democratizing the Dunning-Kruger effect. It’s one of those lines you hate yourself for not coming up with first, but it really does summarize what we are seeing in the AI era.

This effect was obviously going to be a foundational aspect of AI usage. And we are seeing people overestimate their abilities when using AI. But this isn’t constrained to having confidence in the presence of a tool. It’s the tool’s psychological effects outside of its usage as well.

Addiction and Dark Flow

We often underestimate the addictive nature of AI tools. When people think of tools like Claude Code, many things spring to mind. Addiction is probably not one of them, but this is something I’ve witnessed myself. It’s now common to hear of people not sleeping and not eating, binging on all-night coding sessions with AI tools. The FOMO is real, but what they are building is not.

Slot machine memes related to vibe coding have been around for a while now.

Vibe coding slot machine meme.

Fascinatingly enough, the comparison between slot machines and technology dates back to the 1950s. Jacques Ellul made this very same analogy back in 1954, and it fits right into the current conversation. Ellul was commenting on how humans participate less and less in technological creation, reduced to a catalyst. He went on to say, “Better still, he resembles a slug inserted into a slot machine: He starts the operation without participating.”

Ellul points out the true lack of human participation in the process, but the addition of gambling takes this to another level.

In her excellent article on dark flow, Rachel Thomas from fast.ai makes some key points relating these issues to AI coding tools.

The first is loss disguised as a win. The article discusses this in the context of a multi-line slot machine, stating that:

On a traditional slot machine, you either win or lose. In contrast, multiline slot machines have 20 rows going at once and reward partial “credits” that create a false sense of winning even as you lose. For example, you can gamble 20 cents and receive a 15 cent “credit”. This is actually a 5 cent loss, yet the slot machine plays celebratory noises that trigger a positive dopamine reaction.

This same condition happens with vibe coding and requires subsequent pulls of the one-armed bandit. The signals are just as misleading, too, as it may not be apparent for quite some time whether the code produced is actually any good.
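The arithmetic behind a “loss disguised as a win” is simple enough to sketch. The function and numbers below are purely illustrative (taken from the quoted example, not from any real machine), but they show how the machine’s celebratory feedback decouples from the player’s actual outcome:

```python
# Illustrative sketch of "loss disguised as a win" on a multiline
# slot machine. The function and field names are hypothetical,
# not taken from any real machine or library.

def spin_result(wager_cents: int, credit_cents: int) -> dict:
    """Summarize a spin: what the player actually won or lost
    versus what the machine's feedback suggests."""
    net = credit_cents - wager_cents
    return {
        "net_cents": net,
        # The machine celebrates any payout at all...
        "machine_celebrates": credit_cents > 0,
        # ...even though only a positive net is a real win.
        "actual_win": net > 0,
    }

# The example from the quote: gamble 20 cents, receive a 15 cent credit.
result = spin_result(wager_cents=20, credit_cents=15)
print(result)  # net_cents is -5: the machine celebrates a loss
```

The gap between `machine_celebrates` and `actual_win` is the whole trick: the feedback channel reports payouts, not net outcomes.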

Second, Thomas points out that “With ‘junk’ (or ‘dark’) flow we lose our ability to accurately assess our productivity levels and the quality of our work.” This condition contributes to the other categories we’ve discussed, mainly, blowing yourself away with your experiments.

Thomas goes on to state that vibe coding often violates the same characteristics of flow that gambling does, making three points:

  • Vibe coding does not provide clear clues of how well one is performing (and even provides misleading losses disguised as wins).
  • The match between challenge level and skill level is murky.
  • It provides a false sense of control in which people think they are influencing outcomes more than they are.

This final point echoes the one Ellul made in the 1950s: a misconception of agency. The article contains many more points and is a must-read.

Playing Around

At a recent conference, in reference to OpenClaw and Moltbook, I said that what we were seeing was just people playing around with toys, and that I wouldn’t sit around watching people play with Legos either.

There are millions of people running OpenClaw. What are they actually doing with it? Who knows. They are just playing around. OpenClaw, like many agents of its type, has no killer use case, so you end up with people doing things just to do things, like hooking up their email or price-checking items, all things they do because they built the system to do it, not because it actually solved a problem. It’s Maslow’s Hammer, enhanced by a strong emotional attachment to the hammer.

It’s Maslow’s Hammer, enhanced by a strong emotional attachment to the hammer.

This “game” aspect isn’t lost on some people.

Building instead of games

This scenario isn’t necessarily bad as long as you recognize what it is. Unfortunately, many people follow this path to a delusion. They assume that what they are playing with will change the world, have some massive external effect, or make them rich. Instead, we get code for the sake of code.

They also assume that the same iota of satisfaction they feel will scale equally across people, and they mistakenly believe that building and producing code are measures of being “productive.” As I previously mentioned, measuring productivity by lines of code has become a technological fallacy.

Burning tokens on what

Where is all this code? Where are all the new killer applications? So many commits, so little effect. In a vast majority of cases, it’s like chucking pennies into a digital wishing well, only instead of pennies, it’s thousands upon thousands of dollars in tokens. This aligns with what I’ve dubbed the Slop Architecture and with the misconception that “ideas” are what’s truly important in any of these scenarios.

It’s like chucking pennies into a digital wishing well, only instead of pennies, it’s thousands upon thousands of dollars in tokens.

There are and will certainly be exceptions. This isn’t the point. The point is that people will cite exceptions and claim they’re the rule.

In a certain sense, this whole thing is a game, or at least gamified. The playing around, generating code, and then talking about it publicly has the feeling of everyone playing a gigantic MMORPG. It’s like watching people talk about playing Warcraft all day and discuss their campaigns. Only it’s much more isolating than Warcraft. It’s just you and a sycophantic non-human entity.

In reality, you are engaged far more in playing a game than you are in vibe coding. As Ellul points out, you are just a catalyst.

To be fair, playing around is an essential part of learning. When it comes to new technologies, arguably, it’s the most essential part. The problem here isn’t the playing around, it’s the accompanying delusion. Nobody playing Guitar Hero thinks, “Wow, I’m Steve Vai now! Let me book my world tour.” But add AI, and it’s all vibes now.

Warped Rewards

Simply put, smart people want to be seen as being ahead of the curve. This is a powerful intoxicant and shouldn’t be underestimated. They want to point back to things and say, “See. I was right!”

Due to their successful experiments and the mountain of positive press, they feel they know which way the wind is blowing, so they attempt to move to the head of the line. Critics like myself, on the other hand, are left feeling like Diogenes walking into a theater.

This is also a result of the warped reward systems of the modern communication environment. We often reward people for being bold, not for being right. We also reward people for posting hot takes and being reactive instead of reflective.

Conclusion

We need to recapture the ability to keep more than one thought in our heads at the same time. There’s no doubt that AI can and will be disruptive, yet it can also be overhyped. In five years, will the landscape change? Sure. Things are moving fast, and we’ll have to be adaptable. But, will it be completely unrecognizable? I doubt it.

AI is all about trade-offs, and we need to be mindful that outsourcing so much of our cognitive processing to AI tools can have far-reaching negative impacts. This is something that many are unprepared for today due to the erosion of defenses and the inability to recognize the conditions.

This whole article is about recognizing these conditions. It’s not that vibe coding is bad, or any of the conditions outlined are automatically bad. It’s when we don’t recognize what they are and allow them to warp our perceptions of reality that things get bad. Unfortunately, we are very bad at recognition. Welcome to the democratization of AI psychosis.

