Trying to keep up with the pace of AI advancement can make you feel like the Blown Away Guy from the old Maxell commercials. Tech leaders and influencers tell us to expect artificial superintelligence in the next year or so, even doubling down on inevitability by moving up their timelines. The world is cooked. This perceived inevitability, combined with uncertainty, is leaving many people on edge, and rightly so. If all the things the tech bros hope for come true, humanity is in a terrible spot.
All is not lost. The over-the-top predictions we are bombarded with daily often amount to nothing more than performance art. The tech media frequently parrots perspectives from people who have a vested interest in selling us stuff. After all, why would they ever embellish or lie?
In early April, the AI 2027 scenario was making the rounds. For those unfamiliar, you are in for a treat. It reads like the answer to what would happen if you locked a few tech bros in a conference room for a day and deprived them of reality and oxygen.
Are the scenarios outlined in AI 2027 impossible? Certainly not. A fast-takeoff scenario like this is possible, but it’s highly unlikely. I predict the whole AI 2027 thing will start looking pretty silly in late 2025 or early 2026.
With all this endless AI advancement hype, I was happy to see a new article by Arvind Narayanan & Sayash Kapoor titled AI as Normal Technology. This article doesn’t talk about how AI displaces the human workforce or about a super-intelligent AI taking over the world, but rather about how AI becomes a normal technology that blends into the background of our daily lives.
They also touch on a few other topics, such as overregulation. I agree that any regulation should be specific and targeted at use cases rather than painted with broad strokes; that specificity leaves less room for regulatory capture or weaponization of the rules. The tech leaders are right that heavy-handed regulation can stifle innovation, and targeted regulation lets us protect people without doing so.
It’s a good read, well thought out and well researched. For anyone mainlining AI hype, it’s essential. The scenarios in AI as Normal Technology are far more likely than the ones in AI 2027.
Questioning The One True Faith
Starting in early 2023, I began adding a slide with the following image to my presentations. I did so because any criticism of the pace of advancement was treated as an affront to a spiritual belief: since I didn’t believe that LLMs would lead to AGI or ASI, I must hate the technology outright. That couldn’t be further from the truth.

Saying that LLMs won’t become ASI isn’t a blasphemy that requires self-flagellation afterward. We don’t need AGI or ASI for these tools to be effective. People are already using them to solve problems and augment their jobs today. So, why turn AI beliefs into a religion? People act like questioning any part of the narrative makes someone a non-believer or some disconnected fool. The reality is that not questioning the narrative or exercising any skepticism is what makes someone a fool. A gullible fool at that.
There’s a strange group that thinks belief is required for AI to create a utopia, but the reality is that facts don’t require belief. It’s ancient wisdom from five minutes ago that we seem to have forgotten in the vibes era.
I believe what we’re encountering here is a problem of perception, caused by both our environment and ourselves.
Environment
In his book Nexus, Yuval Noah Harari describes the witch hunts as a prime example of a problem that was created by information and made worse by more information. People may have doubted the existence of witches, having never seen any evidence of witchcraft, but the sheer amount of information circulating about witches made their existence hard to doubt. We are in a similar situation today with beliefs about AI advancement, made worse because the systems we use reduce the friction of information sharing, making it much easier to get flooded with all sorts of information, especially digital witches.
We humans also gravitate toward information that is more novel and exciting. It’s the reason why clickbait works. However, novel and exciting information often doesn’t correlate with the truth or reality. As Aldous Huxley pointed out in Brave New World Revisited, “An unexciting truth may be eclipsed by a thrilling falsehood.” We are in this situation again. The vision of near-term artificial superintelligence is exciting and novel, even when people talk about it destroying humanity. AI, thought of as normal technology, as Narayanan and Kapoor put it, is boring by contrast, despite being more realistic.
The same condition held back in the days of the witch hunts. Believing that witches were roaming the countryside looking to corrupt everyone, and that you had to use your wits and your faith to defend yourself, is a lot more novel and exciting than acknowledging that life really sucks because of the lack of food and indoor plumbing.
But then, there’s another strange type of information we gravitate towards: people telling us what we want to hear.

Ah, yes. Evals as taste. Vibes above all. Skills inessential.
We have allowed the people selling us stuff to set the tone for the conversation about the future. These people have a vested interest in selling us a certain perspective. It’s like taking the salesman’s word on a car’s performance and long-term reliability instead of checking it against objective reality. I wrote about this last year, saying that many absurd predictions were nothing more than performance art for investors. The tech media needs to step up and start asking some real questions.
Many influencers and people on social media parrot the same perspective as the people selling us stuff because of audience capture. Audience capture, for those unfamiliar, is the phenomenon where an influencer is shaped by their audience, catering to what they believe it wants to hear. This creates a positive feedback loop, leading the influencer to express ever more extreme views and behaviors. People get more likes and clicks by saying more exciting things (the thrilling falsehood Huxley described), so there’s a perverse incentive to do so.
Lack of Reflection
One of my biggest concerns is that we’ve lost our ability to reflect. Many things we believe are silly upon reflection. Unfortunately, our current information environment conditions us to reward reaction over reflection. Until we address this lack of reflection, we’ll continue to be fooled in many contexts, not least of which is the pace of AI advancement.
Benchmarks
Many of the benchmarks people use for AI are not useful predictors of real-world performance, because the world is a complicated place, full of additional complexities and edge and corner cases that benchmarks don’t capture. Even small error rates can have significant consequences. But don’t take my word for it, take it from Demis Hassabis: “If your AI model has a 1% error rate and you plan over 5,000 steps, that 1% compounds like compound interest.” All of this adds up to much more work ahead, not superintelligence next year.
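To make that compounding concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not Hassabis’s math) of how an independent 1% per-step error rate erodes the odds of an error-free multi-step run:

```python
# Rough illustration: probability of completing a plan with no errors,
# assuming each step fails independently with the same small probability.
def error_free_probability(per_step_error_rate: float, steps: int) -> float:
    return (1.0 - per_step_error_rate) ** steps

for steps in (10, 100, 1_000, 5_000):
    p = error_free_probability(0.01, steps)  # the 1% error rate Hassabis cites
    print(f"{steps:>5} steps -> {p:.6%} chance of an error-free run")

# Roughly: 10 steps ~90%, 100 steps ~37%, 1,000 steps ~0.004%,
# 5,000 steps effectively zero.
```

The assumption of independent, identical errors is crude, but it shows why long, multi-step tasks are so much harder than benchmark-sized ones.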
Us
Fooling Ourselves
We have a tendency to fool ourselves easily. As I’ve said many times, we are very bad at constructing tests and very good at filling in the blanks. The tests we create for these systems end up being overly simplistic. Early on, people tested model capabilities by asking for recipes in the style of Shakespeare. Hardly a difficult test, but an easy one to be impressed by.
This condition is also why every new model appears immediately impressive on release, followed by a drop-off when reality hits. Sometimes new models even bring new problems, such as OpenAI’s o3 and o4-mini hallucinating at higher rates than previous models.
We are also easily fooled by demos, not realizing that they can be staged or, at the very least, conducted under highly controlled conditions where the variables are managed in ways they never are in real-world deployment.
Oversimplification
We humans tend to oversimplify everything. After all, almost half of the men surveyed thought they could land a passenger plane in an emergency. This oversimplification leads us to underestimate the jobs that others do, possibly seeing them as a task or two. So, when ChatGPT passes the bar exam, we assume that lawyers’ days are numbered.

This oversimplification is also exploited by companies trying to push their wares. This claim is more absurd performance art. No, there will not be any meaningful replacement of employees next year due to AI. The reality is that most jobs aren’t a task or two but collections of tasks. Most single-task jobs have already been automated. It’s why we don’t see elevator operator as a current career choice.
Being Seen as Experts
Many people share content in order to be seen as experts. If you don’t believe me, have you logged in to LinkedIn lately? This adds to the massive amount of noise on social media platforms, and most of it is just parroting others.
This also extends to the tech media. I wish journalists would bring a modicum of skepticism and ask hard questions instead of writing articles about model welfare and how we should treat AI models. But once again, novelty over reality.
Conclusion
We are witnessing people attempting to shape the future with vibes and hype, the opposite of evidence. It certainly doesn’t mean their vision of the future is wrong, but it sure as hell means it’s a lot less likely to happen. Reality is a lot more boring than dystopian sci-fi.
I do believe these tools can be disruptive in certain situations. If we’re being honest, much of the disruption is happening in all the wrong areas: creative arts, entertainment, music, and so on. We’ve already seen these tools disrupt freelance marketing and copywriting jobs, areas where the cost of failure is low. Niches will be carved out in more traditional work, too. So, even without AGI and ASI, disruption can still happen.
However, the predictions made over the past few years have been silly and absurd. If you believed many of the people peddling these views, we would be rolling out universal basic income right now because of all the job displacement from AI. That’s certainly not the case. Many of these same people resemble doomsday cult leaders preaching the end of the world on a specific date, only to move the date into the future after a digital divine intervention. The reality is, this is vibe misalignment. It’s not only going to continue but increase before it levels out, because investors don’t invest in normal or boring.
Let’s all take a breath, reflect, and maintain our sanity.