Since this site focuses on risks and trade-offs rather than shiny, utopian use cases, there is some confusion about my thoughts on AI. Be it the AI of the present or the AI of the future. With this post, I stake out my position on AI and its advancement so I don’t have to keep restating it. Also, it’s good to write down your beliefs and confront yourself with them. Sometimes what you think you believe isn’t what you actually believe. Maybe I’m a secret AI bro after all. Utopia, here we come!
AI acceleration has become a cult or religion, and offering any criticism of its advancement is taken as a personal attack. In many ways, if you swapped out AI for cryptocurrency, the theme would revive a familiar tone.
Even with this post, I’ll undoubtedly be pegged as an AI hater because I don’t have pictures of myself lying prostrate in front of a pile of GPUs. Any time you shed light on hype or bullshit, people are willing to label you a hater. That’s the easiest thing for them to do, and it takes no mental effort and requires no skill. With that said, here we go.
On AI Advancement Summary
I’ve been using this image in my conference presentations since 2023. The focus of my talks is on risk, which means talking about problems and challenges most of the time rather than amazing use cases. I wanted to show the audience that I don’t hate the technology.

The problem with being in the middle is that both sides typically frame you as an extremist. You don’t hate the technology enough for one side or love it enough for the other. Realities on the ground typically hover somewhere in the middle between extreme claims. This isn’t rocket science.
For those who honestly don’t care about the rest of this post and made it this far, here’s a quick summary:
- I’m not a skeptic, I’m a critic
- Yes, I think today’s AI can be useful
- Yes, there are some use cases that I’m hopeful for
- No, I don’t think today’s AI is AGI
- Yes, I think AGI is possible
- No, I don’t think LLMs will lead to AGI
- I think even AGI will have vulnerabilities
- I’m not so sure about the concept of ASI or the intelligence explosion
I’m Not a Skeptic, I’m a Critic
In the current era, I’ve seen people frame themselves as skeptics either of technology or of AI. This is not how I frame myself. I consider myself a critic, not a skeptic. I’ve joked that I’m a hype critic. Throughout my career, I’ve offered criticism of the state on cybersecurity, emerging technologies, and, especially, product manufacturers and their claims.
I believe that technology does need a better class of criticism. The tech press, in large part, has abdicated its responsibility, choosing instead to mindlessly parrot opinions from tech leaders. The entire world has been a gigantic sycophantic feedback loop. This is something I’ve called out many times myself and something that Karl Bode calls “CEO Said A Thing!” Journalism.
Most people criticizing the state of technology are insufferable. The few valuable points they present are wrapped in politics, bullshit, and, in some cases, conspiracy theories. Their goals aren’t to effect change but to pander to their audience. The very people who need to hear these points are the very ones who would never listen to them in the first place.
I have no vested interest in any technology’s success or failure, and I’m certainly not pandering to an audience. Hell, if I wanted to cultivate a large following, the last thing I’d be spending time on is writing. I’d start a podcast or YouTube channel, align my content with people’s biases, and go all out, telling them what they want to hear. I’d also use AI to write my content to up my pace. It’s the new definition of “productivity.”
One of the criticisms I get is that I shouldn’t be listened to because I don’t love AI enough, which is a strange perspective. Would you really trust someone who is selling you something, or who is completely head over heels in love with a technology? Are you going to get honest criticism? Of course you wouldn’t. But the whole premise of that argument doesn’t make sense. That’s a lot like saying someone isn’t religious enough because they don’t have enough religious bumper stickers on their car.

Framing The Conversation
Much of the debate around future AI advancement revolves around two questions after a claim is made:
- What specific technology is being discussed?
- When will it arrive?
So much confusion is caused by not clarifying the two follow-up questions. Many throw out the term “AI” as a catch-all, referring to any technology, present or future. For example, the claim that AI will cure cancer. Okay, but what specific AI technology? Is it technology we have today? Some future technology that hasn’t been invented yet? And of course, most importantly, when will this happen? The precision matters.
When asking people to provide some precision regarding their claims, it’s not uncommon to find that people aren’t talking about AI at all. They are talking about magic. For others, they are purely saying “something” will happen at “some point.” Which is basically saying nothing. Back in January of 2024, I published a framework for making sense of human AI predictions, which goes into a bit more detail on this topic.
I do believe that many of the claims made by proponents will be realized at some point through future technological advancements, some even with the technology we have today. I’m certainly hopeful about cures for debilitating illnesses, and a lot of work has already been done. I don’t think we are miles away from seeing those results. This is an example of something I’m hopeful for. Call me an optimist??? However, I don’t know what technology, under what circumstances, or when.
I’m sure so many people thought it was inevitable that by 2015, we’d have hoverboards in common use after watching Back To The Future 2. Our perspective on technological advancement is often skewed and off by a wide margin. It’s always good to keep this in mind.
LLMs
I certainly don’t hate LLMs. I find LLMs useful for various tasks, mostly coding tasks, basic research, and troubleshooting. I occasionally will use them to generate some AI slop images for a blog post or conference presentation. Pinning down my exact usage is a bit hard, since LLMs aren’t my first port of call for every problem. After all, I value my critical thinking skills, skills that people these days seem content to discard.
I never use LLMs for common cognitive tasks and never have an LLM decide for me or write anything on my behalf. I also never have an LLM summarize something I’m trying to understand, because knowledge and understanding aren’t generated from bullet points. The friction is the point in so many tasks where we look to reduce it.
The hype with LLMs hasn’t been commensurate with the realities on the ground. LLMs certainly have their uses. Just like me, people are finding them valuable for a variety of tasks. In my own industry (cybersecurity), there are positive examples in offensive security, vulnerability identification, and assisting analysts in security operations centers. You can also tune LLMs more effectively for specific tasks, which will have a positive effect. However, there are limiting factors to LLMs.
The first is the cost of failure for the use case. LLMs have relatively high failure rates, and when connected in agentic systems, these failures can cascade through the system. Failures compound like interest, to use the words of Demis Hassabis. I mean, the thought of ChatGPT running air traffic control is terrifying.
Second, they are highly manipulable. This is why everyone from startups to hyperscalers has had their AI-based applications hacked. This fundamental manipulability is baked into how LLMs operate. It’s why we have things like prompt injection, and adding LLMs to applications increases their attack surface. This condition is why I’ve described AI Security as a misnomer in the age of generative AI. You aren’t defending the AI. You are defending the application or use case against the effects of adding AI. This is a different problem.
These two factors can be misleading, though. We don’t need AGI-level capabilities for LLMs to be useful or to replace people in their jobs. The moment an LLM-based system is mediocre enough to replace someone, companies will rush to replace people. This is especially true if the cost of failure is lower for a particular job. Although reports of recent layoffs are nothing but AI washing, we are getting a glimpse of what will happen once capabilities arrive.
My biggest concern with LLMs isn’t what they can do for people, it’s what they do to people. I believe we are vastly underestimating the negative cognitive impacts these tools are having and will have on people in the future.
My biggest concern with LLMs isn’t what they can do for people, it’s what they do to people.
AGI
I do believe that AGI is possible, but I don’t think that today’s LLMs will be what gets us there. When do I believe AGI will arrive? 15 to 20 years. Put my precision on this at about 70%. I put the likelihood of LLMs alone becoming AGI at about 15%. But keep in mind, these are mostly guesses guided by intuition and actualities. Caveat: I’m not involved in developing AGI, and the world is a complex place that defies predictions. However, I do believe some factors will confound advancements for a while.
First, I don’t feel LLMs will lead to AGI, and this is where all of the focus seems to be at the moment. Second, I think there is a massive AI investment bubble. The amount of money being invested is nowhere near the value created. This bubble will pop at some point, hopefully not spectacularly. Companies like OpenAI will very likely go out of business. They are hemorrhaging money, and their shares are becoming almost impossible to unload on the secondary market. I mean, they put out a statement about focusing on business, and then just bought a podcast. Not exactly a shining indicator of future success.
I bring this up because this crash will cause some reluctance to invest in the future. Maybe it won’t quite be an AI winter, but it will be an AI fall with colder weather and a lot fewer leaves on the trees. This may stall the advancement toward AGI.
I think some people think that LLMs will go away after the investment bubble pops, but this is nothing but wishful thinking on their part. LLMs are genuinely useful for certain tasks and will continue to be. Also, LLMs are so essential to some people’s identity now, you’ll have to pry them from their cold, dead hands.
When it arrives, I do believe AGI will have vulnerabilities, even if they are not immediately apparent. This would be especially true if AGI were built on today’s LLMs or if it weren’t a single large system but a network of systems. Once deployed, we’d be stuck with these vulnerabilities. This is a perspective I’ve shared publicly for years in my talks and keynotes. There may be something about generalizing to the world that contains inherent vulnerabilities. I have a draft post on this topic that I’ll publish in the future. Unfortunately, I have dozens of posts in draft and only so much time.
ASI and The Intelligence Explosion
Strangely, we haven’t even achieved AGI yet, but labs are already bragging about how we are close to artificial superintelligence (ASI). Okay, it’s not strange, that’s just how hype works. We seem to forget that ASI is a speculative technology, and speculative technology leads to speculative bullshit.
To sum it up, despite believing that AGI is possible, I’m not so sure about ASI. Or at least ASI as it’s traditionally been discussed. I’m not quite sure I can put my finger on exactly why. It’s more of an intuition I have rather than any one specific thing. Of course, I may be the one now talking nonsense.
I think my hesitancy stems from conceptions of ASI, the resources required, and the plateaus that would be encountered. We are told we get there by just packing in “more” and “better”, whatever the more and better happen to be, and this cycle will continue forever. But I don’t think we’ll scale our way there, and we still have to contend with the laws of physics and resource constraints. This is why Ray Kurzweil thinks we need to pave over the universe to create computronium.
I do believe that some recursive self-improvement is possible, but only to a point. Maybe we’ll get to something like an AGI+ but not ASI as it’s traditionally been discussed, with its planet-eating power requirements and its continual recursive self-improvement. However, there is one thing I can say for sure: something will be labeled ASI long before it’s possible. Maybe someone will buy a podcast to promote that perspective! Who knows.
Since I’m unsure if ASI, as it’s been defined, is even possible, I’ll put the odds of reaching ASI in the next 50 years at 10%. But feel free to chalk this up to me saying, “I don’t know,” and disregard everything I’ve said.
What Happens Next
I’ve left no doubt about my pessimism about what happens next and how it’s not good for humanity. Much of the content on this site focuses on that topic. And, no, I don’t think a super-capable AI will see humans as a nuisance and eliminate us. Sorry, Eliezer Yudkowsky, but our manifested problems will be much more mundane.
It’s we, humans, who plant the seeds of our own downfall. When massive unemployment occurs (which may happen well before reaching AGI), there will be no recourse. The so-called abundance movement won’t deliver the value it promises. Many will fall into the “sucks to be you” gap that I’ve defined previously. A segment of the population will remain pinned there, possibly for a generation. This is purely due to incentives and the reluctance or inability to do anything about it. I’ll have more to say on this in the future.
Also, people continue to cede their critical thinking skills to AI. By far my biggest concern is the collapse of culture amid homogenized AI outputs and people’s inability to think independently. I see people more concerned with collecting data than with understanding it. The idea that someone becomes wiser by collecting more data or by engineering a better retrieval system is nonsense. If you need an AI to tell you what you think or believe, you’ve made a fundamental error.
In a previous post, I mentioned the story of Calvisius Sabinus, who, in an attempt to appear learned, devised a shortcut. It didn’t work out so well for him, and this new strategy won’t for us. I have much more to say on this topic as well. But that’s all for now.


