Social media is flooded with the same hot take: software is dead! Yup, that’s right, the world runs on software, but applications are either in the grave or the ICU with the cardiac monitor flatlining. It only takes a modicum of reflection to see through this illusion. But our modern world rewards reaction, not reflection, so everyone reacts. This is fueled by the fact that many tech journalists have abdicated their responsibility, leaving us with a world where people are consuming the equivalent of digital bath salts. Are we witnessing the death of software? Let’s find out.
Everyone Is Saying Software Is Dead
The new hotness is to spout the phrase “software is dead.” Everyone is doing it. If you close your eyes and pretend to live in a fantasy world with unicorns and sorcerers, it almost makes sense. Unfortunately, in our modern world, a basis in reality is not a prerequisite for making an impact.
Over a week ago, the stock market began taking a major haircut on software stocks, with a one-day loss of $285 billion. This was dubbed the SaaSpocalypse. It seems investors aren’t sure whether software products will exist in the future, and the AI bros are hyped. Honestly, when are the AI bros not hyped? Investors are convinced that, in the future, people will build their own software rather than purchase it. So long, SAP and Salesforce! You had a good run. If this were true, it would be a major shift, since the world runs on software. But as usual, this is mostly stoked by cluelessness and perverse incentives.
Here is the creator of OpenClaw saying that 80% of apps will disappear.

That’s right: reach for number, pull directly out of ass. His reasoning is fascinating, since he recycles the same tired examples we’ve heard for years: making a restaurant reservation. Which I’m pretty sure we have the technology to do today. Seriously, the guy built this viral agent with claims of transforming the world, and dinner reservations are the best he’s got? However, I do like the dystopian twist of having your agent get a human to stand in line for you.
Sam Altman thinks this guy is a genius. Just goes to show you how absolutely desperate OpenAI is. That Anthropic Super Bowl commercial really hurt him.

Not to be outdone, here’s Mustafa Suleyman pivoting this into an AGI prediction. Just when you thought we were done with the term AGI for a while, it’s back stronger than ever. He’s predicting “professional-grade AGI” in the next 12 to 18 months.

Hmmm. Why are these predictions always 12 to 18 months? Because that’s long enough to generate the hype that fuels investment, long enough that the prediction can’t be checked in the short term, and long enough for people to forget it was ever made.
At this point, it’s fair to assume the tech press has abdicated all responsibility. They just mindlessly parrot this nonsense without any questioning or due diligence. They repeat these statements knowing full well that hype is in their best interest.
And then there are all of the countless attention-mongering influencers selling their own unique brand of horseshit. Like this guy.

By the way, these people follow a familiar pattern to manipulate viewers. First comes the thumbnail: a dumb look on their face paired with a clickbait headline, which psychology says increases the likelihood that people will click. Then they lay the foundation by stating some history or facts. By stating these up front, they lower your defenses and critical thinking, since the facts appear to give them credibility. Then comes their own unique brand of nonsense.
However, it is funny to consider that maybe the thing Anthropic actually broke was ClaudeCode, right after their latest funding round.

Here’s another rando agreeing with… checks notes… Mark Cuban? Well, both Mark Cuban and this person are dead wrong. The next decade belongs to security professionals because this technology is insanely insecure. More on this later. However, I do love the pitch that this dude is going to save your business with a single Mac mini and OpenClaw. Bold.
These claims are nothing but a combination of clueless ramblings and pure unadulterated bullshit. This doesn’t bode well if your goal is to align with reality.
The Software Environment
Let’s first define what we mean by “software” in this context. By software, we mean software that you purchase from a vendor or a SaaS (Software as a Service) solution you subscribe to. This could be everything from simple apps you purchase on the App Store to large enterprise applications like SAP.
So, what’s the claim? In short, the claim is that people and companies will stop buying software because they can just use AI to build it themselves.
There’s no doubt that tools like ClaudeCode and Codex are getting quite good. Many people are discovering software for the first time, writing what could be described as more elaborate examples of “Hello, World!” programs. Some may claim this is a disingenuous comparison because “Hello, World!” programs merely print the words “Hello, World!” and some of the things that people are building actually perform some task or tasks. Fair enough.
I’d argue that these still represent Hello, World! applications because the people developing them have little understanding of the language and mechanics. The difference is that nobody who writes a traditional Hello, World! program would claim to be an expert in the language because they wrote one. However, now we have Hello, World! applications powered by Dunning–Kruger.
The ease with which these tools create apparently working code has fooled many people. I mean, here are some folks from CNBC who know nothing about programming blowing their own minds.

The fact that they don’t know anything about software engineering is precisely the point. It’s the same kind of leap people made when they asked ChatGPT for a recipe in the style of Shakespeare and said that LLMs were more impactful on humanity than the printing press.
But to avoid any confusion, let me acknowledge a couple of things. One-off software and scripts can be incredibly useful. Also, experienced developers are finding LLMs useful in their development process too. So, I’m not claiming tools like ClaudeCode or Codex are useless or have no value in the software development lifecycle. I’m not even claiming that vibe coding is useless, especially for rapid prototyping. My point is that reality still exists, and reality is what’s constraining in this context. Much of what we are seeing is people just playing with toys.
When it comes to individuals building their own software for personal use, I think tech people are in a bit of a bubble. For example, here is a statement I read from Andrej Karpathy this morning.
TLDR the "app store" of a set of discrete apps that you choose from is an increasingly outdated concept all by itself. The future are services of AI-native sensors & actuators orchestrated via LLM glue into highly custom, ephemeral apps. It's just not here yet.
Having a world of composable pieces scattered across the digital landscape, requiring users to connect and use them, is not the dream for end users. They don’t want ephemeral software that they have to construct themselves and then figure out how to host or run. They just want software that works. Some people enjoy tinkering with software; most don’t, just like some people enjoy tinkering with cars and changing their own oil while most don’t. The same could be said of IKEA furniture. At least with IKEA furniture, you get directions, not “Here’s a bunch of stuff, you figure it out.”
The Death of Software?
Given this, will software disappear? Of course not. There are many reasons for this, and it takes only a moment of reflection to surface them. First of all, nobody is going to vibe code or gen smash Salesforce or SAP. This is true no matter how good the tools become. Development requires much more than just a UI, some simple functionality, and a few prayers.
Software engineering is a lot more than simply writing code. There is architectural work, debugging, feature enhancements, improvements, hosting, and more. There is also the human aspect of translating users’ requests into real features that meet their needs. It’s more than just copying what someone else did. Not to mention, there is value in incorporating input from people at other companies into a product, which you wouldn’t get by developing internally.
But ultimately, how much effort is someone willing to expend to save $10 a month? Are you really saving $10 a month in the end? Say it takes $1,000 and a week of squashing bugs to vibe code the application. Only then do you start saving anything. Even if that were the end of the story, it would still take years to recoup your costs, and you’ve created a pile of technical debt on day one that nobody is focused on fixing. Meanwhile, development and features continue to be added to the app you were using, but not to the application you created.
The impression that software is built and forgotten, like a one-off script, is a myth. This is especially true for enterprise applications, which require an ongoing process of updates and maintenance. Many have no idea how complex this becomes when no one knows what’s happening inside an application, which is exactly what happens when you use AI to build it. It gets even more complex when the app itself also uses AI as part of its functionality, creating conditions where nobody knows what code is going to execute at runtime. I’ve pointed this condition out before.
There’s no evidence that these tools can create robust applications over time as feature, functionality, and bug-fix needs arise. Imagine waking up one day to the enterprise applications you count on to make money not working, you don’t know why, and your human team doesn’t know why, and your AI agent doesn’t know why. All so you could save $10 a month.
There’s also a bit of a mirage here that software disappears with new workflows, when the opposite happens. Let’s take the OpenClaw dude and his example of using your agent to book you a reservation at a restaurant. You use your agent, but your agent may use a service like OpenTable to book a reservation for you. This doesn’t remove OpenTable; instead, OpenTable becomes middleware. In many of these cases, old applications become middleware and remain in place. So, more code, not less.
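To make the middleware point concrete, here is a minimal sketch of that reservation flow. All the names here (ReservationClient, book, agent_book_dinner) are hypothetical, not a real SDK; the point is only that the “agent” is a new layer of code stacked on top of the existing service, which keeps running underneath.

```python
# Hypothetical sketch: the "agent books dinner" flow still routes
# through the existing reservation platform. More code, not less.
from dataclasses import dataclass


@dataclass
class Reservation:
    restaurant: str
    party_size: int
    time: str
    confirmed: bool = False


class ReservationClient:
    """Stand-in for an OpenTable-style service: the old app, now middleware."""

    def book(self, restaurant: str, party_size: int, time: str) -> Reservation:
        # A real service would handle an HTTP API call here; either way,
        # the platform still exists and still does the actual booking.
        return Reservation(restaurant, party_size, time, confirmed=True)


def agent_book_dinner(request: str, client: ReservationClient) -> Reservation:
    """The 'agent' layer: interprets the user's request, then delegates."""
    # A real agent would use an LLM to parse the request; hardcoded here.
    return client.book(restaurant="Example Bistro", party_size=2, time="19:00")


booking = agent_book_dinner("book me dinner for two at 7", ReservationClient())
print(booking.confirmed)  # the reservation platform did the actual work
```

Nothing in this flow deletes the reservation platform; the agent just adds another layer that has to be written, hosted, and maintained.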
For many, the issues I’m calling out are obvious. But here’s something not so obvious. Companies can’t operate properly when everyone has conflicting insights from the same data. This creates disorganization and leads to poor business decisions. When everyone is building their own apps, there’s a risk that the same data is interpreted in conflicting ways.
As far as success goes, on a small scale, with simple applications, it’s very possible that people could build their own applications with AI tools. Let’s consider the humble Pomodoro Timer. Building a simple application to count off 25-minute increments would be relatively simple. However, you can find these applications for free, and even the ones that cost money are like $1.99 for an app that adds functionality and runs in your computer’s taskbar. So, although possible, it may not be practical. There’s always a cost vs. effort trade-off.
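For a sense of scale, the entire core of a Pomodoro timer is a sketch like the one below (interval lengths and the injectable `tick` callback are my own illustrative choices). This is roughly the class of app that AI tools generate easily, and also the class you can already get for free or $1.99, with a taskbar icon included.

```python
# Minimal Pomodoro timer sketch: the whole "app" is a loop of
# work/break intervals. Trivial to generate; also trivial to buy.
import time


def pomodoro(work_min: int = 25, break_min: int = 5, cycles: int = 4,
             tick=time.sleep) -> list:
    """Run `cycles` work/break intervals; `tick` is injectable so the
    timer can be tested without actually sleeping."""
    log = []
    for i in range(1, cycles + 1):
        log.append(f"cycle {i}: work {work_min} min")
        tick(work_min * 60)
        log.append(f"cycle {i}: break {break_min} min")
        tick(break_min * 60)
    return log


# Demo with a no-op tick so it finishes instantly:
print(pomodoro(cycles=2, tick=lambda seconds: None))
```

The gap between this and a polished product is exactly the part people underestimate: notifications, persistence, a UI, and running quietly in the taskbar.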
There have been some genuinely cool examples, too. Like Nicholas Carlini, who built a C compiler in Rust.
From the page:
I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V.
This isn’t some simple vibe coding example, and it’s impressive that we have tools to generate this today. However, even this cool example isn’t without its flaws, and that’s kind of the point of this post.
And of course, there will be outliers, too. I’m not claiming that using AI to develop alternatives is somehow impossible. It’s certainly possible, but what we are asking is whether it’s practical or well-advised. There will undoubtedly be companies that demonstrate how they saved money by developing their own in-house alternatives. This may happen in very specialized situations for very specific tasks, but the mistake is assuming these outliers are the norm. AI bros love to point to outliers as proof to justify their perspectives. Don’t fall for it. The question here is whether this happens at scale, which I believe is highly unlikely.
Keep in mind that the world and the use cases to which software is applied are highly complex. So many unforeseen circumstances surface when applying software to problems.
100% Chance Of Vulnerabilities
No matter what happens, I can say with 100% certainty that software vulnerabilities will be everywhere, from code generated by coding agents to the generative AI functionality built into applications. This is regardless of the success of applying coding agents and vibe coding.
Security is the cost of this spray-and-pray style of development. We never solved the problem of developers introducing vulnerabilities into software, and now we are encouraging everyone to be a developer using tools they don’t understand, creating more code than ever. This was a condition I called out before with the introduction of Copilot apps.
To summarize, we now have tools that people configure insecurely, introduce vulnerabilities into code, apply them to insecure architectures, and create outputs that the creators don’t understand. What could possibly go wrong? We will have a patchwork of vulnerable applications, which means anyone with minimal knowledge can manipulate the systems in unexpected ways.
The Other Side
So, what would my detractors say? First of all, they would tell you not to believe me because I just don’t love AI enough. Which is a very cryptocurrency way of dealing with criticism that makes no point whatsoever.
They may also claim that I don’t understand the current moment. To this, I’d say they are confused and possibly trapped in a filter bubble. They are extrapolating capabilities from simple functionality. We aren’t there yet, to which they’d reply, “Soon.”
Finally, they will claim that AI will just figure it out. This perspective treats AI far more like a magic wand than a technology. AI really hasn’t been figuring it out in the past few years. We haven’t solved any of the major issues with the technology, such as hallucinations and prompt injection. We’ve just been getting products that pretend these issues don’t exist.
At some point, we’ll have technology capable of doing all the things these people claim, but not soon, and probably not built on top of Generative AI. Admittedly, this is speculation on my part, but at least it’s speculation based on observation.
In short, these aren’t easy problems to solve. Otherwise, they’d be solved already.
Conclusion
It’s certainly possible that I’m wrong, and we see GenAI crush software. The world is an uncertain place, and sometimes innovations have a moment and snap into place. However, I wouldn’t run to Polymarket with this bet. Success would require much of the world’s complexity to evaporate. Enterprise software engineering is far more complex than people building simple tools give it credit for. My guess is that SAP and Salesforce will still be with us five years from now, barring idiotic business decisions. The death of software is greatly exaggerated.