People making predictions fall into three general camps: those selling something, delusional ignoramuses, and the rare thoughtful reflector. I’d like to think I fall into the last category, but since I’m not selling anything, I fear I may belong to the second. Regardless of category, we seem to forget that the world confounds prediction through complexity, even for the most ardent of reflectors.
Another common playbook in our era is to make so many predictions that some are bound to come true, then cite those hits as proof that you are an oracle. I see this happening frequently; it exploits our short attention spans. This isn’t magical foresight, it’s statistics.
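To see how far plain statistics can carry a pundit, here’s a minimal sketch with purely illustrative numbers I’m assuming for the example (25 independent predictions, each with a 20% chance of coming true by luck alone):

```python
import random

# Illustrative assumptions only: 25 independent predictions, each with a
# 20% chance of coming true by luck alone.
N_PREDICTIONS = 25
P_HIT = 0.20

# Probability that at least one prediction lands purely by chance:
# 1 minus the probability that every single one misses.
p_at_least_one = 1 - (1 - P_HIT) ** N_PREDICTIONS
print(f"P(at least one chance hit): {p_at_least_one:.3f}")  # ~0.996

# Quick simulation: how many "hits" does one random year hand the pundit?
random.seed(0)
hits = sum(random.random() < P_HIT for _ in range(N_PREDICTIONS))
print(f"Chance hits in one simulated year: {hits}")  # typically around 5
```

With numbers like these, a handful of hits is close to guaranteed, which is all an oracle needs if the audience forgets the misses.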
Regardless of my opinion on tech predictions, people seem to love hearing them. While I was at the AI Security Summit in London, several people asked me for my predictions for 2026, since in my keynote I described hype shifting back to embodied systems. I guess I asked for it. But please don’t listen to me or anyone else making predictions about 2026. Well, at least I’m not trying to sell you anything.
I think people have an instinct that 2026 feels more uncertain than 2025. There is a sense of desperation in the air as companies push to prove there is no AI bubble by wallpapering everything with AI.
Now that I’ve complained about making predictions and how uncertain 2026 feels, here are my predictions/vibes/observations for 2026.
1. Agent Double Down
“No, no. Last year wasn’t the year of the agent. THIS year is going to be the year of the agent.” I can already hear people course-correcting from last year’s predictions. 2025 was supposed to be the year generative AI took off, delivering massive layoffs and tons of revenue. Instead, we hear speculation that the AI bubble is about to pop.
Despite the ongoing issues and high manipulability of agents, people will continue to double down. We didn’t resolve any issues with agents in 2025, so they’ll be with us again in 2026. But with the doubling-down efforts, people will try to convince you that the issues are solved or didn’t matter much in the first place.
Most business leaders who ask for agents and insist on using AI have no idea how the technology works, what it’s capable of, or the associated risks. This is not a recipe for success. Deploying this technology successfully requires a firm understanding of capabilities and realities on the ground. Of course, having appropriate expectations helps too. This isn’t happening, as MIT found when they identified that 95% of GenAI pilots failed.
I’m not claiming that agents are useless. They have their uses and can be employed in certain scenarios to augment human activities. And yes, this can be done successfully. What I’m saying is, they aren’t the utopian, headcount-reducing technology we were promised in 2025, and the data bears this out.
The truth is, if your use case has a low cost of failure and can tolerate errors and manipulations, you don’t need to wait for a new innovation. You can deploy agents today. How well they perform, on the other hand, is a different story. Performance will vary by use case and environment.
2. Embodiment Hypes Again
Although the hype around generative AI will continue in 2026, we’ll see much more hype around embodied systems. Embodied systems are those that interact with and learn from the real world. Think robots, self-driving cars, drones, etc. This category is certainly no stranger to hype.
Embodied systems are always ripe for hype because they tend to be more tangible and less behind-the-scenes. There will undoubtedly be some real improvements in this area. Unfortunately, those real improvements will provide ammunition for the hype cannon. Any modest improvement will be held up as evidence of exponential progress. For example, Elon Musk recently said robots wouldn’t just end poverty, but also make everyone rich. Utopian abundance is often talked about but never rationally explained.
3. Security Issues Continue To Rise
Security issues will not only persist but also accelerate. How can they not? More AI writing more code, and more of that code being pushed by inexperienced people, is a recipe for security issues. To quote the late American philosopher Billy Mays, “But wait, there’s more!” As more applications outsource functional components to generative AI, the applications themselves become highly vulnerable.
Unknowns will continue to plague applications and products, leading to security issues. If you’ve seen any of my conference presentations over the past couple of years, you’ll have heard me talk about these unknowns. For example, we now have conditions in which developers don’t know what code will execute at runtime.
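As a toy illustration of that unknown (hypothetical code, not any specific product’s architecture), here’s what it looks like when an application delegates a functional component to a model and runs whatever comes back; the developer can’t tell from the source what logic will actually execute:

```python
# Hypothetical sketch of "code decided at runtime": the application asks a
# model for a snippet and executes whatever it returns. The developer who
# wrote this file cannot know, from the source alone, what will actually run.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; in production this string is produced by
    # a model and can be steered by whatever data ends up in the prompt.
    return "result = sum(x * 2 for x in data)"

def handle_request(data: list[int]) -> int:
    generated = call_model(f"Write Python that aggregates: {data}")
    scope: dict = {"data": data}
    # The executed logic is whatever the model returned, not what's in the repo.
    exec(generated, scope)
    return scope["result"]

print(handle_request([1, 2, 3]))  # 12 today; something else if the model's output shifts
```

The point isn’t that exec is always present; it’s that any pattern where model output drives behavior leaves the developer unable to enumerate the code paths in advance.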
We security professionals aren’t doing ourselves any favors. Much of the guidance on AI security is overly complex, doesn’t align with real-world use cases, and doesn’t help organizations realize value quickly. We are not rising to the occasion.
4. AI Backlash Builds
AI backlash will continue to build in 2026. A vast majority of people on the planet find tech bros abhorrent. Talking about technology as if it’s magic, and CEOs foaming at the mouth to replace people, leaves a bad taste. Shoving AI into every possible crevice of our existence isn’t something most people want, either. Yet we are getting AI in everything, whether we want it or not.
2026 will be a challenging year for tech companies. They have to prove their investments are paying off. As we enter the fourth year of the generative AI craze, companies are still hemorrhaging money. This will lead to more intense claims, hype, and AI in everything. Backlash will certainly result. As to what form this backlash takes or how big it becomes, it’s anyone’s guess.
5. Negative Human Impacts Gain More Attention
When you mention the topic of AI’s negative impacts on humans, people almost universally think of job displacement. However, job displacement isn’t even the most significant of those impacts. The human impacts of AI have been a focus of mine for years. They are the main focus of Perilous.tech, where I’ve covered topics such as cognitive atrophy, skills decline, devaluation, dehumanization, and on and on.
I believe more people are recognizing the human impacts of AI, and the topic will receive far more attention in 2026. Today, the most extreme examples, such as suicides and AI psychosis, get all of the attention, but this is starting to shift.
I recently saw Jonathan Haidt mention these cognitive and developmental issues, referencing both Idiocracy and The Matrix, two references I’ve also made in the past couple of years. These are natural conclusions once you consider the facts on the ground. AI can make you stupid and overconfident in an environment that already seems saturated with stupid and overconfident people.
6. OpenAI’s Device Flops
OpenAI is working on a device, and it’s going to be the most world-changing thing ever. It will demonstrate that OpenAI absolutely has a moat. After all, they’ve hired Jony Ive! You can sense my sarcasm.
I’m not sure what form OpenAI’s device will take, or even whether it will launch in 2026, but it’s rumored to be a small, screenless device with a microphone and camera. This road has been traveled before; the Humane pin and the 01 Light are a couple of examples. These devices failed for the same reason OpenAI’s will. It’s not that they lacked capabilities; it’s that they directly conflicted with culture. We have a screen-based culture, and now OpenAI expects people to give up their screens? No chance.
People are accustomed to having their experiences mediated, and screens are a large part of that. There’s an idealized vision of people wearing these devices and using them to make sense of the world. Unfortunately, in our current culture, people aren’t curious about the world and don’t look at it with a sense of wonder. They want to transform the world into content. Everyone on the planet now has a camera eye, and nobody is going to trust a wearable to frame their content.
The device will also be visible to others, so it will signal something about you as a person, and what it signals is nothing good. In addition, if the device has a microphone and camera, public shaming will further lead people to either abandon it or avoid purchasing it altogether, regardless of its functionality.
There’s also the verification aspect. People have become accustomed to degraded tech performance, and they won’t want to just talk at their neck and hope the device takes some action on their behalf. They’ll want to verify.
Remember the GPT Store? Yeah, nobody else does either, including the influencers who claimed it was the new App Store. We’ll get overwhelming hype followed by a belly flop the size of the US economy, regardless of whether the device launches in 2026 or 2027.
Conclusion
Buckle up, we aren’t through the hype yet. We are in an era where faith in gods is replaced by faith in tech, and people can gamble on the mundane aspects of daily life. 2026 is going to be weird.