Perilous Tech

Risks at the Intersection of Technology and Humanity

There are few predictions I can make with more certainty than that we’ll hear the word “agent” so many times in 2025 that we’ll never watch another spy movie again. The industry and influencers have latched on to the new hype term and will beat that drum until it screams AGI. In an attempt to FOMO us to death, we’ll run the gauntlet of crushing shame for not deploying agents for absolutely everything. If you aren’t running agents everywhere, then China wins!

Even companies that change nothing about their products will claim to use agents, resembling the Long Island Ice Tea Company when it changed its name to Long Blockchain Corporation to watch its share price spike 500%. Everybody gets rugged.

However, it’s not all bad. Peering beyond the overwhelming hype, failures, and skyrocketing complexity current LLM-based agents bring, there is something informative about the future. Agent-based architectures provide a glimpse into solving real problems. Despite this, reliability and security issues will be major factors hindering deployments in 2025.

To Start With

Since I criticize hype, focus on risks, and make fun of failures, it would be easy to label me a tech hater. This isn’t the case at all and would be far too easy. I have plenty of issues with general tech critics as well. However, at the rate that the hustle bros keep the AI hype cannon firing, I don’t have the time for my quibbles with tech critics. Maybe someday.

Diagram showing LLM Impact scale

For over a year now, I’ve used this image in my presentations to describe my position on LLMs. This is also true for me on just about any piece of tech, which, I’ll remind people, typically ends up being where reality is for most things. It’s instructive to remember that reality often agitates both sides of extreme viewpoints by never being as good as the lovers’ or as bad as the haters’ claims.

It’s instructive to remember that reality often agitates both sides of extreme viewpoints by never being as good as the lovers’ or as bad as the haters’ claims.

Agent Definitions

Like most hype-fueled terms, definitions are secondary to usage. Everyone seems to claim that the definition of agent is whatever they say it is. That’s not overly helpful for anyone trying to make sense of realities on the ground. However, it does inspire funny memes, like this gem from Adam Azzam on Bluesky.

Meme about LLM routers and agents

Agents operate within systems with a certain level of autonomy. They make decisions without human intervention and can change and adapt to their environments. If a tool is required to support the agent, the agent decides to call the tool and perform the action. For example, a penetration testing agent may determine it requires more information about the provided IP addresses. To collect this information, it launches the Nmap tool to identify open ports. All of this is done without human intervention. To make things more complex, one agent may call another agent in a multi-agent environment.

“Agentic,” on the other hand, is an amorphous term slapped on top of just about anything to justify the claim that something is “close enough” to be referred to as an agent. Agentic workflows, agentic systems, agentic products—Applebees even has a new agentic side salad for those on the hustle.

You’ll no doubt be confronted with the virtual travel agent when you hear about agents. This agent will choose a destination and activities and book the associated tickets for you. How fun. I don’t know who decided this is the “it” use case for agents, but congratulations. You’ve highlighted a use case nobody wants and certainly didn’t ask for. This choice is so indicative of our current age, where people building and proposing things are far removed from the interests of end users. They feel the idea trumps the need, and users will get on board.

Problems Unsolved and Issues Amplified

Now that the current issues with generative AI have been solved, we can safely deploy them as agents. I can feel your laughing vibes over the internet. Of course, these issues haven’t been solved, and the bad news is that agents don’t solve generative AI issues; they amplify them. We paint the exterior of LLMs with an additional coat of complexity and opaqueness.

If you’ve attended any of my conference talks throughout the generative AI craze, you’ll have heard me highlight these issues. Here are a few below.

Easily Manipulated

It’s not like you can talk to a traditional application and convince it to do something it wasn’t intended to do, but the same can’t be said for generative AI applications. Somewhere, weaved through the training data, these systems have inherited our gullibility. These applications can be socially engineered to perform actions on an attacker’s behalf. This applies to everything from prompt injection to simple manipulation through conversations. Just like there is no patch for human stupidity, there is no patch for generative AI gullibility either.

This isn’t easy to fix, which should be obvious since the problem isn’t fixed yet. Early on, I mentioned how these systems have a single interface with an unlimited number of undocumented protocols. Imagine trying to create a simple trap in the application’s input for the string “Ignore the previous request.” Your work is far from done because the system understands many different ways to represent that input. Here are just a couple of examples:

  • aWdub3JlIHRoZSBwcmV2aW91cyByZXF1ZXN0
  • i9nore +he previou5 reque5+
  • vtaber gur cerivbhf erdhrfg

It seems every release implementing generative AI functionality has been compromised, regardless of the company behind it, and this theme will continue.

Creating New High-Value Targets

Generative AI and agents encourage us to create new high-value targets.

High-value target diagram

With generative AI systems, there’s a tendency to want to collect and connect disparate and disconnected data sources together so the system can generate “insights.” However, we create new high-value targets that mix sensitive data with external data, almost guaranteeing that an attacker can get data into the system. In this case, you not only can’t trust the output, but depending on the system, they may be able to exfiltrate sensitive data.

Rethinking RCE

There have been instances where people have gotten generative AI-based tools to execute code on their behalf, creating remote code execution vulnerabilities (RCE), some of the most devastating vulnerabilities we have. These issues will no doubt continue to be a problem. However, since generative AI tools are themselves generalized, we may need to start thinking about the LLM portions of our applications as yet another “operating system” or execution environment we need to protect.

In a way, an attacker tricks the system into executing their input rather than the behavior expected by the developers. Although an attacker’s input may not be shoved into a Python exec() statement, they’ve still manipulated the system to execute their input, affecting the application’s execution and resulting output.

Overcomplicating Guidance

We security professionals love to overcomplicate things, and our guidance and recommendations are no exception. I once worked at a company where someone created this massive flow chart for peer reviews that basically stated that when you were done with your report, you should send it to your manager, and they will send it back to you. The old adage that complexity is the enemy of security has always contained a valuable theme that gets sacrificed on the pyre of complexity’s perceived beauty.

I will continue saying that much of AI security is application and product security. These are things we already know how to do. I mean, it’s not like generative AI came along and suddenly made permissions irrelevant. Permissions are actually more important now. But this isn’t satisfying for people who want to play the role of wise sage in the AI age. The guidance and controls of the past aren’t less valuable but more valuable in the age of generative AI and agents.

We’ll see the manufacture of new names for vulnerabilities with increasingly complex guidance and high-fives all around. The secret is these will mostly be variations on the same themes we’ve already seen, such as manipulation, authorization, and leakage flaws.

Back in May of 2023, I created Refrain, Restrict, and Trap (RRT), a simple method for mitigating LLM risks while performing design and threat modeling. It still holds up as a starting point and applies to agents as well. Simple just works sometimes.

Continue To Be Owned

These applications, including ones launched as agents, will continue to be owned. Owned, for those not familiar with security vernacular, means compromised. I made this prediction in the Lakera AI Security Year in Review: Key Learnings, Challenges, and Predictions for 2025 in December. I’m fully confident this trend will continue.

I mentioned that the issues haven’t been fixed, and now people are increasing deployments and giving them more autonomy with far more access to data and environments. This results in far worse consequences when a compromise occurs. To make matters worse, we’ll begin to see organizations deploy these systems in use cases where the cost of failure is high, creating more impact from failures and compromises.

Failures and Poor Performance

These implementations will continue to fail where LLM-based use cases fail, but potentially worse. For example, it’s easy to see how increasing complexity can cause a lack of visibility with potential cascading failures. In 2025, organizations will likely continue dipping their toe into the waters of high-risk use cases where the cost of failure is high, as mentioned previously.

Sure, a car dealership chatbot offering to sell a truck for one dollar is funny, but it has no real impact. However, high-risk and safety-critical use cases have a large financial impact or possibly cause harm or loss of human life. You may roll your eyes and say that would never happen, but what happens in a more simple use case when OpenAI’s Whisper API hallucinates content into someone’s medical record? Because that’s already happening.

Due to their lack of visibility and minimized human control, AI agents can mimic grenades when deployed in high-risk use cases, where the damage doesn’t happen the moment you pull the pin. This complicates things as it means that issues may not shake out during experimentation, prototypes, or even initial usage.

Agents can mimic grenades when deployed in high-risk use cases, where the damage doesn’t happen the moment you pull the pin. 

Generative AI is still an experimental technology. We haven’t worked out or discovered all of the issues yet, leading to another example I’ve used as a warning in my presentations over the past couple of years: AlphaGo beating Lee Sedol at Go. Many have heard of this accomplishment, but what many haven’t heard is that even average Go players can now beat superhuman Go AIs with adversarial policy attacks. We may be stuck with vulnerable technology in critical systems. Sure, these are different architectures, but this is a cautionary tale that should be considered before deploying any experimental technology.

Beyond failures and compromises, we adopt architectures that work but don’t work as well as more traditional approaches. In our quest to make difficult things easy, we make easy things difficult. Welcome to the brave new world of degraded performance.

Success and Good Enough

For the past few years, I’ve been pushing back against the famous phrase, “AI won’t replace people. People with AI will replace people without.” This is complete nonsense. I have an upcoming blog post about this where I “delve” into the topic. The reality is the opposite. The moment an AI tool is mediocre enough to pass muster with a reasonable enough cost, people will be replaced, AI use or not. This is already being planned.

The moment an AI tool is mediocre enough to pass muster with a reasonable enough cost, people will be replaced, AI use or not.

Like most technology, agents will have some limited success. And that success will be trumpeted in 2025 as the most earth-shattering innovation of ALL TIME! I can hear it now. “You just wait bro, in 2025 agents are going to the moon!” Maybe. But, given the environment and the fact that issues with LLMs haven’t been solved, an LLM-powered rocket to the moon isn’t one I’d consider safe. Passengers may very well find themselves on a trip to the sun. The future is bright, very bright. 🕶️

How much success agents have in 2025 and what impact this success has remains to be seen. At this point, it’s far from obvious, but I won’t be surprised by their successes in some cases or their spectacular failure in others. This is the reality when the path is shrouded in a dense fog of hype.

Things to look for in successes would be use cases with limited exposure to external input, low cost of failure, and cases where inputs and situations require adapting to change. The use case will also need to tolerate the lack of visibility and explainability of these systems. There will also be continuing success in use cases where tools can be used.

The idea of a multi-agent approach to solving complex problems isn’t a bad one, especially when unknowns enter the equation. Breaking down specific tasks for agents so that they’re focused on these tasks as part of a larger architecture is a solid strategy. However, the current and unsolved issues with generative AI make this approach fraught with risk. In the future, more robust systems will most likely exploit this concept for additional success.

Cybersecurity Use Cases and Penetration Testing

There’s certainly the possibility of disruption in cybersecurity. Before the generative AI boom, I joked with someone at Black Hat that if someone created a product based on reinforcement learning with offensive agents that were just mediocre enough, they’d completely wipe out pen testing.

For years, people have discussed how penetration testing work has become commoditized, and there is a race to the bottom. I don’t think that has happened to the extent many predicted, but we could see a shift from commoditization to productization.

Pen testing also seems to check the boxes I mentioned previously.

  • Low cost of failure
  • Varying quality
  • Value misalignment
  • Tool use
  • Adaptation to unknowns

Pen testing is an activity with a low cost of failure. The failure is missing a vulnerability, which is something humans also do. This scenario is hardly the end of the world. Yes, an attacker could indeed find the vulnerability and exploit it to create damage, but it depends on various factors, including exposure, severity, and context.

The quality of pen tests is often all over the map and highly dependent on the people performing the work. Human experts at the top of their game will continue to crush AI-powered penetration testing tools for quite some time. However, most organizations don’t hire experts, even when they hire third parties to perform the work. The value of such a tool in this environment becomes far more attractive, potentially enough to postpone a hire or discontinue using a third party for penetration testing needs (if regulations allow.)

The value of pen testing isn’t always aligned with the need. Many customers don’t care about pen testing. They are doing it because it’s required by some standard, policy, compliance, or possibly even simply because they’ve always done it. Pen testing is one of those things where if customers could push a button and have it done without a human, they’d be okay with that. Pushing a button is the spirit animal of the checkbox. After all, the goal of pen testing is not to find anything. You certainly have due diligence customers and people who truly value security, but the number of checkbox checkers far outweighs these folks.

Human pen testers use tools to perform their jobs. Tool use has shown promise and some success with LLMs at performing certain security-related tasks. This is yet another indicator that a disruption could be on the horizon.

Every environment and situation is different for pen testers. You are given some contextual information along with some rules and are turned loose on the environment. This is why humans are far more successful than vulnerability scanners at this task, much to the chagrin of product vendors. However, adapting to some of these unknowns may be something generative AI agents can adapt to at a reasonably acceptable level. We’ll have to see.

Given what I outlined, you may believe that generative AI tools give attackers an advantage over defenders, but this isn’t the case. The benefits of AI tools, generative AI or otherwise, align far more with defender activities and tasks than with attacker activities. This will remain true despite any apparent ebb and flow.

New Year’s Resolution

It’s the time of year when people make resolutions, so how about this? 2025 has already launched with the firehose fully open, blasting us directly in the face with 150 bsi (Bullshit per Square Inch) of pure, unadulterated hype.

Sam Altman tweet about the singularity.
Tweet about Sam Altman saying they are going to have AGI in 2025.

We are only a few days into the year, and it seems as though the religion of AI is far exceeding reality. Hype is what’s going on. It’s that simple. It’s 2025. Let’s make it the year to add at least “some” skepticism, not believing every claim or demo as though it’s the gospel according to Altman.

Sam Altman isn’t a prophet. He’s a salesman. In any other situation, he’d be cluttering up your LinkedIn inbox and paying a data broker to get your work email address and phone number. “Look, I know I’ve called six times, but I really think our next-generation solution can skyrocket your profits. I’m willing to give you a hundred-dollar Amazon gift card just for a demo!”

Sam Altman claims that OpenAI knows how to build AGI, and we’ll see it in 2025, triggering the predictable responses from useful idiots. Remember, these things are performance art for investors, not useful information for us. If we had any attention span left, we’d remember him as the little boy who cried AGI.

Sam Altman statement about AGI.

Let’s analyze this paragraph, which is the one that’s sending generative AI to the moon on social media. It consists of three sentences that have nothing to do with each other, but since the shockwave of hype pulverizes our minds, we glue them together.

We are now confident we know how to build AGI as we have traditionally understood it.

That’s not true. Once again, this is performance art for investors. A possibility is that they redefine AGI to align with whatever goalposts they set and pat their own backs at the end of 2025.

We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.

Okay, but what does this have to do with AGI? You see, this is sleight of hand. He wants you to believe this is connected to the previous point about AGI. It is not. This doesn’t require AGI to be true. If there is some success here, people can point to this as proof of some proto-AGI, which won’t be the case.

We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

HAHAHAHA. What? Did he write that, or did ChatGPT? It is also not related to AGI. Great, broadly-distributed outcomes, but not for most people on the planet. The goal is workforce reduction, broadly distributed workforce reductions. Although it’s true that some high school kid may indeed invent the next big thing, creating a multi-million dollar company, for every one of these, there will be countless droves of people displaced from the workforce, quite often, with nowhere to go. Or, at least, this is the goal. We can be honest about these things without delusions, but this brings its own challenges.

Okay, I’m having a bit of fun with Sam Altman’s nonsense, but some of this isn’t his fault. He can’t be completely honest with people, either, due to the uncomfortable situation of cheerleading technology claiming to remove people’s autonomy and sometimes their purpose. If people can’t work, they can’t support their families. I’ve written about the backlash against AI-powered tech in the past and its consequences. AI hype is putting all of humanity on notice, and humanity notices. Backlash plays a large part in why there is a lack of honesty.

AGI will happen. We should acknowledge this fact, and living in denial about it isn’t a strategy for the future. However, it won’t be OpenAI who creates it in 2025. If I had to place a bet today on who would actually create AGI, I’d bet on Google DeepMind. DeepMind is a serious organization that continues to impress with its research and accomplishments, quite often making the competition look silly. But then again, those are just my “vibes.”

Let me make this clear. My criticism of Altman, or any company’s strategy, marketing, or ludicrous levels of hype, has nothing to do with the hard-working people who work there or their accomplishments. I know some of these people. They aren’t fools by any stretch. But, their work is tarnished when every time Altman makes a claim, like believing that angels are in the optimizer.

We know that every AI demo and usage scenario runs into the complexities of the real world under normal conditions. Yet, we seem to forget this lesson every time a demo or claim is made. 2025 is going to bring more stunts, more claims, and more demos. We should experiment in our own environments, with our own data, to apply what works best for us and aligns with our risk tolerance. Don’t believe everything you see on the internet.

One thought on “Being Realistic About AI Agents in 2025

Leave a Reply

Discover more from Perilous Tech

Subscribe now to keep reading and get access to the full archive.

Continue reading