Perilous Tech

Risks at the Intersection of Technology and Humanity

A cowboy on a horse trying to lasso dumpster fires

The past couple of years have been fueled entirely by vibes. Awash with nonsensical predictions and messianic claims that AI has come to deliver us from our tortured existence. Starting shortly after the launch of ChatGPT, internet prophets have claimed that we are merely six months away from major impacts and accompanying unemployment. GPT-5 was going to be AGI, all jobs would be lost, and nothing for humans to do except sit around and post slop to social media. This nonsense litters the digital landscape, and instead of shaming the litterers, we migrate to a new spot with complete amnesia and let the littering continue.

Pushing back against the hype has been a lonely position for the past few years. Thankfully, it’s not so lonely anymore, as people build resilience to AI hype and bullshit. Still, the damage is already done in many cases, and hypesters continue to hype. It’s also not uncommon for people to be consumed by sunk costs or oblivious to simple solutions. So, the dumpster fire rodeo continues.

Security and Generative AI Excitement

Anyone in the security game for a while knows the old business vs security battle. When security risks conflict with a company’s revenue-generating (or about to be revenue-generating) products, security will almost always lose. Companies will deploy products even with existing security issues if they feel the benefits (like profits) outweigh the risks. Fair enough, this is known to us, but there’s something new now.

What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve. This is new because it involves all risk with potentially no reward. These companies are hoping that users define a use case for them, creating solutions in search of problems.

What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve.

I’m not referring to the usage of tools like ChatGPT, Claude, or any of the countless other chatbot services here. What I’m referring to is the deep integration of these tools into critical components of the operating system, web browser, or cloud environments. I’m thinking of tools like Microsoft’s Recall, OpenAI’s Operator, Claude Computer Use, Perplexity’s Comet browser, and a host of other similar tools. Of course, this also extends to critical components in software that companies develop and deploy.

At this point, you may be wondering why companies choose to expose themselves and their users to so much risk. The answer is quite simple, because they can. Ultimately, these tools are burnouts for investors. These tools don’t need to solve any specific problem, and their deep integration is used to demonstrate “progress” to investors.

I’ve written before about the point when the capabilities of a technology can’t go wide, it goes deep. Well, this is about as deep as it gets. These tools expose an unprecedented attack surface and often violate security models that are designed to keep systems and users safe. I know what you are thinking, what do you mean, these tools don’t have a use case? You can use them for… and also ah…

The Vacation Agent???

The killer use case that’s been proposed for these systems and parroted over and over is the vacation agent. A use case that could only be devised by an alien from a faraway planet who doesn’t understand the concept of what a vacation is. As the concept goes, these agents will learn about you from your activity and preferences. When it’s time to take a vacation, the agent will automatically find locations you might like, activities you may enjoy, suitable transportation, and appropriate days, and shop for the best deals. Based on this information, it automatically books this vacation for you. Who wouldn’t want that? Well, other than absolutely everyone.

What this alien species misses is the obvious fact that researching locations and activities is part of the fun of a vacation! Vacations are a precious resource for most people, and planning activities is part of the fun of looking forward to a vacation. Even the non-vacation aspect of searching for the cheapest flight is far from a tedious activity, thanks to the numerous online tools dedicated to this task. Most people don’t want to one-shot a vacation when the activity removes value, and the potential for issues increases drastically.

But, I Needed NFTs Too

Despite this lack of obvious use cases, people continue to tell me that I need these deeply integrated tools connected to all my stuff and that they are essential to my future. Well, people also told me I needed NFTs, too. I was told NFTs were the future of art, and I’d better get on board or be left behind, living in the past, enjoying physical art like a loser. But NFTs were never about art, or even value. They were a form of in-group signaling. When I asked NFT collectors what value they got from them, they clearly stated it wasn’t about art. They’d tell me how they used their NFT ownership as an invitation to private parties at conferences and such. So, fair enough, there was some utility there.

In the end, NFTs are safer than AI because they don’t really do anything other than make us look stupid. Generative AI deployed deeply throughout our systems can expose us to far more than ridicule, opening us up to attack, severe privacy violations, and a host of other compromises.

In a way, this public expression of look at me, I use AI for everything has become a new form of in-group signaling, but I don’t think this is the flex they think it is. In a way, these people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.

In a way, these people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.

Advice Over Reality

Social media is awash with countless people who continue to dispense advice, telling others that if you don’t deploy wonky, error-prone, and highly manipulable software deeply throughout your business, then they are going to be left behind. Strange advice since the reality is that most organizations aren’t reaping benefits from generative AI.

Here’s something to consider. Many of the people doling out this advice haven’t actually done the thing they are talking about or have any particular insight into the trend or problems to be solved. But it doesn’t end with business advice. This trend also extends to AI standards and recommendations, which are often developed at least in part by individuals with little or no experience in the topic. This results in overcomplicated guidance and recommendations that aren’t applicable in the real world.

The reason a majority of generative AI projects fail is due to several factors. Failing to select an appropriate use case, overlooking complexity and edge cases, disregarding costs, ignoring manipulation risks, holding unrealistic expectations, and a host of other issues are key drivers of project failure. Far too many organizations expect generative AI to act like AGI and allow them to shed human resources, but this isn’t a reality today.

LLMs have their use cases, and these use cases increase if the cost of failure is low. So, the lower the risk, the larger the number of use cases. Pretty logical. Like most technology, the value from generative AI comes from selective use, not blanket use. Not every problem is best solved non-deterministically.

Another thing I find surprising is that a vast majority of generative AI projects are never benchmarked against other approaches. Other approaches may be better suited to the task, more explainable, and far more performant. If I had to take a guess, I would guess that this number is close to 0.

Generative AI and The Dumpster Fire Rodeo

Despite the shift in attitude toward generative AI and the obvious evidence of its limitations, we still have instances of companies forcing their employees to use generative AI due to a preconceived notion of a productivity explosion. Once again, ChatGPT isn’t AGI. This do everything with generative AI approach extends beyond regular users to developers, and it is here that negative impacts increase.

I’ve referred to the current push to make every application generative AI-powered as the Dumpster Fire Rodeo. Companies are rapidly churning out vulnerable AI-powered applications. Relatively rare vulnerabilities, such as remote code execution, are increasingly common. Applications can regularly be talked into taking actions the developer didn’t intend, and users can manipulate their way into elevated privileges and gain access to sensitive data they shouldn’t have access to. Hence, the dumpster fire analogy. Of course, this also extends to the fact that application performance can worsen with the application of generative AI.

The generalized nature of generative AI means that the same system making critical decisions inside of your application is the same one that gives you recipes in the style of Shakespeare. There is a nearly unlimited number of undocumented protocols that an attacker can use to manipulate applications implementing generative AI, and these are often not taken into consideration when building and deploying the application. The dumpster fire continues. Yippee Ki-Yay.

Conclusion

Despite the obvious downsides, the dumpster fire rodeo is far from over. There’s too much money riding on it. The reckless nature with which people deploy generative AI deep into systems continues. Rather than identifying an actual problem and applying generative AI to an appropriate use case, companies choose to marinade everything in it, hoping that a problem emerges. This is far from a winning strategy. Companies should be mindful of the risks and choose the right use cases to ensure success.

One thought on “Yippee Ki-Yay: Risk Over Reward in the AI Dumpster Fire Rodeo

Leave a Reply

Discover more from Perilous Tech

Subscribe now to keep reading and get access to the full archive.

Continue reading