Perilous Tech

Risks at the Intersection of Technology and Humanity

AI Performance Art and Absurd Predictions

The past few months have witnessed a rash of completely absurd AI predictions. These claims come not from the usual suspects but straight from tech leaders themselves, which lends them further legitimacy. What people fail to realize is that these are pieces of performance art, performances enacted not for you but for a singular audience: investors.

AI Performance Art

When tech leaders and personalities make podcast appearances or speak at events, they aren’t talking to you or the audience in front of them. They are creating performance art for investors. This has always been the case, but the effort has been stepped up considerably in the past month with some mind-numbing statements.

You can see a small sample of these performances below. Trust me, there are a lot more.

I respect Anthropic and their work, but Amodei’s statements here are nonsense. You read that right: not AGI, but ASI by 2026 or 2027. As a reminder, 2026 is basically a year away. If he believes this (which I doubt), it’s based on vibes, not actual evidence or observations.

He’s just talking Schmidt. This is certainly the dream. However, the fact that LLMs are “good at code” doesn’t automatically lead to recursive self-improvement. Even if we see promising experiments, they will likely be too unreliable or too vulnerable to put into production.

Ah, there he is. That’s right, we’ve been getting 10x improvement every year. You might ask where this has been happening, which would be the correct question. 😆

Not to be outdone by Elon, how about 10,000x smarter than a human? I mean, what does that even mean? These numbers are just made up and absurd. These ridiculous exponential increases are something I’ve already made fun of in the past.
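
For a sense of how quickly these multipliers compound, here’s a back-of-the-envelope sketch. It’s purely illustrative: the 10x-per-year rate comes from the claim above, while the five-year horizon is my own assumption, not anything these speakers have specified.

```python
# Purely illustrative: compounding the "10x improvement every year" claim.
# The five-year horizon below is an assumption made for the sake of the
# arithmetic, not something the people making these claims have specified.
for years in range(1, 6):
    print(f"After {years} year(s) at 10x/year: {10 ** years:,}x")
# By year 4 you're already at 10,000x, the same kind of number being tossed
# around for "smarter than a human." Generating impressive-sounding
# multipliers is easy; measuring them against anything real is another matter.
```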

Speaking of silly exponential numbers, there was a rumor that someone at OpenAI said Orion, OpenAI’s next model, would be 100x more powerful than GPT-4. If it were, it wouldn’t be called Orion. It would at least be called GPT-5, and people wouldn’t shut up about it. Here’s a prediction. Orion’s performance will disappoint because people’s expectations are far higher than what will be delivered. The expectation is GPT-5, not GPT-4.1.

Genuflect in front of thy server farm, lest thy models collapse!

Someone may have uttered that deep learning is divine because it starts with a “D,” but they didn’t mean it literally. Oddly enough, the lack of shame with which he delivers these lines is really something to behold. Although it seems like there’s a mini Altman hype man inside his head controlling the words coming out of his mouth, in reality, it’s probably because OpenAI is projecting losses of 14 billion dollars in 2026. Ouch! He needs people to believe, to have faith. Preach!

Even when Altman and others talk about the potential of their technology to destroy humanity, it’s a sales pitch. They claim their technology is so good and so powerful it could wipe us all out, so please give us money. This is something I referred to before as the human extinction humble brag.

This is the same behavior we made fun of when the crypto bros did it, but we now take it seriously because it’s AI. Say what you want about the crypto bros. At least putting Dogecoin on the moon is possible. Finding god lurking in gradients is something else entirely.

Oh yeah, there they are. No comment necessary.

None of the previous statements are grounded in reality. They are all bullshit. And whenever someone is bullshitting, it’s hard to determine whether they actually believe their statements or not. The world is far more complex than we give it credit for, and it’s also true that sometimes an unexpected innovation comes along and changes everything. This is what they all hope for: that some innovation clicks before the clock runs out on the investment. Or, in Altman’s case, divine intervention.

The sad part is that almost everyone will forget these silly predictions. No doubt many have forgotten about them already. There is never any accountability, and yet people continue to hang on their every word. The problem is that there is no single place where these predictions are collected and presented as the bullshit Picassos they are. If there is, please let me know.

Why Now?

Why the increase in hype-laden statements? Until recently, AI hype had been mostly self-fueling, but 2024 has brought unwanted criticism to the generative AI space. I noticed things starting to take a turn in July, when Goldman Sachs released its report, Gen AI: Too Much Spend, Too Little Benefit?

After this report was released, the media began publishing more critical assessments of generative AI, spelling out that the craze might be a bubble. But that’s not the worst of it.

If you’ve watched any of my conference presentations this year, you’ve probably heard me talk about the performance plateau in large language models. My point is that if you are hoping for much more capable models to solve your problems, they aren’t coming any time soon. This plateau was obvious in the data but went unacknowledged; people are noticing it now. This doesn’t mean LLMs are useless; people are using them for a variety of tasks today. What it does mean is that if you require greater capability and reliability, you may be waiting a while.

Now, news reports like this one from Bloomberg cover diminishing returns, and other articles talk about a strategic shift toward other mechanisms to address the slowdown. Of course, none of this is reflected in the leaders’ wild predictions.

Combine this plateauing with the fact that a trained model appears to be the fastest-depreciating asset in history, and the picture doesn’t look good.

When you look at the financials, why train new foundation models yearly when the benefit is so low? Maybe as a marketing exercise or some other activity unrelated to model improvement, but the costs don’t seem to align. As I mentioned earlier, OpenAI is projecting losses of 14 billion dollars in 2026. This hemorrhaging of money is unsustainable.

But all of this is rather Orwellian. We are told to reject the evidence of our eyes.

No, AGI Isn’t Imminent

Here’s a graphic from Reddit charting predictions of when we’ll achieve AGI. Demis Hassabis is the one on the list I’d take most seriously. DeepMind is a serious AI lab doing serious work and not putting all its eggs in one big LLM basket. I still think these are mostly guesses with some hope mixed in. The reason Kurzweil is close to Hinton and Hassabis is that he went The Price Is Right route and picked his number because it was one less than 2030.

However, tech leaders know that predictions like these trigger influencers. Influencers are the hype agents trying to get people stoked, and when people are stoked, investors take notice. The social media feeds of many supposedly serious people are turning out to be pretty embarrassing and will be even more so in a year or two. If anyone had any attention span left, that would be worrisome.

Quite a lot of truth is found in this simple statement from Pedro Domingos. Many assume that because things like LLMs have so much information, they must be close to AGI. But instinctively, we know that access to information isn’t knowledge. Otherwise, everyone with a web search would be a genius. Then again, Pedro’s comment aligns with my biases, so I guess I have to be careful.

Hype Has Consequences

You might ask why I care about any of this. Well, it’s because hype has consequences. The inevitable outcome of all this hype is that technology gets shoved down our throats. Generative AI is easy to manipulate and potentially unreliable, a cocktail for disaster in high-risk applications. The danger is that we rush something that appears to be working into production and hope for the best. Over the next couple of years, we’ll see a push to cram generative AI further into the systems and processes we use on a daily basis, including high-risk and safety-critical systems.

This push won’t be based on generative AI being the best tool for the job but on the need for monetization. Tech companies need to show some return on the monumental investment they’ve made, so the push becomes another form of performance art for investors. They are throwing a plate of spaghetti at the wall and hoping that a noodle sticks.

Why do you think there is an increased coziness with the US government? They don’t see an opportunity to make a difference; they see dollar signs. Things like DOGE and Sam Altman co-chairing the new mayor of San Francisco’s transition team are like asking drug dealers for guidance on prescribing drugs. Despite this, I truly hope DOGE succeeds, because if it fails, it will be bad for a lot of people, so my fingers are crossed.

Government streamlining and modernization are noble goals, and I think AI and automation certainly play a role, but it’s about choosing what’s best for the people these systems serve. In this scenario, you are optimizing for different things that may not be intuitive in a traditional business sense. These are real systems affecting real people, not toy examples in the lab.

I joked that this could lead to some strange Kafkaesque nightmare in which people are stuck in a loop, unable to get a resolution. Or an algorithm that works wonderfully at saving money because it denies people benefits. This is easy to shrug off if you don’t require government assistance, but it’s an entirely different story for people who rely on it or when a disaster strikes. These scenarios of updated systems and reduced staff may appear to work and deliver on their promises immediately after implementation but fail spectacularly when they are needed most. We caught a glimpse of this with the Healthcare.gov launch, and that was just a website.

But China Tho

Typically, you get the But China Tho argument when there’s any pushback. This argument states we must remove all the brakes and accelerate into oblivion because of the risk of China getting to AGI first. Damn the harm, full speed ahead.

However, if we could squeeze some extra performance out of a car by removing the steering wheel, we still wouldn’t do it, because we understand something simple: a car’s performance isn’t solely about acceleration, and neither is AI’s. Acceleration is bad if the vehicle is speeding in the wrong direction.

Recently, the U.S.-China Economic and Security Review Commission put out a report recommending the creation of a Manhattan Project-like program dedicated to racing to and acquiring an AGI capability. One section of the report includes this:

Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership.

There’s a predictable outcome if something like this moves forward. Agendas and ulterior motives will co-opt the project, which won’t set the United States up for success. There’s also a current tunnel vision around LLMs that has people deep in the sunk cost fallacy.

The United States’ strongest assets are its tech companies. Despite my criticism of their hype and their lack of respect for privacy, they are vital to the success of the US economy. I’m also highly critical of the sentiment some have adopted that we should “break up the tech companies.” I’m not a tech critic; I’m a hype critic. However, setting up a massive pot of money that they can draw from like an ATM is not something I’m in favor of either.

Here’s something else to think about. What if, while we maintain a relentless hyper-focus on LLMs, China (or another country) gets to AGI first by focusing on other approaches? This is a real risk.

I may have to eat my words at some point if AGI does sprout from LLMs. It’s certainly not impossible. However, if we cobble together something that resembles AGI from generative AI, it will most likely be AGI based on toothpicks and bubblegum. What I mean is a whole lot of patches, layers, plugging, and human intervention.

My AGI Prediction

Okay, so now it comes to me. What’s my AGI timeline prediction? Well, I predict we’ll have AGI by—

Of course, I’m not going to answer that. I’d just be guessing with no evidence, like many of the others I’ve highlighted. I have no particular insight, and I’m not working at a research lab trying to build AGI. Despite this, I have some thoughts related to my area of expertise.

The last slide of my keynote at Agile DevOps USA in October mentioned AGI. Discussing that slide, I made a few statements about how I didn’t think AGI would be built from LLMs and that it probably wouldn’t arrive by 2026, or possibly even by 2029. So, I guess that’s as close to a timeline prediction as you’ll get from me on AGI: not when I think it will happen, but when I think it won’t. I’m certainly not an AGI skeptic; it’s possible and will happen.

More importantly, I predicted that no matter what form AGI takes, it will be vulnerable to attack and manipulation. I mentioned that this would be especially true if it were built on top of LLMs (remember: toothpicks and bubblegum). Maybe something about generalizing across many tasks in the real world inherently creates vulnerability. This is something I mentioned back in February of 2023.

To make matters worse, we may be stuck with the vulnerabilities that get identified because there is no fix. Think of adversarial policy attacks. We’ve all heard of AlphaGo beating Lee Sedol at Go. However, most don’t know that even average Go players can beat superhuman Go AIs using adversarial policy attacks. Yes, the stakes are low in the game of Go, but this is a cautionary tale.

Combine these potential issues with the fact that humans don’t do a good job of finding vulnerabilities in a system before it is launched into production, and we have a recipe for lingering problems. When these lingering problems are in high-risk systems, disasters are only a couple of steps away, and there’s not much we can do about it.
