You might wonder why AI companies are working on seemingly simple and unimportant advancements in AI when there are much more significant problems to solve. Why would companies trying to create AGI get sidetracked by focusing on potentially already-solved problems? A couple of examples are OpenAI’s voice cloning, Google’s VLOGGER, and Microsoft’s VASA-1. For many, this research seems to have use cases only for fakes and frauds, but I believe this work signals something much deeper: that we could be near the peak of LLM capabilities. With AGI off the table, it is time to go deep and get very personal.
Peak LLM
Although you can do some cool things with LLMs, and we’ll no doubt see further applicability in other use cases, it’s a far cry from their touted value. You know what I’m talking about: the “more impactful than the printing press” crowd that still seems to swarm every conversation on the topic. These people talk about 10x, 100x, and even 1000x productivity boosts with LLMs. Compared to bold AGI claims and nonsense productivity levels, a 10% efficiency gain seems inconsequential.
The Wall Street Journal reported that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models last year but brought in only $3 billion in revenue. Ouch! There is reporting on the dismal outlook for generative AI, and some foresee a new Dotcom crash.
People have become more skeptical of these claims (as they should be), and the cracks are getting harder to ignore. You can’t believe the demos you see: many are highly controlled or manufactured altogether. Even the Sora demo that everyone lost their minds over wasn’t what it purported to be.
LLMs are under-delivering on their overhyped promises.
I don’t know what to think about the economic angle; it’s not my area of expertise. What I do know is that LLMs are under-delivering on their overhyped promises. Where that leads economically, I can’t say.
Many LLMs, including open-source models like Llama 3, are catching up to GPT-4. Even if they don’t match its performance exactly, they are close, which should tell us something. We may be hitting peak LLM capabilities. This means GPT-5 won’t be AGI or exponentially better than GPT-4. GPT-5 may be better than GPT-4 in some ways, but it will be far from a groundbreaking explosion of capabilities.
This lack of performance isn’t going unnoticed at the companies building the technology, either, which is why those looking to further monetize their AI investments need a new approach. There’s about to be a shift away from AGI (although they’ll still talk about it) and ever more capable models, and toward you. That’s right, you.
You’re Next
Just because we may be hitting peak LLM capabilities doesn’t mean things will stop. When you’ve reached the limit of going wide (general), you go deep (personal). This will be a sleight of hand: a shift from training ever-larger models on more data to create broad capabilities, to deeper, more personal integration.
These companies will make it all about you, not because you are the most important aspect, but because you are where the data is. With systems that sit closer to you and integrate more deeply with your data and activities, these companies hope to make their products stickier, with the beneficial exhaust of gaining access to all your data.
The hope is that an epiphany will sprout from your screen as you find the same tools you previously could take or leave now indispensable. Or maybe you’ll even fool yourself with the tech, as the public launch of ChatGPT showed. ChatGPT became a social contagion not because people found it so indispensable, but because we are bad at constructing tests and good at filling in the blanks.
But don’t take my word for it. Sam Altman has already started pivoting in this direction. Here’s what he says about the goal of AI: “A super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” That’s pretty creepy. But there’s more.
You can make the tech more sticky by allowing people to personalize and customize in more advanced ways. Technology like voice cloning and animating faces supports this customization aspect. When you can choose whoever you want to be your assistant’s AI avatar, you can anthropomorphize it more. How would you feel if a random stranger used your face and voice as their personal assistant? What about a family member? Is this creepier still? Oddly enough, it serves no purpose for the individual user. It doesn’t make the tool any smarter or more capable. It only exists to manipulate us or allow us to manipulate ourselves.
In the end, you’ll be blamed for LLMs’ lack of success by not allowing them to plunge deeply enough into your life. There’s a saying that if you don’t pay for something, then you are the product. Well, in the age of generative AI, you can pay for something and still be the product. The future’s so bright 😎
Even Deeper
AI companies are doing their best to make this technology unavoidable. We are getting AI whether we want it or not. It’s being baked into the very foundations of our computing systems, and even your humble mouse hasn’t escaped this integration.
How to deactivate these integrations will be anyone’s guess, as the flood of new integrations infects every application imaginable. A security reckoning will come due soon, but security issues aren’t the only problem. As I’ve said, we are creating a brave new world of degraded performance. In an attempt to make hard things easier, we may make easy things hard.
Applications of narrow AI are cool and can be incredibly useful for certain tasks, but does that warrant hooking everything up to LLMs and hoping for the best? I don’t think so. This approach is fairly misguided and opens us up to unnecessary risks.
Conclusion
We must be much more selective before blindly accepting deep data access and personal integration for these tools. This can start with a few relatively simple questions. What do we hope to gain from this access? How will this provide a measurable benefit? And, most importantly, are the trade-offs worth it? The answers to these questions will be different for everyone.
In many cases, it appears that for the small price of your soul, you can appear and sometimes feel marginally better in some aspects but be measurably worse in others. Does that sound like a good trade?