The past couple of years have been fueled entirely by vibes. Awash with nonsensical predictions and messianic claims that AI has come to deliver us from our tortured existence. Starting shortly after the launch of ChatGPT, internet prophets have claimed that we are merely six months away from major impacts and accompanying unemployment. GPT-5 was going to be AGI, all jobs would be lost, and nothing for humans to do except sit around and post slop to social media. This nonsense litters the digital landscape, and instead of shaming the litterers, we migrate to a new spot with complete amnesia and let the littering continue.
Pushing back against the hype has been a lonely position for the past few years. Thankfully, it’s not so lonely anymore, as people build resilience to AI hype and bullshit. Still, the damage is already done in many cases, and hypesters continue to hype. It’s also not uncommon for people to be consumed by sunk costs or oblivious to simple solutions. So, the dumpster fire rodeo continues.
Security and Generative AI Excitement
Anyone in the security game for a while knows the old business vs security battle. When security risks conflict with a company’s revenue-generating (or about to be revenue-generating) products, security will almost always lose. Companies will deploy products even with existing security issues if they feel the benefits (like profits) outweigh the risks. Fair enough, this is known to us, but there’s something new now.
What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve. This is new because it involves all risk with potentially no reward. These companies are hoping that users define a use case for them, creating solutions in search of problems.
What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve.
I’m not referring to the usage of tools like ChatGPT, Claude, or any of the countless other chatbot services here. What I’m referring to is the deep integration of these tools into critical components of the operating system, web browser, or cloud environments. I’m thinking of tools like Microsoft’s Recall, OpenAI’s Operator, Claude Computer Use, Perplexity’s Comet browser, and a host of other similar tools. Of course, this also extends to critical components in software that companies develop and deploy.
At this point, you may be wondering why companies choose to expose themselves and their users to so much risk. The answer is quite simple, because they can. Ultimately, these tools are burnouts for investors. These tools don’t need to solve any specific problem, and their deep integration is used to demonstrate “progress” to investors.
I’ve written before about the point when the capabilities of a technology can’t go wide, it goes deep. Well, this is about as deep as it gets. These tools expose an unprecedented attack surface and often violate security models that are designed to keep systems and users safe. I know what you are thinking, what do you mean, these tools don’t have a use case? You can use them for… and also ah…
The Vacation Agent???
The killer use case that’s been proposed for these systems and parroted over and over is the vacation agent. A use case that could only be devised by an alien from a faraway planet who doesn’t understand the concept of what a vacation is. As the concept goes, these agents will learn about you from your activity and preferences. When it’s time to take a vacation, the agent will automatically find locations you might like, activities you may enjoy, suitable transportation, and appropriate days, and shop for the best deals. Based on this information, it automatically books this vacation for you. Who wouldn’t want that? Well, other than absolutely everyone.
What this alien species misses is the obvious fact that researching locations and activities is part of the fun of a vacation! Vacations are a precious resource for most people, and planning activities is part of the fun of looking forward to a vacation. Even the non-vacation aspect of searching for the cheapest flight is far from a tedious activity, thanks to the numerous online tools dedicated to this task. Most people don’t want to one-shot a vacation when the activity removes value, and the potential for issues increases drastically.
But, I Needed NFTs Too
Despite this lack of obvious use cases, people continue to tell me that I need these deeply integrated tools connected to all my stuff and that they are essential to my future. Well, people also told me I needed NFTs, too. I was told NFTs were the future of art, and I’d better get on board or be left behind, living in the past, enjoying physical art like a loser. But NFTs were never about art, or even value. They were a form of in-group signaling. When I asked NFT collectors what value they got from them, they clearly stated it wasn’t about art. They’d tell me how they used their NFT ownership as an invitation to private parties at conferences and such. So, fair enough, there was some utility there.
In the end, NFTs are safer than AI because they don’t really do anything other than make us look stupid. Generative AI deployed deeply throughout our systems can expose us to far more than ridicule, opening us up to attack, severe privacy violations, and a host of other compromises.
In a way, this public expression of look at me, I use AI for everything has become a new form of in-group signaling, but I don’t think this is the flex they think it is. In a way, these people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.
In a way, these people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.
Advice Over Reality
Social media is awash with countless people who continue to dispense advice, telling others that if you don’t deploy wonky, error-prone, and highly manipulable software deeply throughout your business, then they are going to be left behind. Strange advice since the reality is that most organizations aren’t reaping benefits from generative AI.
Here’s something to consider. Many of the people doling out this advice haven’t actually done the thing they are talking about or have any particular insight into the trend or problems to be solved. But it doesn’t end with business advice. This trend also extends to AI standards and recommendations, which are often developed at least in part by individuals with little or no experience in the topic. This results in overcomplicated guidance and recommendations that aren’t applicable in the real world.
The reason a majority of generative AI projects fail is due to several factors. Failing to select an appropriate use case, overlooking complexity and edge cases, disregarding costs, ignoring manipulation risks, holding unrealistic expectations, and a host of other issues are key drivers of project failure. Far too many organizations expect generative AI to act like AGI and allow them to shed human resources, but this isn’t a reality today.
LLMs have their use cases, and these use cases increase if the cost of failure is low. So, the lower the risk, the larger the number of use cases. Pretty logical. Like most technology, the value from generative AI comes from selective use, not blanket use. Not every problem is best solved non-deterministically.
Another thing I find surprising is that a vast majority of generative AI projects are never benchmarked against other approaches. Other approaches may be better suited to the task, more explainable, and far more performant. If I had to take a guess, I would guess that this number is close to 0.
Generative AI and The Dumpster Fire Rodeo
Despite the shift in attitude toward generative AI and the obvious evidence of its limitations, we still have instances of companies forcing their employees to use generative AI due to a preconceived notion of a productivity explosion. Once again, ChatGPT isn’t AGI. This do everything with generative AI approach extends beyond regular users to developers, and it is here that negative impacts increase.
I’ve referred to the current push to make every application generative AI-powered as the Dumpster Fire Rodeo. Companies are rapidly churning out vulnerable AI-powered applications. Relatively rare vulnerabilities, such as remote code execution, are increasingly common. Applications can regularly be talked into taking actions the developer didn’t intend, and users can manipulate their way into elevated privileges and gain access to sensitive data they shouldn’t have access to. Hence, the dumpster fire analogy. Of course, this also extends to the fact that application performance can worsen with the application of generative AI.
The generalized nature of generative AI means that the same system making critical decisions inside of your application is the same one that gives you recipes in the style of Shakespeare. There is a nearly unlimited number of undocumented protocols that an attacker can use to manipulate applications implementing generative AI, and these are often not taken into consideration when building and deploying the application. The dumpster fire continues. Yippee Ki-Yay.
Conclusion
Despite the obvious downsides, the dumpster fire rodeo is far from over. There’s too much money riding on it. The reckless nature with which people deploy generative AI deep into systems continues. Rather than identifying an actual problem and applying generative AI to an appropriate use case, companies choose to marinade everything in it, hoping that a problem emerges. This is far from a winning strategy. Companies should be mindful of the risks and choose the right use cases to ensure success.
Weaved through the fabric of the hustle-bro culture, threaded with the drivel of influencers, lies one of the biggest cons of our current age. This is the false perception that everything we do has to be for some financial gain or public attention. With everything in life revolving around social currency or actual currency, removing friction enables us to reach value quickly. But don’t fret. The slop dealer is here with a plan to deliver us salvation, telling us that ideas are what’s important and everything else is pointless friction, needing to be optimized to reach full potential. Like so many things in our current moment, if only this were true.
Despite the decline in excitement for AI and the potential resulting market corrections, unfortunately, slop is here to stay. Although people outwardly complain about it, they are secretly glad it’s here. Being unique, thoughtful, and creative is hard. Slop allows people to swaddle themselves in a false comfort devoid of any real creativity. So, damn the torpedoes, full slop ahead.
Slop, Enshittification, and Brain Rot
Slop, enshittification, and brain rot are terms burned into our current lexicon. Although each term has a different definition, one referring to outputs, one referring to platforms, and one referring to what it does to us. When I use the generalized term slop here, I mean a mixture of all three together, a sort of thick, rancid mixture reminiscent of manure and White Zinfandel. This is because the combined term aligns better with the content and its overall impact.
The Slop Dealer
The slop dealer tells us everything is a hustle, and we need to get on board to reduce friction everywhere we can to accelerate value or be left in the dust by others using AI. They don’t talk of reasonable AI usage or prescriptions for specific tasks; it’s all or nothing. We need to surrender to the higher power. The slop dealer embodies everything that tech bro culture stands for. It’s the current equivalent of a get-rich-quick scheme, only instead of taking our money, they are stealing our attention and our satisfaction. Although sometimes they take our money too.
The slop dealer swindles us by telling us what we want to hear, that hard things are a thing of the past, and all we need is an idea. After all, everybody has ideas. These are the influencers, wanna-be influencers, and other AI useful idiots vomiting nonsense on social media. They aren’t peddling secret knowledge; they are peddling bullshit.
This pandering is done so we’ll follow them, subscribe to their newsletters, or buy their nonsense. But one of the biggest lies of all is the false impression that the value of creative pursuits lies in the end result.
Most of these people have no shame and not only believe in Dead Internet Theory, but also actively work to make it a reality. If you are wondering why people en masse find tech bro culture abhorrent, look no further than this stunning piece of work.
To quote this guy directly, “How I personally feel? I have no idea. The internet in my mind is already dead. I am the problem, right?” I get the impression this isn’t the first time he’s realized he’s the problem. Unfortunately, acknowledgement of this isn’t enough to change behavior.
The Slop Architect
The slop architect works not in traditional mediums but in ideas. To the slop architect, execution, skills, and experience are secondary, bowing at the pedestal of ideas. The fact is, most ideas are ill-thought-out, half-baked, or just plain fucking stupid. The slop architect doesn’t care because they don’t carry ideas to term; they birth them instantly, shoving them out into the world to fend for themselves as they move on to something else. I mean, the vape Tamagotchi was someone’s idea, too. Yes, please! Let’s accelerate these!
Ideas aren’t unique, precious resources, but common, run-of-the-mill, everyday occurrences for everyone on the planet. The slop architecture amplifies the fallacy that ideas are sacred and pushes the idea that if more ideas were executed, the world would be a better place. If only we had more apps, more books, more music, and the list goes on and on. This connects with people because everyone has ideas.
What most people who have thought about it for more than two seconds realize is that we don’t get to the value of an idea purely by having it. Ideas in isolation are senseless ramblings of the brain. Ideas forged and refined in the fire of execution, experience, and reflection are invaluable and fulfilling. Our ideas are never challenged in the slop architecture, leading us to new discoveries and paths, but are chucked out into the world and quickly discarded, like forgotten attempts at memes that nobody finds funny.
The AI Slop Architecture
The slop architect’s vision is implemented with the slop architecture, which presents itself as a process or application. The slop architecture is pitched as the way forward, the next-generation architecture fueling the future of humanity’s pursuits. But a simple scratch of the surface paint is all it takes to expose the entire thing as an empty shell.
When you see people pitching these types of things, it uncovers people who don’t understand creativity and certainly don’t understand where value exists in a process. Everything is a hustle for the sake of hustling. This person is hardly the only one.
Back in 2023, I jokingly created my own version of the slop architecture, which I referred to as IPIP, long before the AI influencers made it a reality.
This article was complete with a description of what would come to be known as vibe coding. “The hype has led to a new form of software development that appears to be more like casting a spell than developing software.”
Taking the slop architecture to heart, it’s not hard to find implementations already running. Books, slides, music, applications, nothing is off limits. Everything is fair game in the slop era.
Ah, Magic bookifier. Yeah, let me get on that. Any time someone puts magic in reference to AI, it’s bullshit.
People also fantasize about what advanced AI is or will be able to do. Take this use case for AGI, for example.
It reminds me of the Luke Skywalker meme where he’s handed the most powerful weapon in the galaxy and immediately points it at his face. This is informative for a couple of reasons. Movies can’t be exactly like the books for reasons other than length. They are different media with different tools. But look at the response. Human work isn’t worth protecting in the future. This is a far more common perspective than many think.
Even apps. It’s slop from all angles. So, if these tools already exist, why aren’t we all kicking back, receiving our profits? Maybe there’s something more to this than having an idea.
But we can’t just have a couple of people successfully making apps. It needs to be bigger! We are now told to await the arrival of the first billion-dollar solopreneur. Hark! The herald angels sing. Glory to the slop-born king! However, we shouldn’t get our hopes up. Setting aside how highly unlikely this is, people also win the lottery, so unless we have a mass of billion-dollar solopreneurs, it’s not proof of much. However, whenever people have strongly held beliefs, they will always point to exceptions as the rule.
It’s far more common for people to talk about a single person making a million-dollar app, and that we all can make them now. Even if this were true, it’s not like billions of people are going to make million-dollar apps or profit from a trillion new books. No degree in economics is necessary to see that the numbers don’t work. Besides, if billions of people can and will do something, then the whole enterprise becomes devalued.
The slop architecture deprives us of so much, sucking the soul out of activities until only the shriveled husk remains. There’s no learning with the slop architecture. No growth. No Reflection. No Satisfaction. It even robs us of a sense of style, something so foundational to the satisfaction of human artistic pursuits. But all things require sacrifice on the pyre of optimization. In the end, the slop architecture doesn’t democratize. It devalues, degrades, and destroys.
In the end, the slop architecture doesn’t democratize. It devalues, degrades, and destroys.
The Friction Is The Point
I’m going to let my friends in tech in on a secret, which isn’t a secret at all. The friction of an activity is directly related to the value you receive from it. The mistake being made is comparing an activity’s friction to the load time of an application or streamlining a user interface. I’ve written previously about how the next generation could be known as The Slop Generation and how we continue to devalue art. However, the removal of friction creates harmful follow-on effects.
Imagine telling Alex Honnold, “Dude, you don’t need to free solo El Capitan. We have a helicopter that can drop you off at the top.” People may see this example as silly because Alex obviously climbs mountains for reasons other than getting to the top, but it’s a mistake to assume other pursuits don’t contain similar value purely because they aren’t mountain climbing. Deep experiences don’t result from things that provide instant gratification or have little friction. Nobody finds meaning in a prompt or the resulting generation.
Deep experiences don’t result from things that provide instant gratification or have little friction.
People may see this example as silly because climbing a mountain without ropes is obviously different from something like writing a song. Except it’s not when viewed through the lens of experience. Alex Honnold doesn’t free solo mountains to get to the top or because ropes and safety equipment are too expensive; he does it because he knows there is value in the friction of his experience. He’s both challenging himself and learning about himself at the same time. He’s having an actual experience, which is hard to describe to people who have never had one. This experience enriches the conclusion of the activity, the accomplishment, which coincidentally happens to be getting to the top. However, when pursuits are framed in terms of the end results, it appears that reaching the top is the goal, and the removal of friction is logical.
Most people will never free solo a mountain, compete in the Olympics, or achieve any of the other remarkable feats that athletes at the top of their game accomplish, but that doesn’t mean we can’t have similar and fulfilling experiences, and we do this through exploration and conquering friction. When you are operating at the top of your game, you realize you aren’t competing with others, but yourself.
An artist puts a piece of themselves inside every work of art they create. AI deprives artists of having a piece of themselves included in the art, making the generated output purely an artifact of running a tool.
Slop Is Here To Stay
Immediately after Ozzy Osborne died, Oz Slop invaded social media. The prince of darkness himself fell victim to people’s boredom and lack of creativity. People chose to pay tribute to him, not through stories and anecdotes, but by slopping him into manufactured content. I can’t think of a more insulting way to pay tribute to an artist, but this is our future. Slop instead of something to say. Slop instead of stories and memories. Slop instead of emotion. Slop as a coping mechanism. May the slop be with you.
A disheartening thought is that no matter what happens to the market for generative AI, the slop will remain. People post this slop not because they enjoy it, but purely because it gives them something to post. Slop content is a stand-in for having something to say. It’s easy to generate and requires little thought, the perfect complement to today’s reactionary and performative social media environments.
In a way, this trend could create a new line of demarcation, where we start referring to things as “Before Slop” and “After Slop” to identify the creative expressions that preceded and followed the arrival of AI-generated content.
Conclusion
In the end, the slop architecture doesn’t generate experiences. Nobody is going to be on their deathbed mulling over their favorite prompts or sit down with friends and reminisce about the time they poked at a generative AI system for hours trying to get it to generate a particular image. The slop architecture doesn’t create a legacy or generate stories worth remembering or worth sharing, just pieces of forgotten garbage littering the digital landscape.
What’s the effect of exposing children to AI at a very young age? Well, we are about to find out. President Trump signed an executive order called Advancing Artificial Intelligence Education For American Youth, and, in the face of the other executive orders pushed by the administration, it may be tempting to consider this order relatively benign. I urge people to reconsider, because this order could result in catastrophic and irreparable damage to future generations of children. Move fast and break things is all well and good until the thing being broken is your child.
This move represents many of my fears coming to fruition, with all of the negative aspects I’ve been warning about becoming cemented into the foundation of future generations. You may have heard me talk about conditions such as cognitive atrophy, but early exposure to AI in education can lead to something far worse: cognitive non-development.
There are also technical concerns, including issues with security, privacy, alignment, and reliability. Children are rich sources of data wrapped up in easily manipulable packages, so it’s no surprise that tech companies are opening their AI tools to them. However, I feel these concerns are more evident to most people than the negative cognitive impacts that the introduction of AI to young children creates, especially while their brains are still developing and maturing. These are the issues I highlight here.
Key Points
Since this is a long article, I’ll call out a couple of key points:
Cognitive offloading by children and adolescents to AI short-circuits cognitive development impacting executive functions, logical thinking, and symbolic thought
We convert social to anti-social activities
The very skills kids need to use AI effectively never develop due to the overuse of AI
Core foundations of critical thinking, data literacy, and probability and statistics need to be introduced before any AI curriculum
Worldviews will be shaped by interactions with AI systems instead of knowledge, experience, and exploration
Kids need time to explore the generative intelligence inside their skulls
What Are The Hopes?
Before we begin, it’s helpful to take a step back and consider what the product of this education is supposed to look like. We envision emotionally balanced young adults exercising hardened critical thinking skills and ingenuity to create the next wave of high-tech gadgets. This is the stereotypical AI bro vision of an AI tide lifting all boats, but the reality strays far from the vibes.
There’s nothing fundamentally wrong with this perspective except that exposing children to AI tools beginning in kindergarten almost guarantees the opposite. This is for two primary reasons: the negative cognitive impacts on early childhood and adolescent development, and poor curriculum implementation.
Now, can this program succeed in a way that benefits children and empowers them for the future? Absolutely, but it would be nothing more than success by miracle. A program like this needs to be well thought out and studied, with a gradual implementation that also considers potential tradeoffs and implements mitigations for these negative effects. This is NOT what we are getting here. This fails 999 times out of 1000, possibly more. Just read the wording of the executive order and imagine people rushing to implement it, along with the bros swarming like flies around a manure pile, anxious to pitch their half-baked products.
The introduction of AI and AI tools so early in childhood education will be yet another big mistake that everyone realizes in hindsight. To set the stage, many fail to realize just how much EdTech has been a failure, and now, without addressing any of the issues, we want to add even more screens in the classroom.
I don’t think everyone involved is a bad actor with perverse incentives. I think most people genuinely want to see children succeed and flourish. However, there is no consideration here for the long-term cognitive impacts on children.
AI In Education
While I was writing this article about AI in K-12, two other articles were released about AI in higher education. The article from New York Magazine about students using ChatGPT to cheat, and the story in Time of a teacher who quit teaching after nearly 20 years because of ChatGPT. The cheating article is creating a flurry of hot takes on social media. We’ve reached a technological tipping point where students don’t see the value in education. They want accomplishment and bragging rights (degrees) without effort. Apparently, attending an Ivy League school is no longer about the education you receive but the vibes you create and consume.
And of course, queue the defensive hot takes.
This is a common retort. The mistake of assuming low-quality Q&A for actual curiosity and insight. This information was available to us all along. It just required more friction to get. So, if this is the case, then the answers we wanted weren’t worth the effort. This is hardly an earth-shattering insight, yet we’re being pitched as though it is. Keep in mind, just because these people aren’t selling a product doesn’t mean they aren’t selling something.
As usual, Colin Fraser is on point.
A problem we’ve always faced is that we never know when we are learning something in the moment that will be valuable later. We exercise a stunning lack of current awareness for future value. This happens in all manner of experiences, but especially in education. Adults lack this awareness, and it’s completely delusional to expect that K-12 students will magically sprout this awareness.
We exercise a stunning lack of current awareness for future value.
There is value in learning things, even things you don’t use for your job. We seem to think learning is contained in individualized components that fit neatly into buckets, but there are no firewalls around these activities. Learning things in one subject is rewarding and beneficial, even to other subjects. Colin is also right about driving the cost of cheating to zero, a major point everyone seems to gloss over.
In his book, Seeing What Others Don’t, Gary Klein tells the story of Martin Chalfie walking into a casual lunchtime seminar at Columbia to hear a lecture outside his field of research. An hour later, he walked out with what turned out to be a million-dollar idea for a natural flashlight that would let him peer inside living organisms to watch their biological processes in action. In 2008, he received a Nobel Prize in Chemistry for his work. This insight doesn’t come from staying in your lane, being single-minded, or asking the right questions to an LLM. Yet, this is exactly the message thrust upon us. AI doesn’t provide the happy accidents that result from exploration and the randomness of life.
Using AI instead of our brains gives us the illusion of being more knowledgeable without actually being more knowledgeable. We shouldn’t underestimate the power of this illusion because it blinds us to certain realities. AI offers an illusion that completing tasks and knowledge acquisition are the same thing, but knowledgeable and productive are completely different attributes. This positive feeling of being more productive masks that we aren’t acquiring knowledge. Numbers end up overshadowing quality, and productivity vibes end up trumping learning.
Some may argue that productive is preferable to knowledgeable in a business context, but that hardly applies in education. The ultimate goal in formal education is to learn, not produce, with the PhD being the exception. Education shouldn’t be about creating useful automatons, despite how many business leaders may want them.
AI In K-12
Introduction in K-12 means that these tools are introduced during critical brain development and could short-circuit the development and maturation of things such as executive functions, logical thinking, and symbolic thought as students offload problems to AI systems. Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools. No matter what the AI bro impulses, we should all agree that exposing kindergarteners to AI is an incredibly bad idea.
Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools.
All of the issues and negative impacts I’ve been pointing out, such as the cognitive illusions created by the personas of personal AI, along with associated impacts such as dependence, dehumanization, devaluation, and disconnection, get far worse when exposed early in childhood and adolescent development because children never discover any other way. Blasting children with AI technology in their most formative years of brain development pretty much guarantees lifelong dependence on the technology. Something that elicits drooling at AI companies, but is hardly in the best interest of human users. What we consider overreliance today will be normal daily use for them. Worldviews will be shaped not by knowledge and experience, but by interactions with AI systems.
There’s something fairly dystopian about prioritizing AI literacy while actual literacy is on the decline , disarming future students from the very skills they’d need to keep AI in check. The impression seems to be that if you can teach kids AI, you can negate negative downturns in literacy. After all, why should something like reading comprehension matter if tools provide the comprehension for us through a mediation layer? Hell, why stop there? Why not apply AI to every task that could possibly be outsourced? We are close to creating a world where raw data and experiences never hit us.
The Future Isn’t Now
In their book AI 2041: Ten Visions for Our Future, Kai-Fu Lee and Chen Qiufan have a story about children who grow up and go through school with companion chatbots to assist them in life. These chatbots adapt to them and assist them in areas where they have challenges. AI systems are ever-present companions following them through school and in life. The story is meant to have the trappings of utopia, but ends up sounding like a dystopian hellscape. To make matters worse, their story considers a perfected AI system that doesn’t have all the issues and drawbacks of today’s AI systems.
We continue to make the mistake of treating the AI systems of today as though they are the AI systems of tomorrow. Encouraged into hyperstition and thought exercises of, “It doesn’t work, but just imagine if it did!” To say that AI will cure cancer and become the cure for all of humanity’s ails may likely turn out to be true, at some point. But these accomplishments have yet to come to fruition, and don’t appear on the horizon either. So, why are we treating these systems as if they’ve already accomplished goals they haven’t? The highly capable tutor/companions of Lee and Qiufan don’t exist, yet we want to apply this non-existent vision to K-12 education as though they do. Even if they did exist, where is all this highly personalized data about your child being stored, and what is being done with it?
Less Capable, More Dependent, and Less Stable
The crux of the issue is that this program will not set kids up for success in an AI world or otherwise. This early exposure will make them less capable, more dependent, and less stable. This curriculum could teach kids all the wrong things, such as that answers can be immediate and simple, and that working out a problem isn’t as important as asking the right questions. We also teach that learning is comfortable. We give the impression that knowing things is not as important as knowing where things are stored. This is all bullshit. Kids can’t summarize their way to knowledge. But, it gets worse.
Children exposed this early never learn how to do things for themselves. They end up outsourcing problems and decisions to AI. Instead of taking feedback on how to solve problems, challenging themselves to learn, they offload the problem to AI, making them incapable and lacking confidence in the absence of technology.
This technology dependence also creeps into their personal lives, meaning going about their typical day becomes unbearable without the ability to mediate through AI. It becomes a source of authority for them and a way to avoid difficult decisions that teach them lessons. It can be hard for us to imagine today the future paralysis created when the technology is absent, even for simple decisions like how to respond to a friend’s message or whether to go outside today.
Many adults may argue that this is a small price to pay for setting kids up for success in the future. There are two flaws here. First of all, this is a monumental price. Second, using technology more doesn’t automatically mean being better at using it. For AI use, the skills you learn outside of AI’s mediation are exactly the skills that make you better at using it.
We need to focus on teaching kids to use their brains, something I never thought I’d have to say when talking about… school.
This is typically when someone brings up the calculator, insinuating that nobody needs to learn math because it exists. Although I disagree, confusing a calculator with AI technology is a mental mistake. Calculators and AI are far from being similar technologies. A calculator isn’t a generalized technology that can be applied to many problem spaces. A calculator doesn’t provide recommendations, advice, or sycophantic outputs. It won’t tell you who to date or be friends with. Oh, and a calculator is always right, unlike AI.
The hypothetical response that gets pitched around is imagining if Einstein or Von Neumann had access to AI and all of the wonderful things that would have sprouted from their genius. Maybe, however, I pose a different experiment. Imagine if Einstein or Von Neumann were a product of AI education from a very early age, where even inane curiosities were immediately satiated by an oracle. The likely output is that nobody would know their names today. We are products of our environments. Remember, there are no happy accidents with AI, only dense data distributions in which everything is shoved. In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
Avoiding Discomfort
Sam Williams from the University of Iowa said, “Now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.” We are looking to apply this in K-12, specifically when we want students to grow.
The truth is, knowledge acquisition isn’t comfortable, and students avoid discomfort like the plague. When we use AI to complete assignments, we aren’t challenging ourselves. We aren’t developing our own perspective and forming new connections between concepts. Students find writing uncomfortable and are quick to outsource to AI, but writing truly is thinking. When we write, we are confronted with our thoughts and perspectives, challenging ourselves and forming new insights. One realization with writing is that the more you do it, the better you get. This realization never comes when it’s constantly outsourced to technology.
Using AI for work-related tasks may be helpful, but using AI for education or even life is idiotic. Yet, we continue to make these foundational mental mistakes. This would be like saying that since Taylorism worked for business, why not apply it to daily life? We all know where that leads.
But we also end up robbing students of a sense of accomplishment and fulfillment, of a long-lasting sense of satisfaction, not to mention the ability to focus. And for what? Because we believe that children will need to be non-thinking automatons to have a chance in the future? This theft will have a lasting impact on the mental health of future generations.
We may experience the extinction of the flow state by never allowing people to enter it in the first place. I’ve heard people argue that they’ve entered a flow state using AI, maybe, but likely the very nature of using AI to complete tasks guarantees that you never enter a flow state. Either people are confused about what a flow state is, or they mistake the illusion of productivity for creativity and flow.
As Ted Chiang mentioned in an article I’ve referenced before, ”Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Going to the gym isn’t comfortable, but the results are physically and mentally rewarding. The mental health benefits of going to the gym aren’t intuitive. After all, how can running on a treadmill or lifting weights, activities that work out your muscles, benefit your mental state? Yet, it does. There are no firewalls around exercise either. Knowing this doesn’t stop us from making the same mistakes in cognitive areas.
When Playing It Safe Becomes The Norm
Using AI to do things is perceived as safe because if the output is wrong, we can blame the AI, versus having to work out a problem ourselves and potentially being wrong. There’s a blame layer between us and the problem.
Let’s take art, for instance. AI art is safe, unchallenging, and unfulfilling, providing no opportunity to learn about ourselves, others, or the world. And yet, the very fact that it’s safe and easy is what makes it so attractive. Failure can result from the paintbrush, but never the prompt.
Failure can result from the paintbrush, but never the prompt.
The best things in life come from not playing it safe. Taking a chance on a job, moving to a new location, or asking a person out on a date are all activities that aren’t safe, but they can end up being the best decisions we’ve ever made. We need to keep this instinct alive in children.
Lack of Resiliency
The more we rely on AI, the less we question its outputs. The more we use AI and our capabilities atrophy, the less capable we become of questioning the outputs and, hence, the more dependent we become. We end up losing a critical capability when we need it the most, or in the case of early childhood exposure, never develop it in the first place.
Modern generative AI is far from error-free. It makes frequent mistakes and hallucinates. Students must construct the cognitive fitness necessary to operate robustly using a technology that makes these frequent mistakes. This fitness isn’t built on a foundation of the same AI that has these issues.
Students also need a foundation and the ability to explore outside AI mediation. This requires both time and foundational courses and concepts. For example, this foundation should include critical thinking, data literacy, and probability and statistics. Early exposure to these concepts with late exposure to AI offers the best chances for students to build this robustness.
From Social to Anti-Social
AI is a fundamentally anti-social technology. From the ground up, we are removing the human and converting it to the non-human. Even social networks are transforming into anti-social networks. With AI’s overuse in children we teach kids that humans are second-class citizens to AI. After all, the sales pitch is that AIs are better at everything, so why should children believe otherwise?
Handing kids an oracle to ask questions not only converts a social activity into an anti-social activity but also shifts authority away from humans and onto technology. This shift would still be bad even if the technology were perfected, but it is far worse given the error-prone technology of today.
Young children are quick to anthropomorphize and will form a bond with non-human companions. Although the video of the little girl not wanting to play with the shitty AI gadget is funny, it won’t last when children are surrounded by AI. Kids will switch from actively using their imagination to becoming passive consumers of AI output.
The human retreat has already begun, as kids prefer interactions with friends mediated by a device. But now tech companies want to take this further. This is all happening outside of education, but kids can’t avoid forced interactions with their companion/tutor/friend/bot in the classroom, reinforcing this retreat.
Much of this slide comes from our tendency to oversimplify, not accounting for the bigger picture and the complexities involved. Take, for instance, a common claim that kids ask many questions, and since AIs never tire of answering them, pairing kids with AI is a natural fit. This seems like an almost throwaway point, a gotcha to any potential critic, but people making this point haven’t thought it through.
First of all, asking questions is a social activity. We interact with other humans in different environments, learning far more than the simple answer to our questions. This activity teaches us essential skills, including ones related to non-verbal communication. Humans also don’t answer questions the same way AIs do, often providing additional context and anecdotes that may further aid us in knowledge acquisition and retention.
This act connects us to other people and the world, making us active participants in something bigger rather than passively consuming an answer. I still remember anecdotes shared from my high school chemistry teacher that stick with me today. We don’t just lose context and perspective from an AI oracle, we lose something human.
When it comes to context, any expert who has asked AI questions about their topic area has been confronted with incorrect information, including something like, “I guess that’s technically true, but it’s hardly the whole story.” And this is what we want to make the norm.
Closing The Curiosity Gap
We are told that asking an AI questions makes people more curious, but AI closes the curiosity gap. By getting an instant answer, we satiate our curiosity and move on to the next thing, only digging deeper or exploring further in cases of pure necessity. This act reinforces low attention spans, further reducing the ability to focus. At some point, System 2 may become extinct. What kind of world will that create, where the world is nothing but hot takes and vibes?
AI satisfies a need for quick answers. However, searching for answers in a more traditional way means other pieces of valuable context surround you. Other rich pieces of information that lead to new ideas and new understanding. Humans have an evolutionary need for exploration.
When using AI for exploration, you are never exposed to ideas and concepts you don’t want to be exposed to. I don’t think we fully grasp just how much of an impact this selection bias will have on the future.
Sure, there are situations where a quick answer is perfectly fine, mundane things like what time a movie starts or what temperature to set your oven to cook a pie. The mistake here is assuming these situations apply evenly to all problem spaces, especially knowledge creation.
My Recommendations
Despite the many unknowns, we shouldn’t shut the door to new innovations because we could slam the door to new solutions. Although it doesn’t exist today, a robust tutoring bot focused on a single purpose and specific subjects could benefit students. The message here isn’t to discard everything but to be cautious, knowing there are tradeoffs and downsides, and incorporate mitigations.
For a program such as this to be successful, it needs to be well thought out and studied, with a gradual implementation that also considers potential tradeoffs. Without this, you have no way of telling whether you are helping or harming until it’s too late. There is no way to succeed without this step. Beyond this up-front work, I’ll make four other suggestions.
Avoid Early Exposure
Students need plenty of time to develop their brains, not technology. Early exposure should be avoided at all costs. Exposure to this curriculum should happen in high school, preferably in the last two years, not earlier. This is typically when vocational education programs were introduced in schools as well. This gap gives students time to develop skills and experiences outside AI influence and mediation. Kids adapt to technology quickly, so this later exposure will not stunt their capabilities when tools are introduced.
Create A Prior Solid Foundation
Before introducing the AI curriculum, a solid foundation in various topics should be established. This foundation should include courses in critical thinking, data literacy, and probability and statistics. These courses and concepts have been sorely lacking in K-12 education today, and their introduction is long overdue. Arming students with this foundational knowledge will allow them to question the outputs of these systems and create defenses for cognitive creep.
Smart Implementation
The implementation of the courses should be isolated and away from other topics. AI shouldn’t be woven into every topic with a tie-in. Although some would argue that an effective AI tutor could help students struggling with certain subjects, these systems have yet to be developed, much less proven effective. In almost all cases, the AI would be used as an oracle, providing answers directly instead of the necessary understanding and even discomfort that helps students grow.
Solid Curriculum
The curriculum should focus on challenging students, not giving answers. Kids often don’t realize when challenges are beneficial to them. AI tools should continue to be viewed purely as tools, not oracles or companions. The curriculum should focus on avoiding usage as personas and teaching kids how to think in terms of solutions. Appropriate labs should be constructed that give students the ability to explore concepts and define solutions, pulling AI tools in secondarily to complete the tasks and realize a student’s vision. This way, there is a separation between the mental approach and the AI components.
Final Thought
Ultimately, we may end up with anti-social, dependent, and unstable young adults. We take so many skills for granted, skills we don’t realize we developed and honed in school, and now we want to apply technology to optimize these attributes away. We need to give future generations a chance to allow their brains to develop outside of AI mediation. Here’s something to consider.
Imagine an art teacher standing in front of a class. The students aren’t in front of an easel or grasping a pencil, but sitting in front of computers. They aren’t using their hands and tools to create a vision that originates from their minds. Instead, their fingers clack on the keyboard and echo through the class as the teacher instructs them to be more descriptive and provide pleasantries to the machines. Is this really the world we want to immerse children in?
We are moving toward an existence where raw data and experience never hit us as everything becomes mediated. We prefer optimization over expertise. I’m sure the illiterate masses of the Middle Ages felt powerful after leaving a sermon by the literate priest mediating the message of the written word, but that was hardly the best state for individuals. Now we are applying this logic to AI with far-reaching consequences for the everyday life of an entire generation.
In the words of Aldous Huxley, many may mature to “love their servitude,” preferring optimization and rigid structures that take decisions off the table, making things easy, not requiring thought. In Zamyatin’s We, most inhabitants enjoyed living in One State with its rules, schedule, and transparent housing. They were happy to trade free thought and experiences for optimization, comfort, and structure. It needs to be said, over and over again: These are dystopias, not roadmaps.
One of the oft-repeated talking points erupting from the mouths of futurists and tech leaders alike is claiming that things will cost nothing in the future. As if we are to believe all of these people are in the business of making something for nothing. The entire claim is a gross absurdity that charlatans like Ray Kurzweil conjured out of thin air, and others parrot at every opportunity. This claim is made with such confidence that it is rendered self-evident, and to question it means you are an out-of-touch dolt lacking the religious fervor necessary to create the techno-utopia.
But these responses are a smokescreen to dispel the very rational questions this claim evokes. None of these people can explain exactly how this will work in practice or are willing to admit just how bad things will get, which seem like consequential details to omit considering the plan to rework the social contract of most of the world.
The claim promises us a Fully Automated Luxury Communism (FALC) where all of our needs are not only met but propels us into a life of luxury. However comforting the concept, the reality may be closer to Fully Automated Digital Breadlines (FADB). I know, how dare I poo-poo the utopia.
The False Choice
We are often given a false choice. We are told that if we don’t allow companies carte blanche to raw-dog technology all the way to utopia, then humanity will vanish. Either grow or die, as the mantra goes. Given this, a minuscule number of people are trying to rework the social contract and reimagine society without society’s input.
We can have cures for cancer and other illnesses without destroying art, stealing people’s work, or removing humans from the creative process. However, curing cancer is a hard problem, and imitating humans is easy. So we get AI slop machines instead of cures for Alzheimer’s.
Maybe I’m just an idiot, but I fail to see how LLMs will make humans immortal. Immortality is one of the many promises if we only just let it happen, even though there’s absolutely no evidence for this.
Also, if you read Andreessen’s Techno Optimist Manifesto from October of 2023, you may notice his crediting of Filippo Tommaso Marinetti, the author of the Fascist Manifesto. Marinetti was a futurist, but his position as a futurist and someone who had complete disregard for the past led him to embrace fascism as a logical vehicle for technocracy.
Don’t get me wrong, there’s plenty in Andreessen’s manifesto that I agree with. We are a society built on technology, and this has brought some of our greatest achievements. There certainly are regulations that seem pointless and get in the way. There are groups inside organizations that have become politicized and create unnecessary obstacles. I also agree with the critique of communism. These are all true. However, Adreessen’s mistake assumes that multiple things can’t be true simultaneously.
Even though these are extreme views that some have labeled techno-authoritarianism, understand that they are the average view of the e/acc community. Andreessen also invokes the perils of communism multiple times while also driving humanity into techno-communism, but to each their own, I guess.
I love technology and believe, as Andreessen does, that technology will deliver the best future. It’s because of technological advancement that we’ll cure cancer and reduce suffering around the world. However, I don’t believe a better society results from discarding ethics and principles and disregarding voices different from our own in the pursuit of generating a cornucopia of innovation porn. We in technology seem to constantly make this mistake, only to be disappointed by our ignorance of the complexities of the real world and the jobs and perspectives of others.
Ethics and principles aren’t obstacles or roadblocks. They are guideposts that ensure what we build aligns with our values and vision of the world we want to create. We, in this case, meaning society as a whole and not just a couple of dudes sharing technology with their friends.
The Claim
If you have escaped these claims, here’s a recent example from Marc Andreessen below.
You read that right. We need to hurt you before we can help you. It’s the sort of pitch you’d hear from a sadistic boyfriend who insists he needs to tear a partner down before building them back up. The we have to break it before we can fix it mantra is applied to almost everything, including humans and the environment. This is the core premise of the Effective Accelerationist (e/acc) movement.
But Andreessen is hardly the only one making these claims.
That’s right, Google is in the business of giving you things for free. We’ve learned this lesson a long time ago. Yes, Google makes its money off of ads for its “free” services. However, in a future where things are worth nothing and people don’t have an income stream, it seems likely that advertising budgets will be zero as well.
I blame much of this on Ray Kurzweil. For years, he’s been peddling this nonsense. I addressed this very same claim in my post on his latest book in the “Things Will Cost Nothing” and “Jobs and Wages” sections of the article. Despite this, I wanted to explore this topic further.
These people claim we shouldn’t worry about losing our jobs to AI because AI will make companies so good that goods and services will essentially be cheap or free. But both on the surface and upon reflection, the claim is absurd.
Nobody can describe exactly how this is supposed to work other than sprinkling everything with AI magic. When someone does make an attempt, like Ray Kurzweil, for example, the explanations make no sense, don’t address the questions, and highlight how little about the real world these people know.
For years, I’ve been pushing back against the phrase, “AI won’t replace people. People with AI will replace people without.” This is just patently false. The moment AI is good enough to take our job, it will. I mean, it doesn’t even have to be that good.
So, no job, no income. This is our baseline. It doesn’t matter how cheap things get if you have zero.
But AI, Tho
Before we get too far, let’s address the counterargument. For all the issues I’m about to raise, the answer is, “But AI, tho.” The response involves invoking the name of AI like a magician conjuring a spell. We are told that AI will be so great and powerful, rising to the status of deity, and no matter what the encountered issue, AI will figure it out. But merely spouting an incantation doesn’t make it a reality.
This answer is a complete copout that leaves the questioner unsatisfied. Whenever someone invokes the But AI, Tho defense to real questions, continue to ask them for more specifics. Don’t allow the oversimplification of a vast and complex problem space. AI isn’t god, and they aren’t prophets.
The “Sucks To Be You” Gap
Remember, we need to be broken before we can be fixed. This means there will be a gap between the damage incurred and any mitigation strategies. I call this the Sucks To Be You gap. There is no telling how long this gap will stay open or what mitigations will be implemented to remedy it.
Unemployment is unlikely to hit something like 90% all at once. This would mean that early people displaced by automation would be the most harmed since they would be unable to support themselves and their families and have no real recourse for their situation. How long will this drag out? My guess is years, possibly a decade or more, depending on how slow the adoption is and any difficulties implementing mitigations.
The amount of harm caused by this gap is unfathomable. This gap brings pain, suffering, and death. If you think I’m being dramatic, think about it for a moment. Imagine the mental toll this takes on someone trying to provide for themselves and their family. This isn’t a matter of re-skilling. Even if people did re-skill, the competition for remaining jobs would be astronomical, with thousands of applicants for a single position. This isn’t p(doom) it’s p(shit).
It’s easy to see how self-harm could result from this situation, but that’s not the only scenario where mortality is concerned. Not working leads to a lack of benefits, meaning you can’t make co-pays on doctor visits and prescriptions. This doesn’t include all of the potential harm from algorithmic decision-making mistakes. Deaths will result, and we know this because we’ve seen it happen on a smaller scale with people not being able to afford insulin.
No Intelligence Explosion
All of the claims of a near-future techno-utopia are predicated upon an intelligence explosion. This is the condition in which AIs will recursively improve, creating even better AIs that morph into superintelligence. Advocates claim this attainment of superintelligence fuels this world of comfort and abundance. But what if it doesn’t manifest this way? What if we get the Diet Coke of AGI? Just one calorie, not intelligent enough.
The assumption is that superintelligence brings massive productivity gains, but what if, instead, we get algorithms that are purely good enough, leading to human workers being displaced and productivity staying relatively the same? For example, an agent can work 24 hours a day, but what if that 24-hour-a-day agent produces the same productivity as a human working 8 hours a day? This could happen because of needing to account for errors, wait times for additional reasoning, running tasks multiple times, and other issues relating to the complexities of seemingly simple tasks. It’s easy to see how this can stretch out when we factor in additional difficulties of completing complex tasks.
This human replacement could result in cost savings but would be far from driving costs to zero. Also, this would be less of a complete human replacement and more of a human staff reduction. Now, you have a displaced workforce and a company with similar productivity. This doesn’t seem like a recipe for a utopia. It’s a recipe for problems.
This is a very real possibility, especially given all of the hype around LLMs. I know everyone is losing their mind about DeepSeek at the moment, but I don’t believe LLMs are a path to AGI, much less ASI. However, it’s important to realize that we don’t need this level of intelligence to apply these technologies to specific tasks successfully. It’s entirely feasible that a company would take a shitty LLM with repeatable failures over a human worker if they could save money.
What’s The Point of Things Costing Nothing?
I’m not sure anyone gets out of bed in the morning with dreams of creating a company that delivers goods and services that cost nothing. It’s even absurd to say out loud, so you might wonder why people at the largest companies in the world are making this claim. Investors are the same way. Nobody is investing in a company so they can deliver zero-cost goods and services. In the Sucks To Be You gap, the first affected suffers the most harm, but the opposite happens with companies.
Nobody is investing in a company so they can deliver zero-cost goods and services.
Tech leaders and investors aren’t considering what happens to their companies after this so-called intelligence explosion. They are thinking of all the money they will make leading up to it. This is why these staunch capitalists are so comfortable forcing everyone into techno-communism. Now that I think of it, the thought of an algorithmic Stalin hunting kulaks is terrifying.
Stagnation
Counterintuitively, this condition could lead to stagnation. The very opposite of what proponents claim. This doesn’t strike me as a competitive environment where companies and people are stepping up to create new solutions due to a lack of incentives. I guess someone could make the argument that people’s lives will suck so bad that they’ll be incentivized to create something better. Fair enough, but it seems like these bigger initiatives would cost more money, putting them out of reach by these very people. Not to mention, this is an odd flex for the techno-utopians. “Your life will suck so bad you’ll be dying to create something better.”
The price of stagnation for a majority of the population is that they remain in the mire of the Sucks To Be You gap for a much longer time. Even if basic necessities are met, it will be miles away from a good life, much less luxurious.
Things Will Still Cost Something
The core premise of the argument that things will be zero or low cost is absurd on its face, so much so that it’s remarkable that nobody seems to push back. A whole host of things won’t be free or low-cost. Consider rent and property, the means to generate electricity, medical treatments, and, most importantly, food. Even extracting and refining raw materials is going to cost something. Imagine being monitored every moment with everything in your home subscription-based, requiring a micro transaction for nearly everything you do. Now, that’s the utopia we’ve all dreamed of!
Regarding food, Kurzweil claims that advancements in vertical farming will make abundant, nutritious food freely available. This highlights Kurzweil’s cluelessness on a variety of topics. Vertical farming took a hit last year, making MIT Technology Review’s list of the worst tech failures of 2024. Score another “L” for Kurzweil.
As I mentioned, companies and investors aren’t in the business of giving things away for free. These companies will adjust to the conditions imposed upon them. When have we ever seen a company that gets hit with higher taxes or additional tariffs responding with, “Well, sucks to be us. I guess we’ll have to make less money now.”
This condition may level out at some point. After all, if nobody has any money to buy your products, that’s not a good business strategy either. I’m saying that this leveling out could take some time, especially if a segment of the population continues to remain employed.
New Risks
New architectures, technologies, and automated processes will bring new risks. Due to our complete dependence on these systems, these risks will have a much larger direct impact. The vertical farming example is instructive because it raises new risks and considerations. For example, damage can spread quickly in these new architectures, creating cascading failures.
In reality, the company’s lettuce was more expensive, and when a stubborn plant infection spread through its East Coast facilities, Bowery had trouble delivering the green stuff at any price.
And this is just one of the many potential examples. Whenever potential challenges such as this are raised, the But AI, Tho defense is invoked as some sort of benevolent deity here to deliver our salvation and absolve us from our sins. “AI will just figure it out.” This is not an answer.
Techno-Communism and Techno-Welfare
Let’s acknowledge that these companies aren’t willing to part with their money. It’s not like they will be so successful that they’ll start sharing their profits with us. Even if they half the cost of goods and services or even reduce by 90%, we’ve got zero dollars, which makes these cheap necessities still out of reach. This begs a couple of questions.
How do companies make money from people who don’t have any?
It seems unlikely to be profitable in this environment, so companies raise prices for those who can afford their products to cover gaps. This situation actually makes it worse for displaced workers, as I mentioned previously in the adjusting to market conditions section.
What’s the remedy?
Some have proposed an automation tax that funds a Universal Basic Income (UBI) program. This sounds good on paper but may not be so great in practice. We will tax people who are making less money; hence, there will be less recovered in taxes. Not to mention, I’m only considering the United States here. What about goods and services from other countries? After all, we have a global economy. This requires tariffs on goods and increased taxes on digital goods, which will require companies to raise costs even more.
There is the impression that the techno-welfare provided by some universal basic income will have us jet-setting around the globe. This is the premise of Fully Automated Luxury Communism (FALC). This is flat-out bullshit when you consider the realities on the ground. UBI’s benefits are a social welfare program and will be commensurate with similar programs.
Nobody on a social welfare program lives it up on their yacht, sipping champagne and wondering when their Ferrari will be out of the shop. These people worry about basic necessities constantly. Any small hiccup can result in major consequences. This future techno-welfare program will be far more like today’s social welfare than some government-funded luxurious lifestyle. So, yes, it is much more like Fully Automated Digital Breadlines (FADB) than FALC.
Not to mention, this very same social welfare program will be administered by the very system systems that displaced these workers in the first place, leaving the door open to a whole host of technical issues and challenges that will affect the people in the program, adding to risks.
The thing that pisses me off about people like Kurzweil is that the very foundation of their arguments is not only so disconnected from reality that they don’t make sense, they are dehumanizing. But for people like Kurzweil, this is a feature, not a bug.
The response to hungry children comes off as, “Just shut up and eat your amino acid paste, you ungrateful little shits. Don’t you realize how much more compute you have access to? You couldn’t even run stable diffusion locally when I was a kid!” When you are hungry, it’s hard to eat your computer.
Reduced Agency and Helplessness
What does it mean to be human in an age without work and agency? Do we resign ourselves to being helpless and needy? This is hard to pin down in advance. Humans are indeed incredibly adaptable creatures, but there’s a limit to this adaptability. But more importantly, why should we settle for this vision of the future?
These systems turn us into robots, shoving us into predictable buckets, reducing our agency, and making us dependent. This is necessary to increase the accuracy of predictions. The result is we end up as helpless schmucks standing on the sidelines, waiting to be told what to do and where to go at the mercy of every algorithmic decision. Technology should work for us, not the other way around, a point that gets lost in the shuffle and hype.
With every new risk that surfaces, we’ll be helpless to intervene. We need to take it on faith that what we built will automatically do something about it, as the world we construct becomes far too complex for us to understand. In some instances, humans may not be informed of impending dangers due to their lack of ability to do anything about them. We remain blissfully aware until the asteroid strikes.
We should insist on better. We deserve something better—technology that works for us, not us working for technology.
Technological advancements require tradeoffs, which will benefit humans as a whole. For example, suppose self-driving cars worked as advertised and delivered on promises. In that case, giving up manual driving for the benefit of safer roads may be a worthwhile tradeoff that most of society accepts. However, today, we are being asked to pre-purchase a tradeoff where it’s unclear what we get and what we lose.
Does This Sound Like Utopia?
I don’t know about you, but this scenario doesn’t sound like a slam dunk in the utopia basket. At best, this sounds like human-forced retirement with a monumental cut in income and benefits. At worst, it’s suffering and death, far from the promised life of luxury. It likely won’t be either of these extremes, but it will be something like a Fully Automated Digital Breadlines scenario I mentioned where the role of humans is needy and dependent.
I’m not sure exactly where I fall on the utopia scale above except to say I am probably not in the upper half. Not a precise measure other than to say away from the luxury lifestyle.
Can we achieve artificial superintelligence quickly and solve the world’s problems by creating a world of abundance? Yes, it’s certainly possible that everything snaps into place perfectly, and governments and corporations work hand in hand to create a world of abundance free from suffering. Possible, just not probable, or at least probable in a reasonable amount of time. For this to be the winning scenario, things must work perfectly the first time with advancements free from issues. We should know from history this is rarely the case.
Even if we eventually reach a reasonable utopia, we’ll have years, if not decades, of pain and misery as humans do their best to adapt and deal with less-than-perfect technology, governments, and companies. All of these challenges are incurred by humans while simultaneously being stripped clean of our agency and purpose.
By some estimations, communism is responsible for 100 million deaths in the twentieth century. Although some dispute this number, even on the lower side, we’re still talking about 50 million people. But hey, what’s 50 million deaths among friends? Something about one death being a tragedy and a million being a statistic. And yes, I know Stalin didn’t say that, but it’s relevant here.
Although I don’t think techno-communism will cut that wide a path, I do believe that some will view resulting deaths and misery as the cost of progress. However, progress is subjective, and despite often being linked, innovation and progress aren’t the same thing.
Conclusion
I hope that none of my predictions come true, that I am wrong, that some fluke happens, and everything magically snaps into place without issue. Thankfully, much of the hot takes on social media can be written off as bros sharing vibes. Also, I don’t think the current crop of LLMs will cause mass unemployment, create large destabilizing effects in the workforce, or create immortality. However, I’m not as confident about this prediction, well, other than the immortality piece.
The real question for LLMs is how much better this buggy, insecure, black-box technology needs to get to start disrupting a larger part of the workforce. We’ve seen this happen in the creative domains, but the cost of failure is low in these use cases. Let’s hope there are no plans to hook ChatGPT up to air traffic control or the nuclear arsenal, but there are still plenty of other jobs without such high failure costs. Only time will tell.
The attempt by a few to change the social contract raises many questions: Who sets the rules? Who changes the rules? Who or what makes the important decisions affecting humanity? These are good questions to have answers to before wading into the slough.
This situation can’t be described as a Faustian bargain since most people won’t gain any true advantage. At least Robert Johnson received amazing guitar skills. Many of us will get digital breadlines and an endless feed of slop.
One constant throughout the generative AI craze is summarization. Why read a book, listen to a podcast, or YouTube video… Just summarize it! Large swaths of content, distilled into several bullet points with countless hours saved. However, this isn’t the utopia many claim.
We all love a good shortcut. Humans are wired for them. This is why we are so good at cognitive offloading, but the tradeoffs from shortcuts are never recognized or shoved deep into our subconscious. Every shortcut has tradeoffs. With generative AI, tradeoffs are never acknowledged or discussed. However, here’s an inconvenient truth: knowledge and understanding aren’t generated from bullet points.
Fake Optimization
Many of the claims made by influencers, transhumanists, and the e/acc community revolve around fake optimization. Fake optimization claims that something lowers friction for a task or activity while not providing the same value.
So many things in this world require friction for success, especially knowledge and understanding.
These people see everything as a game of lowering friction, but there’s just one problem. So many things require friction for success, especially knowledge and understanding. To go further, there are many activities where the friction of the activity is the point, such as art or meditation. However, telling people that won’t get clicks and someone’s “thought leader” badge may be revoked. So we end up with the environment we have today, with everyone from tech leaders to influencers telling people friction is about to be a thing of the past.
Take this example of promising people they don’t have to put in the work and still gain the benefit. Anyone claiming you can gain the same value from cramming three hours into three minutes demonstrates a fundamental lack of understanding of how knowledge transfer works and a near-religious level of faith in AI.
If we step back, people listen to content like podcasts for two different reasons: entertainment and information. Quite often, it’s a combination of both. So, by summarizing, we’ve removed all of the entertainment factor, immediately reducing the value of an activity. However, before we get too far, let’s examine a scenario that should be obvious to people.
Imagine summarizing a one-hour stand-up comedy performance. “Just tell me the best jokes.” Is that really an hour saved? Of course not. It won’t be funny, and anyone who thinks differently has been sitting behind a computer screen for too long. We instinctively know that comedy is situational and relies on context and delivery. Comedians like Mitch Hedberg prove this point.
The comedy scenario is obvious for most to understand. However, what’s difficult to understand is that a similar value loss also exists for non-entertainment activities. Summarization isn’t the shortcut people think it is. Without the surrounding context, we may not be committing these summaries to memory, where we can take action on them or put them to use.
Thinking Deeply
There’s no thinking deeply about bullet points or summaries. You can’t. This is because the action of summarizing strips away all of the context. For thinking deeply, the context is key. Summaries are just a set of condensed words shoved into a predetermined space. Important bits of information (sometimes the most important bits) are left out. There’s no way they can’t be.
There’s no connection to bullet points and summaries, no deeper meaning, emotion, or content to chew on mentally. Nobody contemplates something deeper or dreams about something bigger with summaries. The same can’t be said about reading a book or other longer-form content. The inherent dehumanization of summaries drives some of this lack of connection.
In summarization tasks like these, we take someone’s uniqueness, including their perspective, delivery, language, and flair, and crush the life out of it to get the resulting bullet points. This act results in a shift. Instead of viewing someone as a person, we view them as data or a product to be manipulated, and summaries strip humanity away, leaving us with several cold sentences generated from the compactor of a black box.
Make no mistake, the dehumanization aspect is a selling point for many. The human aspect is often seen as flawed, whereas the AI aspect appears superior. But this perspective doesn’t serve us well—you know… we humans—especially when it affects our ability to think deeply.
There can be rare exceptions where a quote or simple statement does cause some deep thought. For example, this quote is often attributed to Einstein, even though he never precisely said these words.
“If you can't explain it simply, you don't understand it well enough.”
A statement like this can trigger deeper thoughts about ourselves and our view of knowledge. As a theoretical, let’s pretend Einstein was on a podcast and uttered this statement, making a larger point about knowledge and understanding. Mediated through an AI system in a summarization task, this statement could be transformed into:
“You need to explain things simply.”
The difference between these two examples is stark, and they do not even remotely mean the same thing. There’s certainly nothing to think more deeply about in the second example.
The ability to think deeply about any topic is a skill we are losing fast and for younger generations, possibly never cultivating in the first place. Our modern world, filled with its distractions, is not only pulverizing our ability to ponder, to wonder, and to dream, but also to question.
The act of questioning requires effort and friction. It isn’t purely asking a question to an AI system and getting a response because the act of questioning isn’t easily satisfied. Don’t let people reframe them as equal. We will not be better off for it.
Context, Value, and Illusion
In reality, longer-form content can be bloated. I’ve read books that should have been four chapters and podcasts that could have been reduced to thirty minutes. However, it’s a mistake to consider context as bloat and an even bigger mistake to assume an LLM knows the difference. This is because you often can’t tell the difference until after the fact. Something that seems like bloat at the beginning is context in the end. That pointless story turns into a connection reinforcing a particular point.
It’s a mistake to consider context as bloat and an even bigger mistake to assume an AI knows the difference.
Let’s consider the importance of context for a moment. Consider something larger, such as a slide deck from a presentation. There are not just several but many bullet points along with images and diagrams. If you are already an expert on the topic, it may be possible (but not always) to glean something from the slide deck. However, the real value is the context in which the content was delivered and the commentary around it. Conversely, if you watched the presentation and had the context, the slides are helpful because they can reinforce the content and even jog your memory. This is true for all sorts of content.
You may be convinced (or not) by a set of built points or summaries, whereas hearing the whole argument would have proved otherwise. In life, we say it’s all about context, but context is what we discard when we summarize.
Also, even for general accuracy, the act of summarization strips away all of the supporting or disproving elements, leaving us with a couple of sentences that may or may not be important. Without the context, how do you know if a point is accurate? You have to blindly trust the system.
One of the most commonly encountered bits of summarization is survey results. Most people never dig into the details of surveys or studies, but this is where you find issues. These are problems with the approach, sample size, sample diversity, and many more pieces of context that may cast a shadow over the results, transforming those groundbreaking results into more questions than answers. Summarizing everything leads to many misunderstandings.
We spend little time evaluating the proposed value from summarization. We are told we can spend far less time yet gain a commensurate level of insight from summaries. This value proposition speaks to our modern low-attention-span world, but if we take a step back and consider the realities, it just doesn’t jibe for the reasons outlined in this article.
Much of this disconnection stems from a lack of presence. We need to exercise a certain amount of presence to read a book or join a meeting. However, this is becoming a lost skill. New technology promises we no longer need to be fully present again, but there are consequences in nearly all contexts. This is why the Illusion of Presence is one of my cognitive illusions created by personal AI personas.
Unfortunately, we do end up fooling ourselves. Using an AI to summarize content for knowledge gives us the illusion that we are working smarter and creating more knowledge with less effort, but as we’ve seen, that’s not the case. The reality is a world of summaries creates a world of fools.
A world of summaries creates a world of fools.
Although harsh, if we consider what we’ve already discussed, it makes sense. Not only are we not gaining the promised value from activities, but we also fool ourselves into believing we do.
AI Mediation
AI mediation is both a bug and a feature. What we want out of content may very well be in the dense center of some data blob. However, something must be said about getting all of our information mediated through an AI system. So much of our world is already mediated by algorithms, and we aren’t exactly better off for it. We are pushed and nudged in various directions, making us more predictable, with all of us shoved toward the dense center of a distribution. But what you don’t find there are uniqueness, creativity, or innovation. Sparks, inspiration, and innovation don’t come from bullet points, although you are certainly being sold on the opinion that it can.
Ultimately, we leave it up to an algorithm to determine the main points, the most important things we should pay attention to. A black box plucking data points with some higher purpose that nobody understands. Many times, the points being distilled may very well be the most important, but certainly not always, and without context, it’s impossible to tell.
Ultimately, we need to ask ourselves a question. How many filters do we want between us and reality? Using AI for mediation is yet another filter on top of reality. We should work to remove filters in places where the activities are important to us.
I’m not trying to overplay the dangers here. You certainly won’t be hurt by occasional summarization tasks with an AI system. However, when used often, there is not only a value mismatch, but it can also warp our understanding of reality. So, there are consequences.
Wasted Time, Not Optimization
The funny thing is we don’t even ask ourselves if the time spent is worth it. Let’s say we cut down on reading time to generate summaries instead. This way, we can cover more ground on more topics. Many may consider this a solid strategy. Subconsciously, this also feels right, which makes it a powerful argument. This is why influencers are so fooled by it. However, when we dig deeper, it’s not the benefit it seems.
So, in the three hours to three minutes optimization sale, you lose time. The three minutes are wasted because you never had the content reinforced with the surrounding context. It becomes bullet points scrawled across a mental billboard as you drive past at 120 mph. Of course, this assumes that the content distilled wasn’t so generic to be a waste in the first place.
Say, for instance, that we use AI to summarize Peter Attia’s book Outlive or possibly one of his podcast appearances. One of the summary bullets may be:
Put a larger emphasis on Zone 2 training.
Okay, but why? What is Zone 2 training? How do I do that? Answers to these questions were covered in the surrounding context, but now you spend extra time tracking down the answers.
Multiple people have already joked that we are on the cusp of someone writing something based on bullet points only to have the other person convert it back to bullet points. There’s something rather dystopian about this.
If something is worth learning, then it’s worth spending time on. This was true in the past and will be true in the future.
Conclusion
There are no shortcuts to creating knowledge. Knowledge generation always takes friction, but through this friction comes reward. When we take shortcuts, we deprive ourselves of the reward, leaving us with a hollow task that doesn’t provide the same value. Ultimately, nobody gets smart from bullet points.
I’m not claiming all summarization tasks are bad. They may be helpful and fine for task-based systems and under certain conditions. But they are not for generating knowledge and understanding. It’s becoming increasingly obvious that we must defend our cognitive functions because nobody else will.
In just a few short years, we’ll not only have achieved AGI but live in a world of abundance. Physical goods will be so cheap that they’ll basically be free, and we’ll be able to 3D print anything we’d like. One can only assume with their free 3D printer. We’ll connect our brains to the cloud and have seemingly endless compute, which will also essentially be free. We’ll have cured our illnesses, created replicants, and become immortal. This is but a taste of the nonsense peddled by Ray Kurzweil.
In his new book, The Singularity is Nearer: When We Merge with AI, he pushes a couple of themes. One is that all technological advancements will be universally positive. Second, the only way humans can ever hope to compete is by fully merging with technology. I have issues with both of these themes. This post attempts to address only a tiny amount of the BS in the book.
Ray Kurzweil
If you are unfamiliar with Ray Kurzweil, he’s someone propped up by many as the preeminent futurist. I recently caught one of his appearances, and his ramblings elicited a noticeable grimace from me. I must admit, I wasn’t familiar with his unique brand of absurdity. I knew he’d said some wacky stuff in the past and had a book about the singularity, but I didn’t pay him much attention. After this interview, I purchased his new book, The Singularity Is Nearer: When We Merge with AI.
The strange thing about Kurzweil is the way people treat him during interviews. I haven’t really seen anyone push him on his asininity. When interviewers attempt to question him, he makes up more nonsense and says things like if people can live 20 more years, they’ll be able to live indefinitely or that things will be free in the future, avoiding the question altogether.
The book demonstrates how disconnected and out of touch Kurzweil is from reality. But it also highlights a bigger problem. As long as tech is allowed to be presented as magic, charlatans and hucksters will run rampant. This is the playbook that Kurzweil exploits. Unfortunately, I don’t have the time to address all of the issues with the book, but I will point out a few things that stood out to me.
As long as tech is allowed to be presented as magic, charlatans and hucksters will run rampant.
Before We Start
A few thoughts before we begin. If you read the reviews of this book, they are overwhelmingly positive. I’m sure many people won’t care for this post. Kurzweil is a famous tech personality with multiple books, TV appearances, and impressive credentials. I’m a nobody security researcher. I’ll never be in demand like Kurzweil or sell as many books as him, so I’ll have to cry myself to sleep at night with my integrity intact.
After all, he was just listed on the Time100/AI list, which caused me to laugh out loud. Then again, we live in a performative age, and Kurzweil is a performer.
However, I’ve spent my entire career analyzing risks and envisioning future threats to technology, something Kurzweil is oblivious to or completely ignores. Neither is a good scenario.
It’s also important to know that Kurzweil has been wrong many, many times before. I stumbled upon this old Newsweek article from 2009, which had an amazing quote.
P. Z. Myers, a biologist at the University of Minnesota, Morris, who has used his blog to poke fun at Kurzweil and other armchair futurists who, according to Myers, rely on junk science and don't understand basic biology. "I am completely baffled by Kurzweil's popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination," writes Myers. He says Kurzweil's Singularity theories are closer to a deluded religious movement than they are to science. "It's a New Age spiritualism—that's all it is," Myers says. "Even geeks want to find God somewhere, and Kurzweil provides it for them."
The author even made a midlife crisis joke and another person accused him of trying to start a religion. Fifteen years, and not much has changed.
Let me also say that given enough time and technological progress, just about anything is possible. I think this is something that everyone innately knows. However, people like Kurzweil exploit this instinct for their benefit, running up the clock and leveraging the hype. We should be aware of this trick when evaluating claims.
Why Write This?
You might ask, why would I dedicate time to writing this article out of all the other things I could be writing? Indeed, I’d rather be writing something else, but as I was sketching my thoughts for this post, I read an article with the following quote.
“A colleague of mine, without a hint of irony, claimed that because of AI, high school education would be obsolete within five years, and that by 2029 we would live in an egalitarian paradise, free from menial labor. This prediction, inspired by Ray Kurzweil’s forecast of the “AI Singularity,” suggests a future brimming with utopian promises.”
THIS is why I’m writing it. These predictions powered by Kurzweil are fabricated bullshit. Let me go on record and say we won’t have AGI by 2029 or a utopia. Now, I’m not delusional in thinking that I would have nearly the reach needed to make a dent in Kurzweil’s impact, but I’ll reach a few people and get this off my chest. So, let’s dive in and call it like it is.
Kurzweil is a BS Artist
If I had to summarize The Singularity Is Nearer, I’d say it’s the ramblings of an aging gentleman confronted with his mortality, hoping that wishful thinking and vibes are enough to speed the tech he imagines into existence. It’s a book of absurdity wrapped in historical and disconnected examples attempting to give Kurzweil’s bullshit credibility. Even the title of the book is a sleight of hand. Sure, everything is nearer than when his previous book was written, but that doesn’t mean it’s close.
Another obvious fact on display is that if Kurzweil found himself down at the crossroads, we know exactly what he’d sell his soul for. He wants to become a robot so badly that he’s willing to shed every bit of his humanity to get it. Oddly enough, this doesn’t seem to be the bright, wavy red flag it should be. He’s so scared of death that he’s willing to discontinue being human for a small taste of an extended life.
Why Are People Convinced?
So, if my statements are true, then why is the book so convincing and the reviews universally positive? It’s not because people are stupid, but something far more simple. The book doesn’t continually and all at once slap you in the face with his flatulence. The pungent aroma is layered between positive messages (a utopia, immortality, etc.), topics that have nothing to do with this title, and historical examples of technological progress. This layering is a mental sleight of hand that has a reinforcing effect. Let me give you an example.
Imagine I wrote a book claiming that within ten years, humans would be exploring and populating the cosmos outside of our solar system. Rather than go into the specifics of my claim and address the real risks and challenges, I spend most of the pages talking about other things. I discuss at length the history of NASA and the challenges conquered to put humans on the moon, all in the span of a decade. I talk about the potential of solar sails and other propulsion technologies. I even go off on a tangent imagining the impact on humanity of having a working Dyson sphere. Kurzweil employed the same distraction techniques instead of making points or providing supporting evidence in his book.
The book itself has little to do with its title. He dedicates only a small portion of the text to this topic. He really wants you to know that, based on his vibes, utopia, and immortality are just a few years away. Kurzweil claims we’ll have a utopia in the next 20 years. It’s an easy sell since many people reading his book will still be alive. The entire book consists of telling people what they want to hear. He spends no time talking about challenges or issues. He knows that you sell a lot more books telling people what they want to hear rather than confronting hard truths. This sums up so much of our current age.
He knows that you sell a lot more books telling people what they want to hear rather than confronting hard truths.
That said, the book is an informative glimpse into the mindset of a certain type of person. These would be people with the transhumanist, posthumanist, or e/acc mindset. So much of the transhumanist argument is framed around making us better humans, but it’s really about making us into machines. I’m sure Kurzweil believes he describes a utopia. But like so many utopias, it’s just a thin layer of cheap wallpaper over a dystopia.
So much of the transhumanist argument is framed around making us better humans, but it’s really about making us into machines.
Disconnected From Reality
Kurzweil gives some of the most absurd examples in this book, proving that he has no idea how the world works and is disconnected from reality altogether. For example, after connecting our brains to the cloud, he imagines entertainment where we don’t merely watch a movie but feel the actor’s complex and disorganized emotions. Uhm… Can someone please tell him that actors are… well… acting? He doesn’t seem to realize that people acting in movies are expressing emotions, not feeling emotions. When we insist a tortured character in a movie needs to actually be tortured for entertainment, whose utopia are we living in?
Virtual Experiences
The book has an obsession with virtual experiences. He imagines scenarios such as a virtual beach vacation for your family, where you have the sights and smells of an actual beach. Nothing like taking a vacation with your family while, in reality, not taking a vacation. It reminds me of the company Rekal from the Philip K. Dick short story We Can Remember It For You Wholesale, which became the movie Total Recall for those who never read the story. I don’t know what it is with these people who seem to look at a dystopian SciFi and say, “Yes, that’s the technology we need.” These are cheap illusions that don’t have the impact of the real thing, but not to Kurzweil.
He claims simulations will be so good that there will be no point in doing the real thing and uses the example of climbing Mount Everest, which demonstrates he doesn’t understand what stakes are or what the point of doing something challenging is in the first place. In many cases, the point of performing the activity is the friction and difficulty. We just had the Olympics. Imagine telling Simone Biles, “Why put all of that hard work into competition? Soon, you’ll be able to ‘experience’ Olympic competition.” What Kurzweil doesn’t understand is that when experiences become easy, they lose their value.
When experiences become easy, they lose their value.
Confronted With His Aging
In many ways Kurzweil is keenly aware of his aging. This is obvious in his obsession with simple technology like replicants, which are merely trained on your writings. He discusses the replicant he made of his deceased father and rambles on about how fooled he was by his creation. This experiment was supposed to demonstrate the impressive capabilities of today’s technology, but it ended up just being sad.
However, what’s the point of having a chatbot trained on your writings and other material that exists after you pass away? It’s not you. It doesn’t have your identity or your true thoughts, nor does it encapsulate the complexities that make up your true identity. Even if you could create a more exact replicant, what’s the point? It still won’t be you. It could be a perfect copy of you, but it isn’t you. This is the kind of thing a narcissist would want. I don’t want a copy of myself running around, and I’m sure the world thanks me for that.
When you think more deeply about them, replicants have another problem. They are a get-out-of-jail-free card for not doing the right thing. Why spend time with your loved ones when they are alive if you can create a cheap copy to chat with at your convenience after they are gone? More time doing what you want and less time spending with the ones you love.
Things Will Cost Nothing
Not only will things be better in the future, goods will basically cost nothing. Kurzweil says that everything will become information technology, and the cost will go to zero or nearly zero, even basic necessities like food and clothing. He uses this transformation to say people won’t fight over resources anymore and uses a silly example, such as people fighting over a PDF. The whole premise is absurd. Vertical farming won’t drive food costs to zero, and people will fight over information. People get into fights over social media posts all the time.
Speaking of social media, he makes more ridiculous claims about social media and the cost/value tradeoffs. For example, he says it costs companies like Facebook, Google, and TikTok nothing after they’ve built their infrastructure, suspiciously omitting the energy costs and maintenance to run the infrastructure and the veritable army of people these organizations employ. He justifies his claim by stating that there’s no difference in cost between connecting you to a hundred people or a thousand people, as though the connection between people is where the cost is, but that’s not the stupidest part.
He says that if you could make $20 mowing a lawn but choose to spend that time on TikTok instead, then TikTok is worth $20 to you. This is asinine. Not every action you take in life is in service to make money, and not every free moment is a lost opportunity, either. So, in Kurzweil’s logic, if you could make $5 on Fiverr by designing a logo for someone but decide to sleep instead, then sleep is worth $5. You could make that $5 the next morning with no money lost. None of this even considers algorithms, the addictive nature of social media, and humans just wasting time.
Another spit-take moment is his discussion of radical life extension technology, which he states will not be available solely to the wealthy but also to the less fortunate worldwide. To prove this point, he uses the mobile phone as an example. Nope, you read that right.
Kurzweil says that since most people on the planet have a mobile phone, radical life extension technology will be available to them in much the same way due to extremely low cost. However, I think the mobile phone analogy is worth a deeper look. There’s a big difference between the iPhone in my pocket and an adware-riddled cheap cell phone subsidized by some company squeezing every drop of data from a user that it can. Tack onto this subsidized connectivity like Facebook’s Free Basics program meant to provide free internet to users in developing countries, which ultimately traps them in a Facebook hellscape, and you have the blueprint for something fairly dystopian.
Continuing his cost-nothing crusade, Kurzweil states that using robotics, cheap energy, and automation to replace labor outright in the 2030s would make it relatively inexpensive to live at a level considered luxurious. Telling people things will be cheaper, but you won’t be able to afford them because you don’t have a job is a contradiction that apparently didn’t dawn on him when he wrote that passage.
And… I’m not even going to get into his Bitcoin comments.
Jobs and Wages
Kurzweil has odd claims about jobs and wages. For example, he claims that more jobs will be created than lost, but he can’t answer what those jobs will be because they haven’t been invented yet. He uses examples like farming and the textile industry to prove his point. But this doesn’t make sense since AI is a far more generalized technology than a tractor or the power loom and can cross many different industries.
On wage stagnation, he boasts about how stagnated wages can buy more compute. Imagine that conversation with your family when having to skip a meal because you can’t afford food. “I know you are hungry, kids, but just think about how much more compute we have!”
I know you are hungry, kids, but just think about how much more compute we have!
One of Kurzweil’s favorite scare tactics is claiming there won’t be jobs for unenhanced humans and stating that until we fully merge with AI, there will be almost no jobs left. He makes multiple claims throughout the book on this point, saying biological brains cannot keep up with non-biological precision nanoengineering. Whatever the f—k that word salad means. This is another one of Kurzweil’s tactics on display. He knows most people know nothing about nanoengineering, so he bloviates on the topic. For good measure, he also mentions a world where we watch political ads or share personal data to get free nano-manufactured products. Ah, yes. The utopia we were all hoping for.
When it comes to automation replacing and disrupting the job market, he brings up a silver lining. The gig economy. He mentions the gig economy offers people more flexibility, autonomy, and leisure time. Kurzweil is so out of touch he doesn’t realize these aren’t the same thing. Once again, imagine that conversation. Telling someone who delivers for DoorDash, “Sure, you don’t have a regular job that pays well enough or has benefits, but isn’t all that leisure time great?” When you can’t pay your bills, downtime isn’t leisure time.
When you can’t pay your bills, downtime isn’t leisure time.
Being Human
In one part of the book, he questions what being human even means when introducing non-biological components and brain-computer interfaces. This is actually a great question, which, of course, Kurzweil doesn’t answer. Instead of answering, he vomits more of his pontification about inevitability, saying the non-biological component will grow exponentially while our biological intelligence will stay the same, providing a more specific prediction that in the 2030’s our thinking itself will be largely non-biological. Kurzweil has a way of stating questions as though he’ll answer them but never answering them. This is how a con artist operates, appearing to be upfront.
It should be obvious to anyone reading the book that Kurzweil really doesn’t like being human and yearns for the day to transform into something else. It doesn’t even matter to him what he becomes as long as it isn’t human.
For example, it’s uncomfortable (but necessary) to think about how replacing our biological components with synthetic ones may change us, especially when it’s not for the better. Instead of addressing this complicated reality, he makes the point that we remain the same person despite our cells going through a replacement process and our brains being almost completely replaced over the span of a few months. The implication he hopes you draw is that this non-biological replacement shouldn’t bother us. Once again, more absurdity.
Bodily regenerative processes are not the same as a wholesale replacement by synthetic alternatives. This holds true for both physical and cognitive functions. This irritates me to no end, and it’s one of the most obvious flaws in his logic. Kurzweil hopes to smother us with a pillow while he whispers, “Just let the singularity happen.”
No Downsides
One of the most apparent aspects to readers of the book is Kurzweil’s failure to mention nearly any negative aspects or potential adverse outcomes in his book. Either he’s oblivious to them or feels that adverse outcomes don’t align with his message. My guess is it’s a mixture of both.
I’ve discussed many of these downsides already, but one in particular is his presentation of simulation and self-driving cars as though they’re magic. To support this, he mentions the success of companies like Waymo. There is never a mention of Waymo’s issues, such as how these cars have been found driving down the wrong side of the road or mysteriously honking their horns. We don’t have capable Level 5 self-driving cars on the road today, and this problem is not solved. Every company working on self-driving features, from Waymo to Tesla, has issues they cannot solve today.
These are undoubtedly solvable issues, and we will have full driverless technology in the future, possibly even in the near future, but today, these companies can’t solve the problems. It’s undoubtedly disingenuous to talk about driverless cars as though they are a solved problem today.
Okay Not Knowing
Another of Kurzweil’s comfortabilities is agreeableness to not knowing how AI works or comes to its conclusions. He mentions that we may not know or understand even if explanations were provided. It’s odd that he mentions this while talking about the judicial system, an area that’s been plagued with algorithmic issues. Even outside of the judicial system and policing, there have been so many instances where algorithms have unfairly discriminated against people, denying them benefits and even entry into schools. Recently, it was announced that Nevada will use Google’s AI to determine whether people get benefits. People have a right to know why they were denied benefits, and it can’t be, “because the algorithm says so.”
Imagine an air traffic control AI that instructs pilots to fly figure 8’s around the airport before landing. Will we question this or receive it as some sort of hidden knowledge that the AI system has that we can’t fathom? This would be an obvious example that the system has an issue, but countless hidden issues wouldn’t surface in the same way. When we don’t understand how a system came to its conclusions, we set ourselves up for confounders to run rampant.
As I read the section on the judicial system, I wondered how you would ever get a fair trial by jury in the future. When everyone is permanently connected and has access to data that biases them, it may be possible for anyone to get away with a crime purely by spending enough money to taint the data. Or will you be forced to install the JuryBlocker software directly into your cognitive processes? I’m sure Kurzweil would think this thought exercise is silly because the goal is to remove humans from the judicial process altogether, but as we know, we don’t live in a perfect world. Our technology is rarely that good, and humans have a habit of not making the right decisions.
Not The Whole Story
There were so many parts of the book where Kurzweil would bring something up, and I’d be left with the thought, “That’s not the whole story.”
For example, he references things like ChatGPT passing the Bar Exam or AlphaGo beating the best Go player in the world but never tells the whole story. For example, when ChatGPT passed the Bar exam, it also passed other similar standardized tests. Researchers reworded the questions to ask the same question differently, and ChatGPT failed, proving that it had memorized data in its training data. Kurzweil wants you to believe that because of this, a lawyer’s days as a profession are numbered, but his exercise misses the more significant point that lawyers don’t sit around answering Bar exam questions all day.
AlphaGo was a truly amazing accomplishment, but Kurzweil leaves out that even average Go players can beat superhuman Go AIs these days. They can exploit these systems through adversarial policy attacks. These attacks are highly concerning if the technology is deployed in high-risk scenarios outside the game of Go.
In his discussion about disruption from AI, he claims that sometimes there aren’t any losers. For example, a revenue stream from treating a particular illness. He says there are many areas of technological change where losers don’t exist and gives the example of creating a cure for a disease. In this scenario, companies and individuals lose a long-term stream of revenue. This is another one of those sleight-of-hand things Kurzweil does. The cure is indeed more beneficial to society, but that’s certainly not how things play out in practice as large pharmaceutical companies hang on to revenue streams. No matter how cloud-connected your brain is, you won’t be able to compete with large organizations and a mobilized workforce. There may be occasional exceptions, but it’s hardly the rule.
Conclusion
This lengthy post didn’t scratch the surface of the nonsense hawked by Ray Kurzweil in his book. There are so many points I take issue with, most specifically the arguments of inevitability and the aggressive timelines he’s attached. Given all of his bullshit, you might be surprised that prominent people continue to hold him in high regard, but I’m not. Kurzweil’s tech spirituality aligns with their larger goals.
We need to ask more serious questions of people trying to sell us things, even those selling us ideas, because these things have consequences.
Look For Yourself
Before writing this article, I didn’t look for other articles or takedowns of Ray Kurzweil. I didn’t want these pieces to taint my impressions of the claims made in the book. The only exception was the Newsweek article from 2009, which I stumbled upon while looking up a specific piece of information about him. After writing the article, I was curious about what others had to say, and I promised myself I wouldn’t go back and reword anything in this article based on what I had read.
If you are still unconvinced, I’ve highlighted a few articles you can read for yourself below. Some of these are older articles, proving that nothing has changed. These are worth the read.
How Ray Kurzweil Sells His Junk Science – Geoffrey James – June 17, 2010 This is an old article, but it’s worth the read for the rules of selling junk science.
Ray Kurzweil Does Not Understand the Brain – PZ Myers – August 17, 2010 This is PZ Myers smashing Kurzweil for making the claim that by 2020, we’ll have reversed engineered the human brain. Obviously, that didn’t happen.
The singularity is not near: The intellectual fraud of the “Singularitarians” – Corey Pein – May 13, 2018. This article has an amazing quote. “Science begins with doubt. Everything else is sales.” This is something we should all keep in mind as we blindly take as fact the drivel of the AI Hype Bros. For some more notable bangers by Corey Pein, check out Cyborg Soothsayers of the High-Tech Hogwash Emporia. “Ray Kurzweil’s Singularity is an overheated white paper by a zealot for the American dream of luxury and convenience.” There were a whole lot of references to this type of thing in the book.
Anyone digging even mildly beneath the surface will see that Ray Kurzweil is a charlatan and a huckster. He’s not someone to be taken seriously. Despite this, many tech people will continue to genuflect for Kurzweil because he says what they want to hear. I also mentioned in a previous post that in our short attention span existence, we reward people for being bold, not for being accurate. Something that Kurzweil happily exploits. Welcome to the age of post-reality.
Humans are social creatures, and friendship and love are relationships that run deep in our history, predating Homo sapiens as a species. We associate these relationships as core features of our humanity, but companies are attempting to change this. Every time a new technology comes along, people try to use it to solve complex social issues that have nothing to do with technology, and with AI, it’s happening again. Would you have a chatbot friend? Would you marry a chatbot? There are companies developing products that hope you will. Welcome to the attempted dehumanization of friendship and love.
Solving Non-Problems
There are few things that I can say for sure, but I will say with certainty that the world won’t be a better place when both friendship and love are simulated, and we treat apps like humans and humans like apps.
The world won’t be a better place when both friendship and love are simulated, and we treat apps like humans and humans like apps.
When we take a step back, one thing that should be obvious in the current generative AI craze is that solving non-problems is far easier than solving real problems. This makes sense. There’s a low cost of failure in addressing non-problems. Hell, you don’t even need to _solve_ non-problems to be successful. Let’s think about it: it’s not like the world has a shortage of writers, artists, and musicians. However, those specific non-problems are a topic for another day.
Speaking of solving non-problems, rather than using generative AI capabilities for well-suited tasks, we’ve witnessed an abundance of what I call shitty AI gadgets. What makes them “shitty” is the fact that they don’t actually solve a problem. The focus for many is how “cool” the technology is without emphasis on whether it solves a problem or does anything at all.
This joke by @plibin on Twitter sums up what every single one of these gadgets looks like to me.
Shove generative AI into every technological crevice possible and hope that money sprouts. These products are only good for setting fire to VC money.
When AI is Your Friend, You’ve Got No Friends
The latest shitty AI gadget is called Friend. No, not a joke. And apparently, they spent most of their raised money on their domain name.The Friend gadget also exhibits higher levels of cringe than other gadgets. Other gadgets at least pretend to do something useful. Friend is happy to do nothing at all.
A glance at their commercial is all that’s needed to address doubts about peak-level cringe.
If you want some faith restored in humanity, read the comments. The people writing the comments are human, and they get it—something that the Friend team doesn’t.
Watching the Friend commercial shows just how disconnected these people are from reality. If they are trying to shed conspiracy theories about how they are secretly unfeeling reptilian aliens, they are failing. I mean, what date is going to put up with this? Oh, what is that around your neck? Yeah… I’m sorry, I just realized I have something else to do.
Of course, all of these miss the larger point that someone invested in the Friend device wouldn’t be on a date in the first place, nor would they be out enjoying time with “real” friends.
In looking to optimize everything, including our personal lives, AI friends make sense. It can be all about us. We’ll never have to listen to them tell us about their problems or need to be a shoulder for them to cry on. We may even enter an era where many people don’t know what true friendship feels like.
However, it’s not just loneliness that would drive someone to AI friends or AI lovers. Part of the problem stems from people wanting sure things. There is no perceived risk, fear of rejection, or potential pain. A chatbot will not reject us or tell us things we don’t want to hear—well, unless we don’t pay the bill. This is a powerful pull that some will find attractive.
Isolating Effects
An AI friend or lover wouldn’t have us out living our best lives in the real world because they have an isolating effect. These gadgets provide users with a false sense of companionship and exacerbate the very issues they purport to solve. Rather than going out, we stay home. We stay home and play it safe rather than going on a date and taking a chance on love. If gadgets like Friend were to take off, this would be a net negative for health and wellbeing.
An AI friend or lover doesn’t care if we live or die. It doesn’t care if we are happy or sad. Subconsciously, even if we fool ourselves, we know this.
I’ve mentioned when AI is your friend, you’ve got no friends. I’m not just referring to the uncaring stochastic companion we haul around, but it’s the fact that it makes people not want to interact with us. This aspect further isolates us from the real world. I mean, which of our real-world friends would put up with this?
If I wore the Friend device to a get-together with my actual friends, they would launch a merciless onslaught of insults and fun at my expense, and that’s why they’re my friends. Real friends keep us honest. They don’t let us get full of ourselves, and they don’t just tell us everything you want to hear. This feedback helps us grow and have greater life satisfaction.
Nothing easy is satisfying or worth having. This applies to friendship and love as well. The modern world promises that we don’t need to delay gratification. There’s no sense of investment. Everything needs to be an instantaneous hit of dopamine. There are very few things in life where instant gratification is nearly as satisfying as a delayed gratification activity.
In a recent interview with Eugenia Kuyda, the CEO of Replika (An AI friend company), she said, “It’s okay if we end up marrying chatbots.”
Here’s her response to a very good question:
Question: “When we started out this conversation, you said Replika should be a complement to real life, and we’ve gotten all the way to, “It’s your wife.” That seems like it’s not a complement to your life if you have an AI spouse. Do you think it’s alright for people to get all the way to, “I’m married to a chatbot run by a private company on my phone?”
Kuyda: “I think it’s alright as long as it’s making you happier in the long run. As long as your emotional well-being is improving, you are less lonely, you are happier, you feel more connected to other people, then yes, it’s okay.”
Feel more connected to other people? Really? This is disconnected, disingenuous, or outright stupid. Sure, it could be simple disingenuousness. After all, her job is to hawk her company’s wares. But it should be obvious that being married to a chatbot won’t make us more connected to other people. This situation reminds me of a documentary I watched years ago about people in love with their RealDolls. They’d take them out for drives, sit down for dinner, and watch TV with them, just like another human. You know what the documentary didn’t show? Their friends!
We highlight this disconnect by examining something simple between real friends, like laughter. Is our LLM-powered friend going to make us laugh? I mean, a real guttural laugh that sticks with us? Or will it try to entertain us with a mindless video it thinks we’ll like, generating a momentary chuckle that gets lost in the din of distraction? This and many more cheap substitutions await us, beaten into submission, until we won’t remember the real thing.
More Cringe
With the Friend device, there’s a supreme disconnection from reality, but this isn’t the exception. This is becoming the rule. The Friend gadget is the most obvious incarnation of this, but this disconnection is everywhere in the AI space. This is on full display when we hear the AI tech crowd talking about creativity and creative arts. You can tell these people have never been creative in their life and understand nothing about art. Not even a little bit.
I can’t remember who said this, but someone commented about this situation, saying that the Silicon Valley crowd is just a bunch of people having fun with their friends. There’s some truth to this. It’s like a Silicon Valley garage band, but instead of music, it’s tech. So, it’s not about art or creativity at all. The point is to make “cool” tech, whether it solves a problem or not. It’s a familiar theme.
However, startups are not the only ones exhibiting this cringe factor and disconnection. Google’s new Gemini video mixes both cringe and dehumanization, all in the name of optimization.
There are so many things wrong with this commercial. All these tech types fail to realize that some things are supposed to have friction. Friction is how we grow and become better. Friction is how we challenge ourselves. Even things like second-guessing and self-reflection are a form of friction. We are optimizing all the wrong things, a topic I’ve covered twice before, in Optimizing Away Human Interactions With AI and Outsourcing Simulated Emotional Connections To Bots.
Now, do you think Sydney would rather get a letter from a little girl who struggled to put her words to paper, leaving every imperfection as evidence of her effort and caring, or from Gemini? Which scenario do you also think would be better for the little girl? The answer is so blatantly obvious, well, obvious to us humans, at least. (I’ll avoid making a second alien joke here.)
Technical Issues
So far, I’ve only discussed the human aspects of technology, but there’s a lot more when considering the technical risks. There’s far too much to cover, but I’ll highlight two. For more information, you can read my post introducing SPAR.
Privacy is one of the obvious issues. This is because all of that data collected and shared with our AI friend is valuable. If there’s one thing we’ve learned from recent history, it’s that data available is data exploited with all of our personal thoughts and interactions monetized and weaponized against us. Even if the startup creating the AI friend application claims to respect your privacy, when they get acquired (possibly specifically for this type of data) all bets are off.
At least the people in the documentary I watched years ago didn’t have to worry about their RealDoll harvesting sensitive data and snitching back to the company.
Perverse Alignment
Can we be sure that our AI friend is aligned with our best interests?
A perverse alignment is the alignment of a system to serve the best interest of the company or organization that created it over the user using the system. There is the potential to nudge and push users to do all sorts of things. This may be to buy products or spend more time on the platform. In the AI friend scenario, spending more time on the platform leads to less time with real friends.
It may be difficult to identify when a system is aligned like this. It’s not like our AI friend will respond, “You’ve been worried about car insurance. Do you know who has great car insurance? GEICO.” I made the same GIECO joke back in February 2023 about AI-powered search engines. I gotta get some new material.
Loneliness
I don’t mean any of this to discount the loneliness epidemic happening with younger people. This epidemic is something Jonathan Haidt covers at length and is infinitely more qualified to address than I am. I’ll give you a hint, though. Do you know what he doesn’t recommend? More technology.
This crisis is, at least in part, fueled by technology. There’s something perverse about layering even more technology to solve a human problem. An old saying about treating the symptoms instead of the cause applies here.
There’s a problem with a device that is basically a super-powered inspirational quotes machine, telling us everything we want to hear. We never get better, we never challenge ourselves, and we never encounter real satisfaction. We get stuck in a loneliness loop, with only momentary relief. It’s like if we had an excruciating headache every day, we wouldn’t put up with it, chewing ibuprofen like it was candy to gain temporary relief every day. We’d try to find the cause and address it. This situation is no different.
The AI Religion
Part of the problem is that AI has turned into a religion. I’ve joked about how these devices often resemble communion wafers, but I don’t believe the Catholic Church has had any influence on them. People have talked about AI in more religious contexts, attacking people without enough faith and elevating people they believe are prophets. AI has died, AI has risen, AI will come again.
Religions seldom involve questions, at least not questions that have answers, which is perfect for our current AI moment and aligns with hype. We have to take it on faith that things will get better, and the sermons from AI prophets aren’t merely an attempt to get more profit.
Read Ray Kurzweil’s new book The Singularity is Nearer for more religion-related disconnections from reality. I swear, I’ve pulled a muscle in my neck, shaking my head at all the misperceptions and misunderstandings contained within the book. But Kurzweil is a prophet in the church of AI, and what I’m saying now is blasphemous. If Kurzweil says something, it requires taking it on faith.
When we dig into it, people like Kurzweil, Chalmers, and Clark push a transhumanist vision for humanity that converts us into the Borg, stripping away our humanity and turning us into machines. Resistance will most likely be futile.
What happens when we evolve not to know or have true love and friendship? Will we be better or worse off? Evolving into a machine doesn’t sound appealing to me, but the transhumanist figureheads push the opposite perspective. Transhumanists push the perspective that merging with machines will make us superior humans, but it will most likely make us average machines. That’s not a good trade. I’ll expand upon this in a different post.
Transhumanists push the perspective that merging with machines will make us superior humans, but it will most likely make us average machines.
Don’t fret over my immortal digital soul. I’ve already prayed my five Hail Turings for the day.
Conclusion
As we navigate the sea of innovation porn, let’s not set our course away from humanity. Core features of our humanity make us unique on this planet, not our processing capabilities. We can have technology that works for us and maintains our humanity. Don’t believe those who tell you it’s a tradeoff. They are selling something.
Also, let’s use LLMs for what they are good for, not for friends or lovers. There are plenty of tasks for which you can apply LLMs to boost efficiency and actually solve problems. Do that. Friendship isn’t a technology problem. Neither is love.
If you are hopeful about the future and of technology but remain skeptical of BS claims and other nonsense, hang in there. More and more people are voicing their opinions, and it’s no longer a lonely hill to stand on.
You might wonder why AI companies are working on seemingly simple and unimportant advancements in AI when there are much more significant problems to solve. Why would companies trying to create AGI get sidetracked by focusing on potentially already-solved problems? A couple of examples are OpenAI’s voice cloning, Google’s VLOGGER, and Microsoft’s VASA-1.This research, for many, only seems to have use cases for fakes and frauds, but I believe this work signals something much deeper: that we could be near the peak of LLM capabilities. With AGI off the table, it is time to go deep and get very personal.
Peak LLM
Although you can do some cool things with LLMs, and we’ll no doubt see further applicability in other use cases, it’s a far cry from their touted value. You know what I’m talking about, the more impactful than the printing press crowd that still seems to swarm every conversation on the topic. These people talk about 10x, 100x, and even 1000x productivity boosts with LLMs. Compared to bold AGI claims and nonsense productivity levels, a 10% efficiency gain seems inconsequential.
The Wall Street Journal reported that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models last year but brought in only $3 billion in revenue. Ouch! There is reporting on the dismal outlook for generative AI, and some foresee a new Dotcom crash.
People have become more skeptical of claims (as they should), and it seems that many more people are noticing. You can’t believe the demos you see. Many are highly controlled or manufactured altogether. Even the SORA demo that everyone lost their minds over wasn’t what it purported to be.
LLMs are under-delivering on their overhyped promises.
I don’t know what to think about the economic angle. It’s not my area of expertise. I just now know that LLMs are under-delivering on their overhyped promises. Where leads economically, I don’t know.
Many LLMs, including open-source models like Llama 3, are catching up to GPT-4. Even if they don’t have the exact level of performance, they are close, which should tell us something. We may be hitting peak LLM capabilities. This means GPT-5 won’t be AGI or exponentially better than GPT-4. GPT-5 may be better than GPT-4 in some ways, but it is far from a groundbreaking explosion of capabilities.
This lack of performance isn’t going unnoticed at the companies building the technology either. This is why a new approach is needed by companies looking to monetize AI investments further. There’s about to be a shift away from a focus on AGI (although they’ll still talk about it) and ever more capable models to you. That’s right, you.
You’re Next
Just because we may be hitting peak LLM capabilities doesn’t mean things will stop. When you’ve reached the limit of going wide (general), you go deep (personal). This will be a sleight of hand shifting from purely training larger models on more data, creating more capabilities in a broad sense, to deeper, more personal integration.
These companies will make it all about you, not because you are the most important aspect, but because you are where the data is. With systems that are closer to you and more integrated with your data and activities, these companies are hoping to make the products more sticky, with the beneficial exhaust of having access to all your data.
The hope is that an epiphany will sprout from your screen as you find the same tools you previously could take or leave now indispensable. Or maybe even fool yourself with the tech, as the public launch of ChatGPT showed. ChatGPT became a social contagion not because people found it so indispensable but because we are bad at constructing tests and good at filling in the blanks.
But don’t take my word for it. Sam Altman has already started pivoting in this direction. Here’s what he says about the goal of AI: “A super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” That’s pretty creepy. But there’s more.
You can make the tech more sticky by allowing people to personalize and customize in more advanced ways. Technology like voice cloning and animating faces supports this customization aspect. When you can choose whoever you want to be your assistant’s AI avatar, you can anthropomorphize it more. How would you feel if a random stranger used your face and voice as their personal assistant? What about a family member? Is this creepier still? Oddly enough, it serves no purpose for the individual user. It doesn’t make the tool any smarter or more capable. It only exists to manipulate us or allow us to manipulate ourselves.
In the end, you’ll be blamed for LLMs’ lack of success by not allowing them to plunge deeply enough into your life. There’s a saying that if you don’t pay for something, then you are the product. Well, in the age of generative AI, you can pay for something and still be the product. The future’s so bright 😎
Even Deeper
AI companies are doing their best to make this technology unavoidable. We are getting AI whether we want it or not. It’s being baked into the very foundations of our computing systems, and even your humble mouse hasn’t escaped this integration.
How you deactivate these integrations will be anyone’s guess, as the flood of new integrations infects every application imaginable. A security check will be due soon, but security issues aren’t the only problem. As I’ve said, we are creating a brave new world of degraded performance. In an attempt to make hard things easier, we may make easy things hard.
Applications of narrow AI are cool and can be incredibly useful for certain tasks, but does it warrant hooking everything up to LLMs and hoping for the best? I don’t think so, and this approach is fairly misguided, opening us up to unnecessary risks.
Conclusion
We must be much more selective before blindly accepting deep data access and personal integration for these tools. This can start with a few relatively simple questions. What do we hope to gain from this access? How will this provide a measurable benefit? And, most importantly, are the trade-offs worth it? The answers to these questions will be different for everyone.
In many cases, it appears that for the small price of your soul, you can appear and sometimes feel marginally better in some aspects but be measurably worse in others. Does that sound like a good trade?
So, let’s talk about posthumanism for a moment. Yes, posthumanism is actually a thing, and it can sound like a rather odd movement to cheerlead. After all, we as humans aren’t done being human yet. Posthumanism’s adherents are anxiously awaiting the next stage of human evolution, homo technologicus. Yes, it’s also a real thing. I’ve also heard terms like techno-progressivism thrown around. As serious as some of these people may be, their concepts are surrounded by techno-utopian bullshit.
As amazingly silly as this sounds, their views aren’t far off from those of many people these days. Everyone from pure techno-utopians to level-headed “normal” people is kinda thinking the same thing. Let’s slap a bunch of tech inside our bodies and see what happens.
My goal with this post isn’t to address all the narratives or poke even more holes in the logic. I’m writing a book covering this and other topics. For this post, I want to point out a few glaringly obvious issues that should get more attention. The point of this post is that there is no free lunch regarding human augmentation.
Human Augmentation Must Be Universally Good, Right?
I never cease to be shocked at the casual nonchalance of people discussing slapping a bunch of tech inside their bodies, melding our brains with machines. I realize there’s a cool sci-fi aspect to it, but in real life, we have things called consequences. It’s different if there is a cognitive or motor impairment that the technology corrects for, another thing entirely when no impairment exists.
As a security researcher, I can’t bring myself to imagine these systems not being vulnerable to attack and, almost as bad, being used to manipulate us. We like to think of ourselves as the pillars of agency, but in reality, we can be nudged to do all sorts of things, resembling more automatons than humans.
This means that any of these systems would need to have a safe technical baseline. For a basic framework of a safe baseline, see the SPAR categories I’ve outlined previously.
I could address many other technical issues, but for the sake of this conversation, let’s call it a perfect technical implementation. A cognitive symbiosis of mind and machine without any technical issues or glitches. It is a completion of the techno-utopian dream.
Let’s look at why, even in a perfect implementation, there is still no free lunch.
Socrates
To look forward, let’s look back. This is Socrates. Totally not a fake photo, by the way.
Socrates has become a popular punching bag for the AI crowd. Apparently, dunking on a 5th-century BCE philosopher has become some sort of modern-day sick AI burn. So, what sin did Socrates commit that is so egregious to AI leaders today? He was against writing things down.
Socrates worried that writing things down would affect his memory, so he became a punching bag. However, what many don’t realize is that he wasn’t wrong. Writing things down can negatively affect your memory.
We can’t seem to imagine the past without viewing it through the lens of the present. People’s memories were far better in the past than they are today, even pre-social media and the attention apocalypse. It doesn’t take much thought to recognize this. In ancient times, when most people couldn’t read or write, the only place to store knowledge was in their heads. Even asking someone else, you were querying tribal knowledge stored in someone’s head. To his credit, Socrates stumbled onto cognitive offloading and recognized one of the effects.
Ultimately, we are better off for writing, and the benefit of writing things down far outweighs the benefits of a localized, tribal memory, even if individual personal memory is decreased. There are also other interesting effects of writing that Socrates missed, such as exploring thoughts and ideas and some of the memory-reinforcing effects. So, let’s forgive a 5th-century BCE philosopher their faults and focus on what he recognized for a moment: cognitive offloading.
Cognitive Offloading
Cognitive offloading is using physical action to alter the information processing requirements of a task to reduce cognitive demand. We all do this every day. If you’ve ever left yourself a note or set up a meeting in your calendar application, you’ve performed cognitive offloading.
This activity is beneficial since we only have so much cognitive capacity. It’s not just memory but decision-making skills as well. There’s a famous story about President Obama and why he only wore gray or blue suits. He was paring down his decisions.
I know it seems I’m making the posthumanist argument for them, but bear with me. Not all cognitive offloading is the same. In 2016, I heard the evolutionary biologist David Krakauer discussing cognitive artifacts on the Making Sense podcast. This was in the context of discussing complexity and stupidity. He referred to complimentary and competitive cognitive artifacts.
Without being too wordy, complementary cognitive artifacts help you create a model of the problem and are tools that rewire our brains to make problem-solving more efficient. These are things like maps, language, and even the abacus.
Competitive cognitive artifacts don’t augment our ability to reason but instead replace our ability to reason by competing with our own cognitive processes. Classic examples are the calculator or GPS navigation.
The interesting thing here is that complementary cognitive artifacts have imprinting and additional positive effects. For example, being proficient with maps increases spatial awareness. On the other hand, with competitive cognitive artifacts, you are probably worse off when the artifact is removed. For example, using GPS navigation systems degrades spatial awareness, so when it is removed, you are less capable than before.
I’m not arguing that we should destroy all calculators (or GPS navigation systems); I’m only pointing out the impacts of reduced cognitive function. It’s also interesting to consider that AI tools are almost universally competitive cognitive artifacts. We assume, wrongly, that there isn’t a cost to this augmentation. I mean, everything has tradeoffs in life. Technology is no different.
To avoid making this blog post a book a whole book, let’s look at memory.
Memory Storage
Most humans realize that memory is a limitation. Unless we are savants, there are only so many things we store in our heads. But we may be taking the offloading of memory too far. Let’s think about what we are actually doing. As humans, we are transitioning from knowing things to knowing where things are stored. We’ve treated this as universally beneficial without considering side effects.
We are transitioning from knowing things to knowing where things are stored
AI didn’t initiate this trend, but it has accelerated it, especially with systems like ChatGPT, which people use as oracles. This means the information we are retrieving may never have existed in biological memory in the first place and, more interestingly, may not be stored even after we retrieve it. Anyone who’s ever followed a YouTube tutorial on how to do something and, despite performing the task, had to review it again the next time can attest to this.
This brings up some interesting thought experiments. Is someone who doesn’t have any deep knowledge contained in their biological memory smart? After all, information on astrophysics is a search away. Would we say someone proficient at searching Google or prompting a language model is smart? Okay, let’s phrase the question a different way.
Is an average human + Google (or insert favorite AI tool here) smarter than Einstein or Von Neumann? After all, they have access to far more information far more quickly than either of those scientists ever did. Of course, the answer is no. We instinctively know there’s something more to knowledge and intelligence than merely knowing where data is stored or getting a summary from a document.
There’s no doubt that people may feel like Einstein, but that’s a topic for another day.
Human memory is getting worse, no doubt, due to technology. At the veterinary office I visit, I’ve seen people walk out of the exam room to use the restroom, go to the front desk, or go out to their car, and not remember which exam room they came out of. A clear degradation of spatial memory. These weren’t kids on TikTok or people staring down at their phones. People of all ages are represented.
But, not all memory tasks are straight lookup tasks, and memories spontaneously emerge. Sometimes, I bust out laughing when a memory pops into my head. This spontaneous surfacing has benefits, such as the creation of epiphanies and novel concepts creating a satisfaction that can’t be replicated with technology. What happens when this spontaneity disappears? Not only are we worse off, but it leads to more questions.
How do we develop novel ideas and concepts if we don’t have the right knowledge in our biological memory? It’s one thing to have knowledge and some novel concepts in memory and then explore external storage locations for further data. It’s another thing entirely to have no deep knowledge contained in biological memory and expect novelty to emerge because of access to external storage. I know the techno-utopians would say that we’ll build algorithms for this, but it’s a challenging problem and not the same thing and wouldn’t lead to the same results.
Humans + AI = Superhumans?
Human augmentation with AI is being sold as an intellectual get-rich-quick scheme, but the reality is gaining knowledge is hard. Sometimes, it is very hard, and there aren’t any shortcuts today, no matter how many prompts we create or documents we summarize. However, cognitive illusions are easy to come by. We end up fooling ourselves into thinking we know more than we do. Once again, AI didn’t start this trend. It’s merely the accelerant.
There’s a fundamental illusion clouding many people’s perceptions. Just as we can’t seem to view the past without the lens of the present, we can’t envision the future without using the same lens. We tend to assume we’ll keep our same faculties and gain more capabilities, resulting in some sort of win-win situation.
We mistakenly think human augmentation makes us superhuman, but in reality, it probably doesn’t. Despite knowing where information is stored and being able to perform some additional computational tasks, which may give us some superhuman capabilities in a few narrow areas, the reality is it may not make us superhuman overall and probably makes us worse. These additional capabilities will create very real and expanded blind spots and deficiencies. Of course, these won’t be identified until far too late, and everyone will claim not to have seen them coming.
These additional capabilities will create very real and expanded blind spots and deficiencies.
We haven’t even asked ourselves what we hope to get from this symbiosis or augmentation. There is just this generic sense of “enhancement,” but nothing overly specific. It’s one thing if the augmentation addresses some deficiency, such as reduced cognitive or motor function, but what are we addressing when a perfectly functioning human decides to augment themselves?
The reality is that when this symbiosis happens, we will become completely dependent on technology for far more than complex tasks; we will also be dependent upon it to function in our daily lives, even for simple tasks. This is because we will use the resources to offload even more cognitively, regardless of task complexity. Who wins in this scenario? Tech companies? Society? Us? At this point, will the technology still be working for us, or will we be working for the technology? More importantly, at what point do we stop being recognizable as humans?
Parting Thought
I’m not opposed to human augmentation or even being augmented in some way myself. But as an adult who has lived on planet Earth for a bit, I want to understand the tradeoffs. Understanding the costs is essential to determining whether the augmentation is worth it. It seems that in some cases, we may be stiffed with a hefty bill that we never would have agreed to ahead of time.
When it comes to being human, there are certain things we’d like to protect and certain things we are fine giving up. This will be different for each individual, but we all have this. These considerations will have to be part of our future decisions.
Our brains seek to free up resources and limit the amount of work they perform to create brain capacity for other tasks. In short, our brains seek to offload as much as possible. This is something we don’t consciously realize. It’s one of the reasons we prefer getting an answer to solving a problem. Our brains seek the offloading path, whether it’s helpful or not. This evolutionary quirk may have served us well in the past, but with technological advances, it may not serve us well in the future.
The movie Idiocracy is a cult classic that has been quoted more and more over the past few years. Here’s something to think about. It could be that Mike Judge got the future outcome of the movie’s setting right but just got the premise wrong. The only way the world of idiocracy could have come about is if highly capable AI had been in the background, making everything work and, of course, manufacturing Brawndo. Brawndo has electrolytes!
This may seem like an odd book recommendation for 2023. After all, the book is 74 years old. Maybe you, like myself, read it when you were in school and felt that you’d gained all the insights from reading and classroom discussions. Do you remember any of those? I know I didn’t.
Revisiting a text like 1984 with the benefit of years and new context can lead to surprising insights. For example, did you notice the device called a Versificator? It’s a generative AI (of sorts) and its purpose was to crank out creative content, such as literature and music, without needing to expend creative thought. I’ll leave you to ponder the parallels with our modern boom in creative, generative AI (Dall-E, ChatGPT, etc.)
However, if you ask ChatGPT about its role in the story, it thinks it’s much bigger. Thanks to @CoryKennedy on Twitter for the image and the laughs.
What Made Me Revisit 1984 in 2022?
Believe it or not, it wasn’t misinformation, disinformation, or even surveillance discussions. It was something far less intelligent.
A while back, a person I was conversing with made some outlandish claims contrary to proven scientific facts. They insisted people shouldn’t be able to claim otherwise. Instead of directly challenging the person, I stated, “Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.”
The person gave me a puzzled look. Very proud of myself for remembering the quote, I smiled and said, “It’s from 1984.”
They responded, “I don’t care what year it’s from. That’s stupid.”
That exchange made me realize a few things. It’s been over 30 years since I’d read the book. I don’t remember the time when I’d read it. I was too young and cared too little. The quote I so proudly produced wasn’t from my reading but from others’ usage. I made a commitment to re-read it again in 2022.
Context
Put the reading in the context of the technological present. There’s a lot of referring to “the party” in the book, but just replace that with any other current group (tribes, in-groups, out-groups, conspiracists, etc.) The suspicion of other in-group members is like attacking your “near enemies.” For example, It’s easier for a group of conspiracy theorists to attack an in-group member who may agree that Bill Gates is microchipping people but not believe the earth is flat versus an out-group member who is rational and doesn’t care what conspiracy theorists think.
“The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in.”
George Orwell – 1984
Does that quote remind you of something? Concepts like the Two Minutes Hate and Atrocity Pamphlets make sense in the context of modern algorithmic social networks optimizing for increased engagement.
The big conversation of the book always seems to be the surveillance and disinformation aspects. These concepts are certainly relevant today, but not from any one place. Orwell didn’t envision surveillance capitalism on top of other surveillance activities. Also, everyone is more than happy to share their exact location at will, which would have been terrifying to Orwell, but for all of us, seems to be the norm.
There are many other relevant aspects from the book applicable to current times. Denial of Science and reality, contradictory actions such as Doublethink, controlling language, and even re-writing or reframing history to fit changing narratives.
Orwell was on to the fact that people act differently when they know they are being observed. The same is true on social networks. People are more likely to share misinformation that aligns with their biases when they know others will see it.
I enjoyed my rediscovery. It made me think about its applicability in our algorithmically driven, tribal, and divided times, even though it was written in 1949. It also made me think of other texts I may have overlooked, such as Jules Verne’s Paris in the Twentieth Century. I normally don’t pre-plan my reading, but I may need to add consider reading this in 2023.
With that, I’ll leave you with a few of my favorite quotes from the book.
A Few of my Favorite 1984 Quotes
“The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in.”
“In our world there will be no emotions except fear, rage, triumph, and self-abasement.”
“The Revolution will be complete when the language is perfect.”
“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten.”
“Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.”
“The children, on the other hand, were systematically turned against their parents and taught to spy on them and report their deviations. The family had become in effect an extension of the Thought Police.”
“In Newspeak there is no word for “Science.” The empirical method of thought, on which all the scientific achievements of the past were founded, is opposed to the most fundamental principles of Ingsoc.”
“A Party member is expected to have no private emotions and no respites from enthusiasm. He is supposed to live in a continuous frenzy of hatred of foreign enemies and internal traitors, triumph over victories, and self-abasement before the power and wisdom of the Party.”
“Who controls the past controls the future; who controls the present controls the past.”
“And if the facts say otherwise, then the facts must be altered. Thus history is continuously rewritten. This day-to-day falsification of the past, carried out by the Ministry of Truth, is as necessary to the stability of the regime as the work of repression and espionage carried out by the Ministry of Love.”
“Crimestop, in short, means protective stupidity.”
“Doublethink means the power of holding two contradictory beliefs in one’s mind simultaneously, and accepting both of them.”
“One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish the dictatorship.”