AI security is a hot topic in the world of cybersecurity. If you don’t believe me, a brief glance at LinkedIn uncovers that everyone is an AI security expert now. This is why we end up with overly complex and sometimes nonsensical recommendations regarding the topic. But in the bustling market of thought leadership and job updates, we’ve seemed to have lost the plot. In most cases, it’s not AI security at all, but something else.
Misnomer of AI Security: It’s Security From AI
I recently delivered the keynote at the c0c0n cybersecurity and hacking conference in India. It was truly an amazing experience. One of my takeaways was encouraging a shift in perspective on the term “AI Security,” highlighting how we often approach this topic from the wrong angle.
The term “AI Security” has become a misnomer in the age of generative AI. In most cases, we really mean securing the application or use case from the effects of adding AI. This makes sense because adding AI to a previously robust application makes it vulnerable.
In most cases, we really mean securing the application or use case from the effects of adding AI.
For most AI-powered applications, the AI component isn’t the end target, but a manipulation or entry point. This is especially true for things like agents. An attacker manipulates the AI component to achieve a goal, such as accessing sensitive data or triggering unintended outcomes. Consider this like social engineering a human as part of an attack. The human isn’t the end goal for the attacker. The goal is to get the human to act on the attacker’s behalf. Thinking this way transforms the AI feature into an actor in the environment rather than a traditional software component.
There are certainly exceptions, such as with products like ChatGPT, where guardrails prevent the return of certain types of content that an attacker may want to access. An attacker may seek to bypass these guardrails to return that content, making the model implementation itself the target. Alternatively, in another scenario, an attacker may want to poison the model to affect its outcomes or other applications that implement the poisoned model. Conditions like these exist, but are dwarfed in scale by the security from AI scenarios.
Once we start thinking this way, it makes a lot of sense. We shift to the mindset of protecting the application rather than focusing on the AI component.
AI Increases Attack Surface
Another thing to consider is that adding AI to an application increases the attack surface. Increase in attack surface manifests in two ways: first, functionally through the inclusion of the AI component itself. The AI component creates a manipulation and potential access point that an attacker can utilize to gain further access or create downstream negative impacts.
Second, current trendy AI approaches encourage poor security practices. Consider practices like combining data, such as integrating sensitive, non-sensitive, internal, and external data to create context for generative AI. This creates a new high-value target and is a poor practice that we’ve known from decades of information security guidance.
Also, we have trends where developers take user input, request code at runtime, and slap it into something like a Python exec(). This not only creates conditions ripe for remote code execution but also a trend where developers don’t know what code will execute at runtime.
Vulnerabilities caused by applying AI to applications don’t care whether we are an attacker or a defender. They affect applications equally. This runs from the AI-powered travel agent to our new fancy AI-powered SOC. Diamonds are forever, and AI vulns are for everyone.
It’s Simpler Than It Seems
Here’s a secret. In the real world, most AI security is just application and product security. AI models and functionality do nothing on their own. They must be put in an application and utilized in a use case, where risks materialize. It’s not like AI came along and suddenly made things like access control and isolation irrelevant. Instead, controls like these became more important than ever, providing critical control over unintended consequences. Oddly enough, we seem to relearn this lesson with every new emerging technology.
In the real world, most AI security is just application and product security.
The downside is that without these programs in place, organizations will accelerate vulnerabilities into production. Not only will they increase their vulnerabilities, but they’ll be less able to address them properly when vulnerabilities are identified. Trust me, this isn’t the increase in velocity we’re looking for.
I’ve been disappointed at much of the AI security guidance, which seems to disregard things like risk and likelihood of attack in favor of overly complex steps and unrealistic guidance. We security professionals aren’t doing ourselves any favors with this stuff. We should be working to simplify, but instead, we are making things more complex.
It can seem counterintuitive to assume that something a developer purposefully implements into an application is a threat, but that’s exactly what we need to do. When designing applications, we need to consider the AI components as potential malicious actors or, at the very least, error-prone actors. Thinking this way shifts the perspective for defending applications towards architectural controls and mitigations rather than relying on detecting and preventing specific attacks. So much focus right now is on detection and prevention of prompt injection, and it isn’t getting us anywhere, and apps are still getting owned.
I’m not saying detection and prevention don’t play a role in the security strategy. I’m saying they shouldn’t be relied upon. We make different design choices when we assume our application can be compromised or can malfunction. There are also conversations about whether security vulnerabilities in AI applications are features or bugs, allowing them to persist in systems. While the battle rages on, applications remain vulnerable. We need to protect ourselves.
There is no silver bullet, and even doing the right things sometimes isn’t enough to avoid negative impacts. But if we want to deploy generative AI-based applications as securely as possible, then we must defend them as though they can be exploited. We can dance like nobody is watching, but people will discover our vulnerabilities. Defend accordingly.
The past couple of years have been fueled entirely by vibes. Awash with nonsensical predictions and messianic claims that AI has come to deliver us from our tortured existence. Starting shortly after the launch of ChatGPT, internet prophets have claimed that we are merely six months away from major impacts and accompanying unemployment. GPT-5 was going to be AGI, all jobs would be lost, and nothing for humans to do except sit around and post slop to social media. This nonsense litters the digital landscape, and instead of shaming the litterers, we migrate to a new spot with complete amnesia and let the littering continue.
Pushing back against the hype has been a lonely position for the past few years. Thankfully, it’s not so lonely anymore, as people build resilience to AI hype and bullshit. Still, the damage is already done in many cases, and hypesters continue to hype. It’s also not uncommon for people to be consumed by sunk costs or oblivious to simple solutions. So, the dumpster fire rodeo continues.
Security and Generative AI Excitement
Anyone in the security game for a while knows the old business vs security battle. When security risks conflict with a company’s revenue-generating (or about to be revenue-generating) products, security will almost always lose. Companies will deploy products even with existing security issues if they feel the benefits (like profits) outweigh the risks. Fair enough, this is known to us, but there’s something new now.
What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve. This is new because it involves all risk with potentially no reward. These companies are hoping that users define a use case for them, creating solutions in search of problems.
What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve.
I’m not referring to the usage of tools like ChatGPT, Claude, or any of the countless other chatbot services here. What I’m referring to is the deep integration of these tools into critical components of the operating system, web browser, or cloud environments. I’m thinking of tools like Microsoft’s Recall, OpenAI’s Operator, Claude Computer Use, Perplexity’s Comet browser, and a host of other similar tools. Of course, this also extends to critical components in software that companies develop and deploy.
At this point, you may be wondering why companies choose to expose themselves and their users to so much risk. The answer is quite simple, because they can. Ultimately, these tools are burnouts for investors. These tools don’t need to solve any specific problem, and their deep integration is used to demonstrate “progress” to investors.
I’ve written before about the point when the capabilities of a technology can’t go wide, it goes deep. Well, this is about as deep as it gets. These tools expose an unprecedented attack surface and often violate security models that are designed to keep systems and users safe. I know what you are thinking, what do you mean, these tools don’t have a use case? You can use them for… and also ah…
The Vacation Agent???
The killer use case that’s been proposed for these systems and parroted over and over is the vacation agent. A use case that could only be devised by an alien from a faraway planet who doesn’t understand the concept of what a vacation is. As the concept goes, these agents will learn about you from your activity and preferences. When it’s time to take a vacation, the agent will automatically find locations you might like, activities you may enjoy, suitable transportation, and appropriate days, and shop for the best deals. Based on this information, it automatically books this vacation for you. Who wouldn’t want that? Well, other than absolutely everyone.
What this alien species misses is the obvious fact that researching locations and activities is part of the fun of a vacation! Vacations are a precious resource for most people, and planning activities is part of the fun of looking forward to a vacation. Even the non-vacation aspect of searching for the cheapest flight is far from a tedious activity, thanks to the numerous online tools dedicated to this task. Most people don’t want to one-shot a vacation when the activity removes value, and the potential for issues increases drastically.
But, I Needed NFTs Too
Despite this lack of obvious use cases, people continue to tell me that I need these deeply integrated tools connected to all my stuff and that they are essential to my future. Well, people also told me I needed NFTs, too. I was told NFTs were the future of art, and I’d better get on board or be left behind, living in the past, enjoying physical art like a loser. But NFTs were never about art, or even value. They were a form of in-group signaling. When I asked NFT collectors what value they got from them, they clearly stated it wasn’t about art. They’d tell me how they used their NFT ownership as an invitation to private parties at conferences and such. So, fair enough, there was some utility there.
In the end, NFTs are safer than AI because they don’t really do anything other than make us look stupid. Generative AI deployed deeply throughout our systems can expose us to far more than ridicule, opening us up to attack, severe privacy violations, and a host of other compromises.
In a way, this public expression of look at me, I use AI for everything has become a new form of in-group signaling, but I don’t think this is the flex they think it is. In a way, these people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.
In a way, these people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.
Advice Over Reality
Social media is awash with countless people who continue to dispense advice, telling others that if you don’t deploy wonky, error-prone, and highly manipulable software deeply throughout your business, then they are going to be left behind. Strange advice since the reality is that most organizations aren’t reaping benefits from generative AI.
Here’s something to consider. Many of the people doling out this advice haven’t actually done the thing they are talking about or have any particular insight into the trend or problems to be solved. But it doesn’t end with business advice. This trend also extends to AI standards and recommendations, which are often developed at least in part by individuals with little or no experience in the topic. This results in overcomplicated guidance and recommendations that aren’t applicable in the real world.
The reason a majority of generative AI projects fail is due to several factors. Failing to select an appropriate use case, overlooking complexity and edge cases, disregarding costs, ignoring manipulation risks, holding unrealistic expectations, and a host of other issues are key drivers of project failure. Far too many organizations expect generative AI to act like AGI and allow them to shed human resources, but this isn’t a reality today.
LLMs have their use cases, and these use cases increase if the cost of failure is low. So, the lower the risk, the larger the number of use cases. Pretty logical. Like most technology, the value from generative AI comes from selective use, not blanket use. Not every problem is best solved non-deterministically.
Another thing I find surprising is that a vast majority of generative AI projects are never benchmarked against other approaches. Other approaches may be better suited to the task, more explainable, and far more performant. If I had to take a guess, I would guess that this number is close to 0.
Generative AI and The Dumpster Fire Rodeo
Despite the shift in attitude toward generative AI and the obvious evidence of its limitations, we still have instances of companies forcing their employees to use generative AI due to a preconceived notion of a productivity explosion. Once again, ChatGPT isn’t AGI. This do everything with generative AI approach extends beyond regular users to developers, and it is here that negative impacts increase.
I’ve referred to the current push to make every application generative AI-powered as the Dumpster Fire Rodeo. Companies are rapidly churning out vulnerable AI-powered applications. Relatively rare vulnerabilities, such as remote code execution, are increasingly common. Applications can regularly be talked into taking actions the developer didn’t intend, and users can manipulate their way into elevated privileges and gain access to sensitive data they shouldn’t have access to. Hence, the dumpster fire analogy. Of course, this also extends to the fact that application performance can worsen with the application of generative AI.
The generalized nature of generative AI means that the same system making critical decisions inside of your application is the same one that gives you recipes in the style of Shakespeare. There is a nearly unlimited number of undocumented protocols that an attacker can use to manipulate applications implementing generative AI, and these are often not taken into consideration when building and deploying the application. The dumpster fire continues. Yippee Ki-Yay.
Conclusion
Despite the obvious downsides, the dumpster fire rodeo is far from over. There’s too much money riding on it. The reckless nature with which people deploy generative AI deep into systems continues. Rather than identifying an actual problem and applying generative AI to an appropriate use case, companies choose to marinade everything in it, hoping that a problem emerges. This is far from a winning strategy. Companies should be mindful of the risks and choose the right use cases to ensure success.
Weaved through the fabric of the hustle-bro culture, threaded with the drivel of influencers, lies one of the biggest cons of our current age. This is the false perception that everything we do has to be for some financial gain or public attention. With everything in life revolving around social currency or actual currency, removing friction enables us to reach value quickly. But don’t fret. The slop dealer is here with a plan to deliver us salvation, telling us that ideas are what’s important and everything else is pointless friction, needing to be optimized to reach full potential. Like so many things in our current moment, if only this were true.
Despite the decline in excitement for AI and the potential resulting market corrections, unfortunately, slop is here to stay. Although people outwardly complain about it, they are secretly glad it’s here. Being unique, thoughtful, and creative is hard. Slop allows people to swaddle themselves in a false comfort devoid of any real creativity. So, damn the torpedoes, full slop ahead.
Slop, Enshittification, and Brain Rot
Slop, enshittification, and brain rot are terms burned into our current lexicon. Although each term has a different definition, one referring to outputs, one referring to platforms, and one referring to what it does to us. When I use the generalized term slop here, I mean a mixture of all three together, a sort of thick, rancid mixture reminiscent of manure and White Zinfandel. This is because the combined term aligns better with the content and its overall impact.
The Slop Dealer
The slop dealer tells us everything is a hustle, and we need to get on board to reduce friction everywhere we can to accelerate value or be left in the dust by others using AI. They don’t talk of reasonable AI usage or prescriptions for specific tasks; it’s all or nothing. We need to surrender to the higher power. The slop dealer embodies everything that tech bro culture stands for. It’s the current equivalent of a get-rich-quick scheme, only instead of taking our money, they are stealing our attention and our satisfaction. Although sometimes they take our money too.
The slop dealer swindles us by telling us what we want to hear, that hard things are a thing of the past, and all we need is an idea. After all, everybody has ideas. These are the influencers, wanna-be influencers, and other AI useful idiots vomiting nonsense on social media. They aren’t peddling secret knowledge; they are peddling bullshit.
This pandering is done so we’ll follow them, subscribe to their newsletters, or buy their nonsense. But one of the biggest lies of all is the false impression that the value of creative pursuits lies in the end result.
Most of these people have no shame and not only believe in Dead Internet Theory, but also actively work to make it a reality. If you are wondering why people en masse find tech bro culture abhorrent, look no further than this stunning piece of work.
To quote this guy directly, “How I personally feel? I have no idea. The internet in my mind is already dead. I am the problem, right?” I get the impression this isn’t the first time he’s realized he’s the problem. Unfortunately, acknowledgement of this isn’t enough to change behavior.
The Slop Architect
The slop architect works not in traditional mediums but in ideas. To the slop architect, execution, skills, and experience are secondary, bowing at the pedestal of ideas. The fact is, most ideas are ill-thought-out, half-baked, or just plain fucking stupid. The slop architect doesn’t care because they don’t carry ideas to term; they birth them instantly, shoving them out into the world to fend for themselves as they move on to something else. I mean, the vape Tamagotchi was someone’s idea, too. Yes, please! Let’s accelerate these!
Ideas aren’t unique, precious resources, but common, run-of-the-mill, everyday occurrences for everyone on the planet. The slop architecture amplifies the fallacy that ideas are sacred and pushes the idea that if more ideas were executed, the world would be a better place. If only we had more apps, more books, more music, and the list goes on and on. This connects with people because everyone has ideas.
What most people who have thought about it for more than two seconds realize is that we don’t get to the value of an idea purely by having it. Ideas in isolation are senseless ramblings of the brain. Ideas forged and refined in the fire of execution, experience, and reflection are invaluable and fulfilling. Our ideas are never challenged in the slop architecture, leading us to new discoveries and paths, but are chucked out into the world and quickly discarded, like forgotten attempts at memes that nobody finds funny.
The AI Slop Architecture
The slop architect’s vision is implemented with the slop architecture, which presents itself as a process or application. The slop architecture is pitched as the way forward, the next-generation architecture fueling the future of humanity’s pursuits. But a simple scratch of the surface paint is all it takes to expose the entire thing as an empty shell.
When you see people pitching these types of things, it uncovers people who don’t understand creativity and certainly don’t understand where value exists in a process. Everything is a hustle for the sake of hustling. This person is hardly the only one.
Back in 2023, I jokingly created my own version of the slop architecture, which I referred to as IPIP, long before the AI influencers made it a reality.
This article was complete with a description of what would come to be known as vibe coding. “The hype has led to a new form of software development that appears to be more like casting a spell than developing software.”
Taking the slop architecture to heart, it’s not hard to find implementations already running. Books, slides, music, applications, nothing is off limits. Everything is fair game in the slop era.
Ah, Magic bookifier. Yeah, let me get on that. Any time someone puts magic in reference to AI, it’s bullshit.
People also fantasize about what advanced AI is or will be able to do. Take this use case for AGI, for example.
It reminds me of the Luke Skywalker meme where he’s handed the most powerful weapon in the galaxy and immediately points it at his face. This is informative for a couple of reasons. Movies can’t be exactly like the books for reasons other than length. They are different media with different tools. But look at the response. Human work isn’t worth protecting in the future. This is a far more common perspective than many think.
Even apps. It’s slop from all angles. So, if these tools already exist, why aren’t we all kicking back, receiving our profits? Maybe there’s something more to this than having an idea.
But we can’t just have a couple of people successfully making apps. It needs to be bigger! We are now told to await the arrival of the first billion-dollar solopreneur. Hark! The herald angels sing. Glory to the slop-born king! However, we shouldn’t get our hopes up. Setting aside how highly unlikely this is, people also win the lottery, so unless we have a mass of billion-dollar solopreneurs, it’s not proof of much. However, whenever people have strongly held beliefs, they will always point to exceptions as the rule.
It’s far more common for people to talk about a single person making a million-dollar app, and that we all can make them now. Even if this were true, it’s not like billions of people are going to make million-dollar apps or profit from a trillion new books. No degree in economics is necessary to see that the numbers don’t work. Besides, if billions of people can and will do something, then the whole enterprise becomes devalued.
The slop architecture deprives us of so much, sucking the soul out of activities until only the shriveled husk remains. There’s no learning with the slop architecture. No growth. No Reflection. No Satisfaction. It even robs us of a sense of style, something so foundational to the satisfaction of human artistic pursuits. But all things require sacrifice on the pyre of optimization. In the end, the slop architecture doesn’t democratize. It devalues, degrades, and destroys.
In the end, the slop architecture doesn’t democratize. It devalues, degrades, and destroys.
The Friction Is The Point
I’m going to let my friends in tech in on a secret, which isn’t a secret at all. The friction of an activity is directly related to the value you receive from it. The mistake being made is comparing an activity’s friction to the load time of an application or streamlining a user interface. I’ve written previously about how the next generation could be known as The Slop Generation and how we continue to devalue art. However, the removal of friction creates harmful follow-on effects.
Imagine telling Alex Honnold, “Dude, you don’t need to free solo El Capitan. We have a helicopter that can drop you off at the top.” People may see this example as silly because Alex obviously climbs mountains for reasons other than getting to the top, but it’s a mistake to assume other pursuits don’t contain similar value purely because they aren’t mountain climbing. Deep experiences don’t result from things that provide instant gratification or have little friction. Nobody finds meaning in a prompt or the resulting generation.
Deep experiences don’t result from things that provide instant gratification or have little friction.
People may see this example as silly because climbing a mountain without ropes is obviously different from something like writing a song. Except it’s not when viewed through the lens of experience. Alex Honnold doesn’t free solo mountains to get to the top or because ropes and safety equipment are too expensive; he does it because he knows there is value in the friction of his experience. He’s both challenging himself and learning about himself at the same time. He’s having an actual experience, which is hard to describe to people who have never had one. This experience enriches the conclusion of the activity, the accomplishment, which coincidentally happens to be getting to the top. However, when pursuits are framed in terms of the end results, it appears that reaching the top is the goal, and the removal of friction is logical.
Most people will never free solo a mountain, compete in the Olympics, or achieve any of the other remarkable feats that athletes at the top of their game accomplish, but that doesn’t mean we can’t have similar and fulfilling experiences, and we do this through exploration and conquering friction. When you are operating at the top of your game, you realize you aren’t competing with others, but yourself.
An artist puts a piece of themselves inside every work of art they create. AI deprives artists of having a piece of themselves included in the art, making the generated output purely an artifact of running a tool.
Slop Is Here To Stay
Immediately after Ozzy Osborne died, Oz Slop invaded social media. The prince of darkness himself fell victim to people’s boredom and lack of creativity. People chose to pay tribute to him, not through stories and anecdotes, but by slopping him into manufactured content. I can’t think of a more insulting way to pay tribute to an artist, but this is our future. Slop instead of something to say. Slop instead of stories and memories. Slop instead of emotion. Slop as a coping mechanism. May the slop be with you.
A disheartening thought is that no matter what happens to the market for generative AI, the slop will remain. People post this slop not because they enjoy it, but purely because it gives them something to post. Slop content is a stand-in for having something to say. It’s easy to generate and requires little thought, the perfect complement to today’s reactionary and performative social media environments.
In a way, this trend could create a new line of demarcation, where we start referring to things as “Before Slop” and “After Slop” to identify the creative expressions that preceded and followed the arrival of AI-generated content.
Conclusion
In the end, the slop architecture doesn’t generate experiences. Nobody is going to be on their deathbed mulling over their favorite prompts or sit down with friends and reminisce about the time they poked at a generative AI system for hours trying to get it to generate a particular image. The slop architecture doesn’t create a legacy or generate stories worth remembering or worth sharing, just pieces of forgotten garbage littering the digital landscape.
What’s the effect of exposing children to AI at a very young age? Well, we are about to find out. President Trump signed an executive order called Advancing Artificial Intelligence Education For American Youth, and, in the face of the other executive orders pushed by the administration, it may be tempting to consider this order relatively benign. I urge people to reconsider, because this order could result in catastrophic and irreparable damage to future generations of children. Move fast and break things is all well and good until the thing being broken is your child.
This move represents many of my fears coming to fruition, with all of the negative aspects I’ve been warning about becoming cemented into the foundation of future generations. You may have heard me talk about conditions such as cognitive atrophy, but early exposure to AI in education can lead to something far worse: cognitive non-development.
There are also technical concerns, including issues with security, privacy, alignment, and reliability. Children are rich sources of data wrapped up in easily manipulable packages, so it’s no surprise that tech companies are opening their AI tools to them. However, I feel these concerns are more evident to most people than the negative cognitive impacts that the introduction of AI to young children creates, especially while their brains are still developing and maturing. These are the issues I highlight here.
Key Points
Since this is a long article, I’ll call out a couple of key points:
Cognitive offloading by children and adolescents to AI short-circuits cognitive development impacting executive functions, logical thinking, and symbolic thought
We convert social to anti-social activities
The very skills kids need to use AI effectively never develop due to the overuse of AI
Core foundations of critical thinking, data literacy, and probability and statistics need to be introduced before any AI curriculum
Worldviews will be shaped by interactions with AI systems instead of knowledge, experience, and exploration
Kids need time to explore the generative intelligence inside their skulls
What Are The Hopes?
Before we begin, it’s helpful to take a step back and consider what the product of this education is supposed to look like. We envision emotionally balanced young adults exercising hardened critical thinking skills and ingenuity to create the next wave of high-tech gadgets. This is the stereotypical AI bro vision of an AI tide lifting all boats, but the reality strays far from the vibes.
There’s nothing fundamentally wrong with this perspective except that exposing children to AI tools beginning in kindergarten almost guarantees the opposite. This is for two primary reasons: the negative cognitive impacts on early childhood and adolescent development, and poor curriculum implementation.
Now, can this program succeed in a way that benefits children and empowers them for the future? Absolutely, but it would be nothing more than success by miracle. A program like this needs to be well thought out and studied, with a gradual implementation that also considers potential tradeoffs and implements mitigations for these negative effects. This is NOT what we are getting here. This fails 999 times out of 1000, possibly more. Just read the wording of the executive order and imagine people rushing to implement it, along with the bros swarming like flies around a manure pile, anxious to pitch their half-baked products.
The introduction of AI and AI tools so early in childhood education will be yet another big mistake that everyone realizes in hindsight. To set the stage, many fail to realize just how much EdTech has been a failure, and now, without addressing any of the issues, we want to add even more screens in the classroom.
I don’t think everyone involved is a bad actor with perverse incentives. I think most people genuinely want to see children succeed and flourish. However, there is no consideration here for the long-term cognitive impacts on children.
AI In Education
While I was writing this article about AI in K-12, two other articles were released about AI in higher education. The article from New York Magazine about students using ChatGPT to cheat, and the story in Time of a teacher who quit teaching after nearly 20 years because of ChatGPT. The cheating article is creating a flurry of hot takes on social media. We’ve reached a technological tipping point where students don’t see the value in education. They want accomplishment and bragging rights (degrees) without effort. Apparently, attending an Ivy League school is no longer about the education you receive but the vibes you create and consume.
And of course, queue the defensive hot takes.
This is a common retort. The mistake of assuming low-quality Q&A for actual curiosity and insight. This information was available to us all along. It just required more friction to get. So, if this is the case, then the answers we wanted weren’t worth the effort. This is hardly an earth-shattering insight, yet we’re being pitched as though it is. Keep in mind, just because these people aren’t selling a product doesn’t mean they aren’t selling something.
As usual, Colin Fraser is on point.
A problem we’ve always faced is that we never know when we are learning something in the moment that will be valuable later. We exercise a stunning lack of current awareness for future value. This happens in all manner of experiences, but especially in education. Adults lack this awareness, and it’s completely delusional to expect that K-12 students will magically sprout this awareness.
We exercise a stunning lack of current awareness for future value.
There is value in learning things, even things you don’t use for your job. We seem to think learning is contained in individualized components that fit neatly into buckets, but there are no firewalls around these activities. Learning things in one subject is rewarding and beneficial, even to other subjects. Colin is also right about driving the cost of cheating to zero, a major point everyone seems to gloss over.
In his book, Seeing What Others Don’t, Gary Klein tells the story of Martin Chalfie walking into a casual lunchtime seminar at Columbia to hear a lecture outside his field of research. An hour later, he walked out with what turned out to be a million-dollar idea for a natural flashlight that would let him peer inside living organisms to watch their biological processes in action. In 2008, he received a Nobel Prize in Chemistry for his work. This insight doesn’t come from staying in your lane, being single-minded, or asking the right questions to an LLM. Yet, this is exactly the message thrust upon us. AI doesn’t provide the happy accidents that result from exploration and the randomness of life.
Using AI instead of our brains gives us the illusion of being more knowledgeable without actually being more knowledgeable. We shouldn’t underestimate the power of this illusion because it blinds us to certain realities. AI offers an illusion that completing tasks and knowledge acquisition are the same thing, but knowledgeable and productive are completely different attributes. This positive feeling of being more productive masks that we aren’t acquiring knowledge. Numbers end up overshadowing quality, and productivity vibes end up trumping learning.
Some may argue that productive is preferable to knowledgeable in a business context, but that hardly applies in education. The ultimate goal in formal education is to learn, not produce, with the PhD being the exception. Education shouldn’t be about creating useful automatons, despite how many business leaders may want them.
AI In K-12
Introduction in K-12 means that these tools are introduced during critical brain development and could short-circuit the development and maturation of things such as executive functions, logical thinking, and symbolic thought as students offload problems to AI systems. Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools. No matter what the AI bro impulses, we should all agree that exposing kindergarteners to AI is an incredibly bad idea.
Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools.
All of the issues and negative impacts I’ve been pointing out, such as the cognitive illusions created by the personas of personal AI, along with associated impacts such as dependence, dehumanization, devaluation, and disconnection, get far worse when exposed early in childhood and adolescent development because children never discover any other way. Blasting children with AI technology in their most formative years of brain development pretty much guarantees lifelong dependence on the technology. Something that elicits drooling at AI companies, but is hardly in the best interest of human users. What we consider overreliance today will be normal daily use for them. Worldviews will be shaped not by knowledge and experience, but by interactions with AI systems.
There’s something fairly dystopian about prioritizing AI literacy while actual literacy is on the decline , disarming future students from the very skills they’d need to keep AI in check. The impression seems to be that if you can teach kids AI, you can negate negative downturns in literacy. After all, why should something like reading comprehension matter if tools provide the comprehension for us through a mediation layer? Hell, why stop there? Why not apply AI to every task that could possibly be outsourced? We are close to creating a world where raw data and experiences never hit us.
The Future Isn’t Now
In their book AI 2041: Ten Visions for Our Future, Kai-Fu Lee and Chen Qiufan have a story about children who grow up and go through school with companion chatbots to assist them in life. These chatbots adapt to them and assist them in areas where they have challenges. AI systems are ever-present companions following them through school and in life. The story is meant to have the trappings of utopia, but ends up sounding like a dystopian hellscape. To make matters worse, their story considers a perfected AI system that doesn’t have all the issues and drawbacks of today’s AI systems.
We continue to make the mistake of treating the AI systems of today as though they are the AI systems of tomorrow. Encouraged into hyperstition and thought exercises of, “It doesn’t work, but just imagine if it did!” To say that AI will cure cancer and become the cure for all of humanity’s ails may likely turn out to be true, at some point. But these accomplishments have yet to come to fruition, and don’t appear on the horizon either. So, why are we treating these systems as if they’ve already accomplished goals they haven’t? The highly capable tutor/companions of Lee and Qiufan don’t exist, yet we want to apply this non-existent vision to K-12 education as though they do. Even if they did exist, where is all this highly personalized data about your child being stored, and what is being done with it?
Less Capable, More Dependent, and Less Stable
The crux of the issue is that this program will not set kids up for success in an AI world or otherwise. This early exposure will make them less capable, more dependent, and less stable. This curriculum could teach kids all the wrong things, such as that answers can be immediate and simple, and that working out a problem isn’t as important as asking the right questions. We also teach that learning is comfortable. We give the impression that knowing things is not as important as knowing where things are stored. This is all bullshit. Kids can’t summarize their way to knowledge. But, it gets worse.
Children exposed this early never learn how to do things for themselves. They end up outsourcing problems and decisions to AI. Instead of taking feedback on how to solve problems, challenging themselves to learn, they offload the problem to AI, making them incapable and lacking confidence in the absence of technology.
This technology dependence also creeps into their personal lives, meaning going about their typical day becomes unbearable without the ability to mediate through AI. It becomes a source of authority for them and a way to avoid difficult decisions that teach them lessons. It can be hard for us to imagine today the future paralysis created when the technology is absent, even for simple decisions like how to respond to a friend’s message or whether to go outside today.
Many adults may argue that this is a small price to pay for setting kids up for success in the future. There are two flaws here. First of all, this is a monumental price. Second, using technology more doesn’t automatically mean being better at using it. For AI use, the skills you learn outside of AI’s mediation are exactly the skills that make you better at using it.
We need to focus on teaching kids to use their brains, something I never thought I’d have to say when talking about… school.
This is typically when someone brings up the calculator, insinuating that nobody needs to learn math because it exists. Although I disagree, confusing a calculator with AI technology is a mental mistake. Calculators and AI are far from being similar technologies. A calculator isn’t a generalized technology that can be applied to many problem spaces. A calculator doesn’t provide recommendations, advice, or sycophantic outputs. It won’t tell you who to date or be friends with. Oh, and a calculator is always right, unlike AI.
The hypothetical response that gets pitched around is imagining if Einstein or Von Neumann had access to AI and all of the wonderful things that would have sprouted from their genius. Maybe, however, I pose a different experiment. Imagine if Einstein or Von Neumann were a product of AI education from a very early age, where even inane curiosities were immediately satiated by an oracle. The likely output is that nobody would know their names today. We are products of our environments. Remember, there are no happy accidents with AI, only dense data distributions in which everything is shoved. In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
Avoiding Discomfort
Sam Williams from the University of Iowa said, “Now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.” We are looking to apply this in K-12, specifically when we want students to grow.
The truth is, knowledge acquisition isn’t comfortable, and students avoid discomfort like the plague. When we use AI to complete assignments, we aren’t challenging ourselves. We aren’t developing our own perspective and forming new connections between concepts. Students find writing uncomfortable and are quick to outsource to AI, but writing truly is thinking. When we write, we are confronted with our thoughts and perspectives, challenging ourselves and forming new insights. One realization with writing is that the more you do it, the better you get. This realization never comes when it’s constantly outsourced to technology.
Using AI for work-related tasks may be helpful, but using AI for education or even life is idiotic. Yet, we continue to make these foundational mental mistakes. This would be like saying that since Taylorism worked for business, why not apply it to daily life? We all know where that leads.
But we also end up robbing students of a sense of accomplishment and fulfillment, of a long-lasting sense of satisfaction, not to mention the ability to focus. And for what? Because we believe that children will need to be non-thinking automatons to have a chance in the future? This theft will have a lasting impact on the mental health of future generations.
We may experience the extinction of the flow state by never allowing people to enter it in the first place. I’ve heard people argue that they’ve entered a flow state using AI, maybe, but likely the very nature of using AI to complete tasks guarantees that you never enter a flow state. Either people are confused about what a flow state is, or they mistake the illusion of productivity for creativity and flow.
As Ted Chiang mentioned in an article I’ve referenced before, ”Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Going to the gym isn’t comfortable, but the results are physically and mentally rewarding. The mental health benefits of going to the gym aren’t intuitive. After all, how can running on a treadmill or lifting weights, activities that work out your muscles, benefit your mental state? Yet, it does. There are no firewalls around exercise either. Knowing this doesn’t stop us from making the same mistakes in cognitive areas.
When Playing It Safe Becomes The Norm
Using AI to do things is perceived as safe because if the output is wrong, we can blame the AI, versus having to work out a problem ourselves and potentially being wrong. There’s a blame layer between us and the problem.
Let’s take art, for instance. AI art is safe, unchallenging, and unfulfilling, providing no opportunity to learn about ourselves, others, or the world. And yet, the very fact that it’s safe and easy is what makes it so attractive. Failure can result from the paintbrush, but never the prompt.
Failure can result from the paintbrush, but never the prompt.
The best things in life come from not playing it safe. Taking a chance on a job, moving to a new location, or asking a person out on a date are all activities that aren’t safe, but they can end up being the best decisions we’ve ever made. We need to keep this instinct alive in children.
Lack of Resiliency
The more we rely on AI, the less we question its outputs. The more we use AI and our capabilities atrophy, the less capable we become of questioning the outputs and, hence, the more dependent we become. We end up losing a critical capability when we need it the most, or in the case of early childhood exposure, never develop it in the first place.
Modern generative AI is far from error-free. It makes frequent mistakes and hallucinates. Students must construct the cognitive fitness necessary to operate robustly using a technology that makes these frequent mistakes. This fitness isn’t built on a foundation of the same AI that has these issues.
Students also need a foundation and the ability to explore outside AI mediation. This requires both time and foundational courses and concepts. For example, this foundation should include critical thinking, data literacy, and probability and statistics. Early exposure to these concepts with late exposure to AI offers the best chances for students to build this robustness.
From Social to Anti-Social
AI is a fundamentally anti-social technology. From the ground up, we are removing the human and converting it to the non-human. Even social networks are transforming into anti-social networks. With AI’s overuse in children we teach kids that humans are second-class citizens to AI. After all, the sales pitch is that AIs are better at everything, so why should children believe otherwise?
Handing kids an oracle to ask questions not only converts a social activity into an anti-social activity but also shifts authority away from humans and onto technology. This shift would still be bad even if the technology were perfected, but it is far worse given the error-prone technology of today.
Young children are quick to anthropomorphize and will form a bond with non-human companions. Although the video of the little girl not wanting to play with the shitty AI gadget is funny, it won’t last when children are surrounded by AI. Kids will switch from actively using their imagination to becoming passive consumers of AI output.
The human retreat has already begun, as kids prefer interactions with friends mediated by a device. But now tech companies want to take this further. This is all happening outside of education, but kids can’t avoid forced interactions with their companion/tutor/friend/bot in the classroom, reinforcing this retreat.
Much of this slide comes from our tendency to oversimplify, not accounting for the bigger picture and the complexities involved. Take, for instance, a common claim that kids ask many questions, and since AIs never tire of answering them, pairing kids with AI is a natural fit. This seems like an almost throwaway point, a gotcha to any potential critic, but people making this point haven’t thought it through.
First of all, asking questions is a social activity. We interact with other humans in different environments, learning far more than the simple answer to our questions. This activity teaches us essential skills, including ones related to non-verbal communication. Humans also don’t answer questions the same way AIs do, often providing additional context and anecdotes that may further aid us in knowledge acquisition and retention.
This act connects us to other people and the world, making us active participants in something bigger rather than passively consuming an answer. I still remember anecdotes shared from my high school chemistry teacher that stick with me today. We don’t just lose context and perspective from an AI oracle, we lose something human.
When it comes to context, any expert who has asked AI questions about their topic area has been confronted with incorrect information, including something like, “I guess that’s technically true, but it’s hardly the whole story.” And this is what we want to make the norm.
Closing The Curiosity Gap
We are told that asking an AI questions makes people more curious, but AI closes the curiosity gap. By getting an instant answer, we satiate our curiosity and move on to the next thing, only digging deeper or exploring further in cases of pure necessity. This act reinforces low attention spans, further reducing the ability to focus. At some point, System 2 may become extinct. What kind of world will that create, where the world is nothing but hot takes and vibes?
AI satisfies a need for quick answers. However, searching for answers in a more traditional way means other pieces of valuable context surround you. Other rich pieces of information that lead to new ideas and new understanding. Humans have an evolutionary need for exploration.
When using AI for exploration, you are never exposed to ideas and concepts you don’t want to be exposed to. I don’t think we fully grasp just how much of an impact this selection bias will have on the future.
Sure, there are situations where a quick answer is perfectly fine, mundane things like what time a movie starts or what temperature to set your oven to cook a pie. The mistake here is assuming these situations apply evenly to all problem spaces, especially knowledge creation.
My Recommendations
Despite the many unknowns, we shouldn’t shut the door to new innovations because we could slam the door to new solutions. Although it doesn’t exist today, a robust tutoring bot focused on a single purpose and specific subjects could benefit students. The message here isn’t to discard everything but to be cautious, knowing there are tradeoffs and downsides, and incorporate mitigations.
For a program such as this to be successful, it needs to be well thought out and studied, with a gradual implementation that also considers potential tradeoffs. Without this, you have no way of telling whether you are helping or harming until it’s too late. There is no way to succeed without this step. Beyond this up-front work, I’ll make four other suggestions.
Avoid Early Exposure
Students need plenty of time to develop their brains, not technology. Early exposure should be avoided at all costs. Exposure to this curriculum should happen in high school, preferably in the last two years, not earlier. This is typically when vocational education programs were introduced in schools as well. This gap gives students time to develop skills and experiences outside AI influence and mediation. Kids adapt to technology quickly, so this later exposure will not stunt their capabilities when tools are introduced.
Create A Prior Solid Foundation
Before introducing the AI curriculum, a solid foundation in various topics should be established. This foundation should include courses in critical thinking, data literacy, and probability and statistics. These courses and concepts have been sorely lacking in K-12 education today, and their introduction is long overdue. Arming students with this foundational knowledge will allow them to question the outputs of these systems and create defenses for cognitive creep.
Smart Implementation
The implementation of the courses should be isolated and away from other topics. AI shouldn’t be woven into every topic with a tie-in. Although some would argue that an effective AI tutor could help students struggling with certain subjects, these systems have yet to be developed, much less proven effective. In almost all cases, the AI would be used as an oracle, providing answers directly instead of the necessary understanding and even discomfort that helps students grow.
Solid Curriculum
The curriculum should focus on challenging students, not giving answers. Kids often don’t realize when challenges are beneficial to them. AI tools should continue to be viewed purely as tools, not oracles or companions. The curriculum should focus on avoiding usage as personas and teaching kids how to think in terms of solutions. Appropriate labs should be constructed that give students the ability to explore concepts and define solutions, pulling AI tools in secondarily to complete the tasks and realize a student’s vision. This way, there is a separation between the mental approach and the AI components.
Final Thought
Ultimately, we may end up with anti-social, dependent, and unstable young adults. We take so many skills for granted, skills we don’t realize we developed and honed in school, and now we want to apply technology to optimize these attributes away. We need to give future generations a chance to allow their brains to develop outside of AI mediation. Here’s something to consider.
Imagine an art teacher standing in front of a class. The students aren’t in front of an easel or grasping a pencil, but sitting in front of computers. They aren’t using their hands and tools to create a vision that originates from their minds. Instead, their fingers clack on the keyboard and echo through the class as the teacher instructs them to be more descriptive and provide pleasantries to the machines. Is this really the world we want to immerse children in?
We are moving toward an existence where raw data and experience never hit us as everything becomes mediated. We prefer optimization over expertise. I’m sure the illiterate masses of the Middle Ages felt powerful after leaving a sermon by the literate priest mediating the message of the written word, but that was hardly the best state for individuals. Now we are applying this logic to AI with far-reaching consequences for the everyday life of an entire generation.
In the words of Aldous Huxley, many may mature to “love their servitude,” preferring optimization and rigid structures that take decisions off the table, making things easy, not requiring thought. In Zamyatin’s We, most inhabitants enjoyed living in One State with its rules, schedule, and transparent housing. They were happy to trade free thought and experiences for optimization, comfort, and structure. It needs to be said, over and over again: These are dystopias, not roadmaps.
In just a few short years, we’ll not only have achieved AGI but live in a world of abundance. Physical goods will be so cheap that they’ll basically be free, and we’ll be able to 3D print anything we’d like. One can only assume with their free 3D printer. We’ll connect our brains to the cloud and have seemingly endless compute, which will also essentially be free. We’ll have cured our illnesses, created replicants, and become immortal. This is but a taste of the nonsense peddled by Ray Kurzweil.
In his new book, The Singularity is Nearer: When We Merge with AI, he pushes a couple of themes. One is that all technological advancements will be universally positive. Second, the only way humans can ever hope to compete is by fully merging with technology. I have issues with both of these themes. This post attempts to address only a tiny amount of the BS in the book.
Ray Kurzweil
If you are unfamiliar with Ray Kurzweil, he’s someone propped up by many as the preeminent futurist. I recently caught one of his appearances, and his ramblings elicited a noticeable grimace from me. I must admit, I wasn’t familiar with his unique brand of absurdity. I knew he’d said some wacky stuff in the past and had a book about the singularity, but I didn’t pay him much attention. After this interview, I purchased his new book, The Singularity Is Nearer: When We Merge with AI.
The strange thing about Kurzweil is the way people treat him during interviews. I haven’t really seen anyone push him on his asininity. When interviewers attempt to question him, he makes up more nonsense and says things like if people can live 20 more years, they’ll be able to live indefinitely or that things will be free in the future, avoiding the question altogether.
The book demonstrates how disconnected and out of touch Kurzweil is from reality. But it also highlights a bigger problem. As long as tech is allowed to be presented as magic, charlatans and hucksters will run rampant. This is the playbook that Kurzweil exploits. Unfortunately, I don’t have the time to address all of the issues with the book, but I will point out a few things that stood out to me.
As long as tech is allowed to be presented as magic, charlatans and hucksters will run rampant.
Before We Start
A few thoughts before we begin. If you read the reviews of this book, they are overwhelmingly positive. I’m sure many people won’t care for this post. Kurzweil is a famous tech personality with multiple books, TV appearances, and impressive credentials. I’m a nobody security researcher. I’ll never be in demand like Kurzweil or sell as many books as him, so I’ll have to cry myself to sleep at night with my integrity intact.
After all, he was just listed on the Time100/AI list, which caused me to laugh out loud. Then again, we live in a performative age, and Kurzweil is a performer.
However, I’ve spent my entire career analyzing risks and envisioning future threats to technology, something Kurzweil is oblivious to or completely ignores. Neither is a good scenario.
It’s also important to know that Kurzweil has been wrong many, many times before. I stumbled upon this old Newsweek article from 2009, which had an amazing quote.
P. Z. Myers, a biologist at the University of Minnesota, Morris, who has used his blog to poke fun at Kurzweil and other armchair futurists who, according to Myers, rely on junk science and don't understand basic biology. "I am completely baffled by Kurzweil's popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination," writes Myers. He says Kurzweil's Singularity theories are closer to a deluded religious movement than they are to science. "It's a New Age spiritualism—that's all it is," Myers says. "Even geeks want to find God somewhere, and Kurzweil provides it for them."
The author even made a midlife crisis joke and another person accused him of trying to start a religion. Fifteen years, and not much has changed.
Let me also say that given enough time and technological progress, just about anything is possible. I think this is something that everyone innately knows. However, people like Kurzweil exploit this instinct for their benefit, running up the clock and leveraging the hype. We should be aware of this trick when evaluating claims.
Why Write This?
You might ask, why would I dedicate time to writing this article out of all the other things I could be writing? Indeed, I’d rather be writing something else, but as I was sketching my thoughts for this post, I read an article with the following quote.
“A colleague of mine, without a hint of irony, claimed that because of AI, high school education would be obsolete within five years, and that by 2029 we would live in an egalitarian paradise, free from menial labor. This prediction, inspired by Ray Kurzweil’s forecast of the “AI Singularity,” suggests a future brimming with utopian promises.”
THIS is why I’m writing it. These predictions powered by Kurzweil are fabricated bullshit. Let me go on record and say we won’t have AGI by 2029 or a utopia. Now, I’m not delusional in thinking that I would have nearly the reach needed to make a dent in Kurzweil’s impact, but I’ll reach a few people and get this off my chest. So, let’s dive in and call it like it is.
Kurzweil is a BS Artist
If I had to summarize The Singularity Is Nearer, I’d say it’s the ramblings of an aging gentleman confronted with his mortality, hoping that wishful thinking and vibes are enough to speed the tech he imagines into existence. It’s a book of absurdity wrapped in historical and disconnected examples attempting to give Kurzweil’s bullshit credibility. Even the title of the book is a sleight of hand. Sure, everything is nearer than when his previous book was written, but that doesn’t mean it’s close.
Another obvious fact on display is that if Kurzweil found himself down at the crossroads, we know exactly what he’d sell his soul for. He wants to become a robot so badly that he’s willing to shed every bit of his humanity to get it. Oddly enough, this doesn’t seem to be the bright, wavy red flag it should be. He’s so scared of death that he’s willing to discontinue being human for a small taste of an extended life.
Why Are People Convinced?
So, if my statements are true, then why is the book so convincing and the reviews universally positive? It’s not because people are stupid, but something far more simple. The book doesn’t continually and all at once slap you in the face with his flatulence. The pungent aroma is layered between positive messages (a utopia, immortality, etc.), topics that have nothing to do with this title, and historical examples of technological progress. This layering is a mental sleight of hand that has a reinforcing effect. Let me give you an example.
Imagine I wrote a book claiming that within ten years, humans would be exploring and populating the cosmos outside of our solar system. Rather than go into the specifics of my claim and address the real risks and challenges, I spend most of the pages talking about other things. I discuss at length the history of NASA and the challenges conquered to put humans on the moon, all in the span of a decade. I talk about the potential of solar sails and other propulsion technologies. I even go off on a tangent imagining the impact on humanity of having a working Dyson sphere. Kurzweil employed the same distraction techniques instead of making points or providing supporting evidence in his book.
The book itself has little to do with its title. He dedicates only a small portion of the text to this topic. He really wants you to know that, based on his vibes, utopia, and immortality are just a few years away. Kurzweil claims we’ll have a utopia in the next 20 years. It’s an easy sell since many people reading his book will still be alive. The entire book consists of telling people what they want to hear. He spends no time talking about challenges or issues. He knows that you sell a lot more books telling people what they want to hear rather than confronting hard truths. This sums up so much of our current age.
He knows that you sell a lot more books telling people what they want to hear rather than confronting hard truths.
That said, the book is an informative glimpse into the mindset of a certain type of person. These would be people with the transhumanist, posthumanist, or e/acc mindset. So much of the transhumanist argument is framed around making us better humans, but it’s really about making us into machines. I’m sure Kurzweil believes he describes a utopia. But like so many utopias, it’s just a thin layer of cheap wallpaper over a dystopia.
So much of the transhumanist argument is framed around making us better humans, but it’s really about making us into machines.
Disconnected From Reality
Kurzweil gives some of the most absurd examples in this book, proving that he has no idea how the world works and is disconnected from reality altogether. For example, after connecting our brains to the cloud, he imagines entertainment where we don’t merely watch a movie but feel the actor’s complex and disorganized emotions. Uhm… Can someone please tell him that actors are… well… acting? He doesn’t seem to realize that people acting in movies are expressing emotions, not feeling emotions. When we insist a tortured character in a movie needs to actually be tortured for entertainment, whose utopia are we living in?
Virtual Experiences
The book has an obsession with virtual experiences. He imagines scenarios such as a virtual beach vacation for your family, where you have the sights and smells of an actual beach. Nothing like taking a vacation with your family while, in reality, not taking a vacation. It reminds me of the company Rekal from the Philip K. Dick short story We Can Remember It For You Wholesale, which became the movie Total Recall for those who never read the story. I don’t know what it is with these people who seem to look at a dystopian SciFi and say, “Yes, that’s the technology we need.” These are cheap illusions that don’t have the impact of the real thing, but not to Kurzweil.
He claims simulations will be so good that there will be no point in doing the real thing and uses the example of climbing Mount Everest, which demonstrates he doesn’t understand what stakes are or what the point of doing something challenging is in the first place. In many cases, the point of performing the activity is the friction and difficulty. We just had the Olympics. Imagine telling Simone Biles, “Why put all of that hard work into competition? Soon, you’ll be able to ‘experience’ Olympic competition.” What Kurzweil doesn’t understand is that when experiences become easy, they lose their value.
When experiences become easy, they lose their value.
Confronted With His Aging
In many ways Kurzweil is keenly aware of his aging. This is obvious in his obsession with simple technology like replicants, which are merely trained on your writings. He discusses the replicant he made of his deceased father and rambles on about how fooled he was by his creation. This experiment was supposed to demonstrate the impressive capabilities of today’s technology, but it ended up just being sad.
However, what’s the point of having a chatbot trained on your writings and other material that exists after you pass away? It’s not you. It doesn’t have your identity or your true thoughts, nor does it encapsulate the complexities that make up your true identity. Even if you could create a more exact replicant, what’s the point? It still won’t be you. It could be a perfect copy of you, but it isn’t you. This is the kind of thing a narcissist would want. I don’t want a copy of myself running around, and I’m sure the world thanks me for that.
When you think more deeply about them, replicants have another problem. They are a get-out-of-jail-free card for not doing the right thing. Why spend time with your loved ones when they are alive if you can create a cheap copy to chat with at your convenience after they are gone? More time doing what you want and less time spending with the ones you love.
Things Will Cost Nothing
Not only will things be better in the future, goods will basically cost nothing. Kurzweil says that everything will become information technology, and the cost will go to zero or nearly zero, even basic necessities like food and clothing. He uses this transformation to say people won’t fight over resources anymore and uses a silly example, such as people fighting over a PDF. The whole premise is absurd. Vertical farming won’t drive food costs to zero, and people will fight over information. People get into fights over social media posts all the time.
Speaking of social media, he makes more ridiculous claims about social media and the cost/value tradeoffs. For example, he says it costs companies like Facebook, Google, and TikTok nothing after they’ve built their infrastructure, suspiciously omitting the energy costs and maintenance to run the infrastructure and the veritable army of people these organizations employ. He justifies his claim by stating that there’s no difference in cost between connecting you to a hundred people or a thousand people, as though the connection between people is where the cost is, but that’s not the stupidest part.
He says that if you could make $20 mowing a lawn but choose to spend that time on TikTok instead, then TikTok is worth $20 to you. This is asinine. Not every action you take in life is in service to make money, and not every free moment is a lost opportunity, either. So, in Kurzweil’s logic, if you could make $5 on Fiverr by designing a logo for someone but decide to sleep instead, then sleep is worth $5. You could make that $5 the next morning with no money lost. None of this even considers algorithms, the addictive nature of social media, and humans just wasting time.
Another spit-take moment is his discussion of radical life extension technology, which he states will not be available solely to the wealthy but also to the less fortunate worldwide. To prove this point, he uses the mobile phone as an example. Nope, you read that right.
Kurzweil says that since most people on the planet have a mobile phone, radical life extension technology will be available to them in much the same way due to extremely low cost. However, I think the mobile phone analogy is worth a deeper look. There’s a big difference between the iPhone in my pocket and an adware-riddled cheap cell phone subsidized by some company squeezing every drop of data from a user that it can. Tack onto this subsidized connectivity like Facebook’s Free Basics program meant to provide free internet to users in developing countries, which ultimately traps them in a Facebook hellscape, and you have the blueprint for something fairly dystopian.
Continuing his cost-nothing crusade, Kurzweil states that using robotics, cheap energy, and automation to replace labor outright in the 2030s would make it relatively inexpensive to live at a level considered luxurious. Telling people things will be cheaper, but you won’t be able to afford them because you don’t have a job is a contradiction that apparently didn’t dawn on him when he wrote that passage.
And… I’m not even going to get into his Bitcoin comments.
Jobs and Wages
Kurzweil has odd claims about jobs and wages. For example, he claims that more jobs will be created than lost, but he can’t answer what those jobs will be because they haven’t been invented yet. He uses examples like farming and the textile industry to prove his point. But this doesn’t make sense since AI is a far more generalized technology than a tractor or the power loom and can cross many different industries.
On wage stagnation, he boasts about how stagnated wages can buy more compute. Imagine that conversation with your family when having to skip a meal because you can’t afford food. “I know you are hungry, kids, but just think about how much more compute we have!”
I know you are hungry, kids, but just think about how much more compute we have!
One of Kurzweil’s favorite scare tactics is claiming there won’t be jobs for unenhanced humans and stating that until we fully merge with AI, there will be almost no jobs left. He makes multiple claims throughout the book on this point, saying biological brains cannot keep up with non-biological precision nanoengineering. Whatever the f—k that word salad means. This is another one of Kurzweil’s tactics on display. He knows most people know nothing about nanoengineering, so he bloviates on the topic. For good measure, he also mentions a world where we watch political ads or share personal data to get free nano-manufactured products. Ah, yes. The utopia we were all hoping for.
When it comes to automation replacing and disrupting the job market, he brings up a silver lining. The gig economy. He mentions the gig economy offers people more flexibility, autonomy, and leisure time. Kurzweil is so out of touch he doesn’t realize these aren’t the same thing. Once again, imagine that conversation. Telling someone who delivers for DoorDash, “Sure, you don’t have a regular job that pays well enough or has benefits, but isn’t all that leisure time great?” When you can’t pay your bills, downtime isn’t leisure time.
When you can’t pay your bills, downtime isn’t leisure time.
Being Human
In one part of the book, he questions what being human even means when introducing non-biological components and brain-computer interfaces. This is actually a great question, which, of course, Kurzweil doesn’t answer. Instead of answering, he vomits more of his pontification about inevitability, saying the non-biological component will grow exponentially while our biological intelligence will stay the same, providing a more specific prediction that in the 2030’s our thinking itself will be largely non-biological. Kurzweil has a way of stating questions as though he’ll answer them but never answering them. This is how a con artist operates, appearing to be upfront.
It should be obvious to anyone reading the book that Kurzweil really doesn’t like being human and yearns for the day to transform into something else. It doesn’t even matter to him what he becomes as long as it isn’t human.
For example, it’s uncomfortable (but necessary) to think about how replacing our biological components with synthetic ones may change us, especially when it’s not for the better. Instead of addressing this complicated reality, he makes the point that we remain the same person despite our cells going through a replacement process and our brains being almost completely replaced over the span of a few months. The implication he hopes you draw is that this non-biological replacement shouldn’t bother us. Once again, more absurdity.
Bodily regenerative processes are not the same as a wholesale replacement by synthetic alternatives. This holds true for both physical and cognitive functions. This irritates me to no end, and it’s one of the most obvious flaws in his logic. Kurzweil hopes to smother us with a pillow while he whispers, “Just let the singularity happen.”
No Downsides
One of the most apparent aspects to readers of the book is Kurzweil’s failure to mention nearly any negative aspects or potential adverse outcomes in his book. Either he’s oblivious to them or feels that adverse outcomes don’t align with his message. My guess is it’s a mixture of both.
I’ve discussed many of these downsides already, but one in particular is his presentation of simulation and self-driving cars as though they’re magic. To support this, he mentions the success of companies like Waymo. There is never a mention of Waymo’s issues, such as how these cars have been found driving down the wrong side of the road or mysteriously honking their horns. We don’t have capable Level 5 self-driving cars on the road today, and this problem is not solved. Every company working on self-driving features, from Waymo to Tesla, has issues they cannot solve today.
These are undoubtedly solvable issues, and we will have full driverless technology in the future, possibly even in the near future, but today, these companies can’t solve the problems. It’s undoubtedly disingenuous to talk about driverless cars as though they are a solved problem today.
Okay Not Knowing
Another of Kurzweil’s comfortabilities is agreeableness to not knowing how AI works or comes to its conclusions. He mentions that we may not know or understand even if explanations were provided. It’s odd that he mentions this while talking about the judicial system, an area that’s been plagued with algorithmic issues. Even outside of the judicial system and policing, there have been so many instances where algorithms have unfairly discriminated against people, denying them benefits and even entry into schools. Recently, it was announced that Nevada will use Google’s AI to determine whether people get benefits. People have a right to know why they were denied benefits, and it can’t be, “because the algorithm says so.”
Imagine an air traffic control AI that instructs pilots to fly figure 8’s around the airport before landing. Will we question this or receive it as some sort of hidden knowledge that the AI system has that we can’t fathom? This would be an obvious example that the system has an issue, but countless hidden issues wouldn’t surface in the same way. When we don’t understand how a system came to its conclusions, we set ourselves up for confounders to run rampant.
As I read the section on the judicial system, I wondered how you would ever get a fair trial by jury in the future. When everyone is permanently connected and has access to data that biases them, it may be possible for anyone to get away with a crime purely by spending enough money to taint the data. Or will you be forced to install the JuryBlocker software directly into your cognitive processes? I’m sure Kurzweil would think this thought exercise is silly because the goal is to remove humans from the judicial process altogether, but as we know, we don’t live in a perfect world. Our technology is rarely that good, and humans have a habit of not making the right decisions.
Not The Whole Story
There were so many parts of the book where Kurzweil would bring something up, and I’d be left with the thought, “That’s not the whole story.”
For example, he references things like ChatGPT passing the Bar Exam or AlphaGo beating the best Go player in the world but never tells the whole story. For example, when ChatGPT passed the Bar exam, it also passed other similar standardized tests. Researchers reworded the questions to ask the same question differently, and ChatGPT failed, proving that it had memorized data in its training data. Kurzweil wants you to believe that because of this, a lawyer’s days as a profession are numbered, but his exercise misses the more significant point that lawyers don’t sit around answering Bar exam questions all day.
AlphaGo was a truly amazing accomplishment, but Kurzweil leaves out that even average Go players can beat superhuman Go AIs these days. They can exploit these systems through adversarial policy attacks. These attacks are highly concerning if the technology is deployed in high-risk scenarios outside the game of Go.
In his discussion about disruption from AI, he claims that sometimes there aren’t any losers. For example, a revenue stream from treating a particular illness. He says there are many areas of technological change where losers don’t exist and gives the example of creating a cure for a disease. In this scenario, companies and individuals lose a long-term stream of revenue. This is another one of those sleight-of-hand things Kurzweil does. The cure is indeed more beneficial to society, but that’s certainly not how things play out in practice as large pharmaceutical companies hang on to revenue streams. No matter how cloud-connected your brain is, you won’t be able to compete with large organizations and a mobilized workforce. There may be occasional exceptions, but it’s hardly the rule.
Conclusion
This lengthy post didn’t scratch the surface of the nonsense hawked by Ray Kurzweil in his book. There are so many points I take issue with, most specifically the arguments of inevitability and the aggressive timelines he’s attached. Given all of his bullshit, you might be surprised that prominent people continue to hold him in high regard, but I’m not. Kurzweil’s tech spirituality aligns with their larger goals.
We need to ask more serious questions of people trying to sell us things, even those selling us ideas, because these things have consequences.
Look For Yourself
Before writing this article, I didn’t look for other articles or takedowns of Ray Kurzweil. I didn’t want these pieces to taint my impressions of the claims made in the book. The only exception was the Newsweek article from 2009, which I stumbled upon while looking up a specific piece of information about him. After writing the article, I was curious about what others had to say, and I promised myself I wouldn’t go back and reword anything in this article based on what I had read.
If you are still unconvinced, I’ve highlighted a few articles you can read for yourself below. Some of these are older articles, proving that nothing has changed. These are worth the read.
How Ray Kurzweil Sells His Junk Science – Geoffrey James – June 17, 2010 This is an old article, but it’s worth the read for the rules of selling junk science.
Ray Kurzweil Does Not Understand the Brain – PZ Myers – August 17, 2010 This is PZ Myers smashing Kurzweil for making the claim that by 2020, we’ll have reversed engineered the human brain. Obviously, that didn’t happen.
The singularity is not near: The intellectual fraud of the “Singularitarians” – Corey Pein – May 13, 2018. This article has an amazing quote. “Science begins with doubt. Everything else is sales.” This is something we should all keep in mind as we blindly take as fact the drivel of the AI Hype Bros. For some more notable bangers by Corey Pein, check out Cyborg Soothsayers of the High-Tech Hogwash Emporia. “Ray Kurzweil’s Singularity is an overheated white paper by a zealot for the American dream of luxury and convenience.” There were a whole lot of references to this type of thing in the book.
Anyone digging even mildly beneath the surface will see that Ray Kurzweil is a charlatan and a huckster. He’s not someone to be taken seriously. Despite this, many tech people will continue to genuflect for Kurzweil because he says what they want to hear. I also mentioned in a previous post that in our short attention span existence, we reward people for being bold, not for being accurate. Something that Kurzweil happily exploits. Welcome to the age of post-reality.
You might wonder why AI companies are working on seemingly simple and unimportant advancements in AI when there are much more significant problems to solve. Why would companies trying to create AGI get sidetracked by focusing on potentially already-solved problems? A couple of examples are OpenAI’s voice cloning, Google’s VLOGGER, and Microsoft’s VASA-1.This research, for many, only seems to have use cases for fakes and frauds, but I believe this work signals something much deeper: that we could be near the peak of LLM capabilities. With AGI off the table, it is time to go deep and get very personal.
Peak LLM
Although you can do some cool things with LLMs, and we’ll no doubt see further applicability in other use cases, it’s a far cry from their touted value. You know what I’m talking about, the more impactful than the printing press crowd that still seems to swarm every conversation on the topic. These people talk about 10x, 100x, and even 1000x productivity boosts with LLMs. Compared to bold AGI claims and nonsense productivity levels, a 10% efficiency gain seems inconsequential.
The Wall Street Journal reported that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models last year but brought in only $3 billion in revenue. Ouch! There is reporting on the dismal outlook for generative AI, and some foresee a new Dotcom crash.
People have become more skeptical of claims (as they should), and it seems that many more people are noticing. You can’t believe the demos you see. Many are highly controlled or manufactured altogether. Even the SORA demo that everyone lost their minds over wasn’t what it purported to be.
LLMs are under-delivering on their overhyped promises.
I don’t know what to think about the economic angle. It’s not my area of expertise. I just now know that LLMs are under-delivering on their overhyped promises. Where leads economically, I don’t know.
Many LLMs, including open-source models like Llama 3, are catching up to GPT-4. Even if they don’t have the exact level of performance, they are close, which should tell us something. We may be hitting peak LLM capabilities. This means GPT-5 won’t be AGI or exponentially better than GPT-4. GPT-5 may be better than GPT-4 in some ways, but it is far from a groundbreaking explosion of capabilities.
This lack of performance isn’t going unnoticed at the companies building the technology either. This is why a new approach is needed by companies looking to monetize AI investments further. There’s about to be a shift away from a focus on AGI (although they’ll still talk about it) and ever more capable models to you. That’s right, you.
You’re Next
Just because we may be hitting peak LLM capabilities doesn’t mean things will stop. When you’ve reached the limit of going wide (general), you go deep (personal). This will be a sleight of hand shifting from purely training larger models on more data, creating more capabilities in a broad sense, to deeper, more personal integration.
These companies will make it all about you, not because you are the most important aspect, but because you are where the data is. With systems that are closer to you and more integrated with your data and activities, these companies are hoping to make the products more sticky, with the beneficial exhaust of having access to all your data.
The hope is that an epiphany will sprout from your screen as you find the same tools you previously could take or leave now indispensable. Or maybe even fool yourself with the tech, as the public launch of ChatGPT showed. ChatGPT became a social contagion not because people found it so indispensable but because we are bad at constructing tests and good at filling in the blanks.
But don’t take my word for it. Sam Altman has already started pivoting in this direction. Here’s what he says about the goal of AI: “A super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” That’s pretty creepy. But there’s more.
You can make the tech more sticky by allowing people to personalize and customize in more advanced ways. Technology like voice cloning and animating faces supports this customization aspect. When you can choose whoever you want to be your assistant’s AI avatar, you can anthropomorphize it more. How would you feel if a random stranger used your face and voice as their personal assistant? What about a family member? Is this creepier still? Oddly enough, it serves no purpose for the individual user. It doesn’t make the tool any smarter or more capable. It only exists to manipulate us or allow us to manipulate ourselves.
In the end, you’ll be blamed for LLMs’ lack of success by not allowing them to plunge deeply enough into your life. There’s a saying that if you don’t pay for something, then you are the product. Well, in the age of generative AI, you can pay for something and still be the product. The future’s so bright 😎
Even Deeper
AI companies are doing their best to make this technology unavoidable. We are getting AI whether we want it or not. It’s being baked into the very foundations of our computing systems, and even your humble mouse hasn’t escaped this integration.
How you deactivate these integrations will be anyone’s guess, as the flood of new integrations infects every application imaginable. A security check will be due soon, but security issues aren’t the only problem. As I’ve said, we are creating a brave new world of degraded performance. In an attempt to make hard things easier, we may make easy things hard.
Applications of narrow AI are cool and can be incredibly useful for certain tasks, but does it warrant hooking everything up to LLMs and hoping for the best? I don’t think so, and this approach is fairly misguided, opening us up to unnecessary risks.
Conclusion
We must be much more selective before blindly accepting deep data access and personal integration for these tools. This can start with a few relatively simple questions. What do we hope to gain from this access? How will this provide a measurable benefit? And, most importantly, are the trade-offs worth it? The answers to these questions will be different for everyone.
In many cases, it appears that for the small price of your soul, you can appear and sometimes feel marginally better in some aspects but be measurably worse in others. Does that sound like a good trade?
So, let’s talk about posthumanism for a moment. Yes, posthumanism is actually a thing, and it can sound like a rather odd movement to cheerlead. After all, we as humans aren’t done being human yet. Posthumanism’s adherents are anxiously awaiting the next stage of human evolution, homo technologicus. Yes, it’s also a real thing. I’ve also heard terms like techno-progressivism thrown around. As serious as some of these people may be, their concepts are surrounded by techno-utopian bullshit.
As amazingly silly as this sounds, their views aren’t far off from those of many people these days. Everyone from pure techno-utopians to level-headed “normal” people is kinda thinking the same thing. Let’s slap a bunch of tech inside our bodies and see what happens.
My goal with this post isn’t to address all the narratives or poke even more holes in the logic. I’m writing a book covering this and other topics. For this post, I want to point out a few glaringly obvious issues that should get more attention. The point of this post is that there is no free lunch regarding human augmentation.
Human Augmentation Must Be Universally Good, Right?
I never cease to be shocked at the casual nonchalance of people discussing slapping a bunch of tech inside their bodies, melding our brains with machines. I realize there’s a cool sci-fi aspect to it, but in real life, we have things called consequences. It’s different if there is a cognitive or motor impairment that the technology corrects for, another thing entirely when no impairment exists.
As a security researcher, I can’t bring myself to imagine these systems not being vulnerable to attack and, almost as bad, being used to manipulate us. We like to think of ourselves as the pillars of agency, but in reality, we can be nudged to do all sorts of things, resembling more automatons than humans.
This means that any of these systems would need to have a safe technical baseline. For a basic framework of a safe baseline, see the SPAR categories I’ve outlined previously.
I could address many other technical issues, but for the sake of this conversation, let’s call it a perfect technical implementation. A cognitive symbiosis of mind and machine without any technical issues or glitches. It is a completion of the techno-utopian dream.
Let’s look at why, even in a perfect implementation, there is still no free lunch.
Socrates
To look forward, let’s look back. This is Socrates. Totally not a fake photo, by the way.
Socrates has become a popular punching bag for the AI crowd. Apparently, dunking on a 5th-century BCE philosopher has become some sort of modern-day sick AI burn. So, what sin did Socrates commit that is so egregious to AI leaders today? He was against writing things down.
Socrates worried that writing things down would affect his memory, so he became a punching bag. However, what many don’t realize is that he wasn’t wrong. Writing things down can negatively affect your memory.
We can’t seem to imagine the past without viewing it through the lens of the present. People’s memories were far better in the past than they are today, even pre-social media and the attention apocalypse. It doesn’t take much thought to recognize this. In ancient times, when most people couldn’t read or write, the only place to store knowledge was in their heads. Even asking someone else, you were querying tribal knowledge stored in someone’s head. To his credit, Socrates stumbled onto cognitive offloading and recognized one of the effects.
Ultimately, we are better off for writing, and the benefit of writing things down far outweighs the benefits of a localized, tribal memory, even if individual personal memory is decreased. There are also other interesting effects of writing that Socrates missed, such as exploring thoughts and ideas and some of the memory-reinforcing effects. So, let’s forgive a 5th-century BCE philosopher their faults and focus on what he recognized for a moment: cognitive offloading.
Cognitive Offloading
Cognitive offloading is using physical action to alter the information processing requirements of a task to reduce cognitive demand. We all do this every day. If you’ve ever left yourself a note or set up a meeting in your calendar application, you’ve performed cognitive offloading.
This activity is beneficial since we only have so much cognitive capacity. It’s not just memory but decision-making skills as well. There’s a famous story about President Obama and why he only wore gray or blue suits. He was paring down his decisions.
I know it seems I’m making the posthumanist argument for them, but bear with me. Not all cognitive offloading is the same. In 2016, I heard the evolutionary biologist David Krakauer discussing cognitive artifacts on the Making Sense podcast. This was in the context of discussing complexity and stupidity. He referred to complimentary and competitive cognitive artifacts.
Without being too wordy, complementary cognitive artifacts help you create a model of the problem and are tools that rewire our brains to make problem-solving more efficient. These are things like maps, language, and even the abacus.
Competitive cognitive artifacts don’t augment our ability to reason but instead replace our ability to reason by competing with our own cognitive processes. Classic examples are the calculator or GPS navigation.
The interesting thing here is that complementary cognitive artifacts have imprinting and additional positive effects. For example, being proficient with maps increases spatial awareness. On the other hand, with competitive cognitive artifacts, you are probably worse off when the artifact is removed. For example, using GPS navigation systems degrades spatial awareness, so when it is removed, you are less capable than before.
I’m not arguing that we should destroy all calculators (or GPS navigation systems); I’m only pointing out the impacts of reduced cognitive function. It’s also interesting to consider that AI tools are almost universally competitive cognitive artifacts. We assume, wrongly, that there isn’t a cost to this augmentation. I mean, everything has tradeoffs in life. Technology is no different.
To avoid making this blog post a book a whole book, let’s look at memory.
Memory Storage
Most humans realize that memory is a limitation. Unless we are savants, there are only so many things we store in our heads. But we may be taking the offloading of memory too far. Let’s think about what we are actually doing. As humans, we are transitioning from knowing things to knowing where things are stored. We’ve treated this as universally beneficial without considering side effects.
We are transitioning from knowing things to knowing where things are stored
AI didn’t initiate this trend, but it has accelerated it, especially with systems like ChatGPT, which people use as oracles. This means the information we are retrieving may never have existed in biological memory in the first place and, more interestingly, may not be stored even after we retrieve it. Anyone who’s ever followed a YouTube tutorial on how to do something and, despite performing the task, had to review it again the next time can attest to this.
This brings up some interesting thought experiments. Is someone who doesn’t have any deep knowledge contained in their biological memory smart? After all, information on astrophysics is a search away. Would we say someone proficient at searching Google or prompting a language model is smart? Okay, let’s phrase the question a different way.
Is an average human + Google (or insert favorite AI tool here) smarter than Einstein or Von Neumann? After all, they have access to far more information far more quickly than either of those scientists ever did. Of course, the answer is no. We instinctively know there’s something more to knowledge and intelligence than merely knowing where data is stored or getting a summary from a document.
There’s no doubt that people may feel like Einstein, but that’s a topic for another day.
Human memory is getting worse, no doubt, due to technology. At the veterinary office I visit, I’ve seen people walk out of the exam room to use the restroom, go to the front desk, or go out to their car, and not remember which exam room they came out of. A clear degradation of spatial memory. These weren’t kids on TikTok or people staring down at their phones. People of all ages are represented.
But, not all memory tasks are straight lookup tasks, and memories spontaneously emerge. Sometimes, I bust out laughing when a memory pops into my head. This spontaneous surfacing has benefits, such as the creation of epiphanies and novel concepts creating a satisfaction that can’t be replicated with technology. What happens when this spontaneity disappears? Not only are we worse off, but it leads to more questions.
How do we develop novel ideas and concepts if we don’t have the right knowledge in our biological memory? It’s one thing to have knowledge and some novel concepts in memory and then explore external storage locations for further data. It’s another thing entirely to have no deep knowledge contained in biological memory and expect novelty to emerge because of access to external storage. I know the techno-utopians would say that we’ll build algorithms for this, but it’s a challenging problem and not the same thing and wouldn’t lead to the same results.
Humans + AI = Superhumans?
Human augmentation with AI is being sold as an intellectual get-rich-quick scheme, but the reality is gaining knowledge is hard. Sometimes, it is very hard, and there aren’t any shortcuts today, no matter how many prompts we create or documents we summarize. However, cognitive illusions are easy to come by. We end up fooling ourselves into thinking we know more than we do. Once again, AI didn’t start this trend. It’s merely the accelerant.
There’s a fundamental illusion clouding many people’s perceptions. Just as we can’t seem to view the past without the lens of the present, we can’t envision the future without using the same lens. We tend to assume we’ll keep our same faculties and gain more capabilities, resulting in some sort of win-win situation.
We mistakenly think human augmentation makes us superhuman, but in reality, it probably doesn’t. Despite knowing where information is stored and being able to perform some additional computational tasks, which may give us some superhuman capabilities in a few narrow areas, the reality is it may not make us superhuman overall and probably makes us worse. These additional capabilities will create very real and expanded blind spots and deficiencies. Of course, these won’t be identified until far too late, and everyone will claim not to have seen them coming.
These additional capabilities will create very real and expanded blind spots and deficiencies.
We haven’t even asked ourselves what we hope to get from this symbiosis or augmentation. There is just this generic sense of “enhancement,” but nothing overly specific. It’s one thing if the augmentation addresses some deficiency, such as reduced cognitive or motor function, but what are we addressing when a perfectly functioning human decides to augment themselves?
The reality is that when this symbiosis happens, we will become completely dependent on technology for far more than complex tasks; we will also be dependent upon it to function in our daily lives, even for simple tasks. This is because we will use the resources to offload even more cognitively, regardless of task complexity. Who wins in this scenario? Tech companies? Society? Us? At this point, will the technology still be working for us, or will we be working for the technology? More importantly, at what point do we stop being recognizable as humans?
Parting Thought
I’m not opposed to human augmentation or even being augmented in some way myself. But as an adult who has lived on planet Earth for a bit, I want to understand the tradeoffs. Understanding the costs is essential to determining whether the augmentation is worth it. It seems that in some cases, we may be stiffed with a hefty bill that we never would have agreed to ahead of time.
When it comes to being human, there are certain things we’d like to protect and certain things we are fine giving up. This will be different for each individual, but we all have this. These considerations will have to be part of our future decisions.
Our brains seek to free up resources and limit the amount of work they perform to create brain capacity for other tasks. In short, our brains seek to offload as much as possible. This is something we don’t consciously realize. It’s one of the reasons we prefer getting an answer to solving a problem. Our brains seek the offloading path, whether it’s helpful or not. This evolutionary quirk may have served us well in the past, but with technological advances, it may not serve us well in the future.
The movie Idiocracy is a cult classic that has been quoted more and more over the past few years. Here’s something to think about. It could be that Mike Judge got the future outcome of the movie’s setting right but just got the premise wrong. The only way the world of idiocracy could have come about is if highly capable AI had been in the background, making everything work and, of course, manufacturing Brawndo. Brawndo has electrolytes!