It says much about our current moment that “slop” is Merriam-Webster’s 2025 word of the year. Seems mediocrity is having a moment. Amid it all, a delusion has taken hold, centered on AI and the future of work: the AI creativity delusion. For the past couple of years, companies and influencers have told us the world is our slop oyster. If we can imagine it, we can slop it. Ideas and creativity are what matter, and everything else should be automated. We are told that, despite the goal of replacing humans in the workplace, the near-term future of work is humans, still in the loop, doing what they do best: flexing their creativity and letting AI do the rest. Although conceptually appealing, this vision falls apart under even the slightest analysis, and it will hit younger generations the hardest.
The AI Creativity Delusion and the Expertise Loophole
Let me address the AI creativity delusion fueling the push for AI-powered coworkers. The premise goes that humans at work will flex their creativity while delegating everything else to their AIs. This way, the human does what they are good at (creativity and ideas), and the AI does what it’s good at (everything else). Many may nod in agreement. However, this premise falls apart under scrutiny. We’ll get to this in a moment, but first, a loophole.
I should note a loophole in AI tool productivity: the expertise loophole. This loophole applies only under specific circumstances, namely when the user already has domain expertise. A user with domain expertise and an understanding of how AI tools fail can provide corrections or put the tool back on track. You need to understand the task, know what the correct outputs are, and have enough experience to recognize when you aren’t getting the right answers.
Expertise also informs when to use AI tools and when not to, because an expert understands what needs to be done and which tasks AI can assist with in delivering the best results. I believe it’s this expertise loophole that blinds people to the real issues we are about to face.
Given the conditions of the expertise loophole, any productivity gains are temporary and apply only to specific job tasks. As experienced people leave, less experienced AI-powered coworkers cannot fill the gaps. Mainly, this is because they never have an opportunity to gain experience, the very experience they need to develop creativity in the domain. Secondarily, they outsource the social aspects of their jobs, further isolating themselves from their coworkers. It’s a one-two punch that’s hard to recover from.
A significant flaw in the creativity argument is the misconception that ideas are unique, precious resources that must be protected and fostered. I’ve written about this before, explaining that ideas are common, run-of-the-mill daily occurrences for everyone on the planet. Most ideas are ill-thought-out, half-baked, or just plain stupid. This is our reality.
The Younger Generation as AI-Powered Coworkers
I recently read an article in which a Gen Z founder claims that the younger generation will have an advantage in the workplace because they are growing up fluent in AI, and that this fluency helps them stand apart from their older peers. This is the sort of unadulterated, bubble-living bullshit we’ve come to expect these days. Younger generations may be growing up in the flatulence of AI, but that’s something else entirely.
Prolific AI use will ultimately make the younger generation worse off, as they are not set up for success in key areas that align with their value as employees.
I’ll make a claim of my own, and it’s the exact opposite. Prolific AI use will ultimately make the younger generation worse off, as they are not set up for success in key areas that align with their value as employees. This would be true even in a reality where generative AI is near-perfect, and we all know we are far from that.
I’m not attempting to beat up on younger generations. I feel for them, and I want to help. There have been so many ways they haven’t been positioned for success, and now generative AI comes along, putting more nails in the coffin.
These companies want to make you less capable and more dependent.
These companies want to make you less capable and more dependent. If you are dependent, you’ll not only need to use their tools to do your job but also in your day-to-day life. This is not a great option.
Marshall McLuhan said that every augmentation is an amputation. Framing technology this way reveals unforeseen consequences. For younger generations, the amputation happens before the limb even has a chance to develop. And rather than a physical limb, we are talking about cognitive and social abilities.
When we peer beneath the paint, we find that the very skills needed to make someone valuable at work are precisely those not being developed by overly ambitious AI-powered coworkers: the ones not communicating with their colleagues because their bots attend their meetings and respond to their email and text messages.
Revisiting Creativity In Younger Generations
How does someone develop their creativity in the first place without experience? They don’t. Generative AI today can assist people with experience in specific tasks and under certain conditions. It helps with some tasks better than others. This is because someone with experience, especially domain experience, knows how to approach certain solutions, identify the right conditions, and recognize when the system is malfunctioning. An experienced professional can flex their creativity in this environment. The same cannot be said of the effectiveness of younger generations in the same jobs, despite having access to the same tools. Effectiveness will decline sharply across multiple dimensions.
First, they’ll never develop the social and communication skills necessary to be effective members of an organization. Second, they’ll not gain the experience needed in a given domain. Finally, lacking expertise and social skills, they’ll never develop the mythical creativity discussed in AI circles. All of this will be due to overreliance on generative AI tools.
Let us examine what AI-powered coworkers actually do. They use generative AI tools to attend meetings, take notes, and create action items. They use generative AI tools to communicate with coworkers. They use generative AI tools to summarize documentation, books, and countless other sources of knowledge. They use pre-made generative AI tools to perform their job tasks. And the list goes on. This may look productive, but it’s not effective, and that’s a critical difference. There is a subtle deception at work here because AI tools can make people feel more productive and even powerful, even when they are not.
You can’t summarize your way to expertise, you can’t build relationships by outsourcing communication, and you can’t get experience by not doing things. Does this sound like a successful future workforce? Yet this is exactly the vision on offer, and this fuels the AI creativity delusion.
We have a distorted view of creativity, perceiving it as exercised by lone geniuses, but the reality is that most of the creativity needed for success in a business context is exercised in groups through relationships and collaboration. The same relationships and collaboration that aren’t developing in the AI-powered coworker scenario. When people start treating their coworkers like apps, nothing good happens.
The reality is that most of the creativity needed for success in a business context is exercised in groups through relationships and collaboration.
This should all make sense. When has anyone ever admired someone’s creativity when they didn’t actually know anything? It even seems silly to say out loud. To break the rules, you must know where the rules are and what hard constraints exist, and that requires awareness and expertise.
Knowing a tool doesn’t make you an expert; it makes you a tool. But let’s dig a bit deeper with a specific example.
We are told that meeting recording and transcription services free us from taking notes, allowing us to focus on what’s happening. However, this couldn’t be further from the truth. People pay LESS attention when they think a meeting is recorded, especially when they think it’s going to be summarized and bulleted for them. The reality is, taking notes IS paying attention.
When the human brain summarizes, it stores and reinforces. In a learning scenario, when people began taking notes on a computer, they could record everything the teacher said. But this isn’t the benefit people thought it was. It seems that the act of taking notes with pen and paper forces us to distill what’s being said down to its essence, which is much better for learning and retention. I know it’s hard to believe, but pen and paper are technology too.
Not only that, but viewed from a social context, there is a courtesy and respect conveyed when people see you taking notes, rather than grinning like a dipshit or performing background tasks while they are talking. But, oh no. My AI has got this covered.
So you’re going to have a hard time convincing me that people who use LLMs to summarize and respond to their emails and text messages, who send bots to attend meetings on their behalf, and who have no domain experience and no developed workplace social skills are going to have a leg up on the competition.
What’s Old Is New Again
Some things never change. I’ve been on this tools-impacting-humans beat for quite some time. Back in 2010, I gave a series of talks at security conferences titled “Your Tools Are Killing You.” The premise was that people were completely reliant on their tools and couldn’t perform the task otherwise. If the tool was blind to an issue, the human was likewise blind to it. People newer to cybersecurity learned the tool, not the task. These conditions ultimately reduced people’s effectiveness at their jobs. Sound familiar? However, the generalized nature of generative AI makes this far worse.
Mediocrity Is The Future Of Work
In September, Harvard Business Review published an article on how AI-generated “workslop” was destroying productivity. In essence, workers using AI tools were passing low-quality content to co-workers, which required them to do more work. This is a sort of productivity shell game where a speedup in one area causes a slowdown in another, pretty far from AGI if you ask me.
When everything is slop, everything is mediocre. We’ve allowed companies to reframe mediocrity as acceptable. We don’t hire employees to give the bare minimum. Nobody responds in a job interview with, “If you hire me, I promise to do just enough.” This is hardly a selling point. When you hire someone to paint your house, you don’t think a mediocre job is acceptable, but if you receive a mediocre report, somehow it’s fine as long as it’s labeled “AI-powered.” All the mistakes and inaccuracies are just how things are now.
AIs don’t care about quality; that’s a human’s job. Yet quality is on a steep decline. Some are making low-quality work product acceptable because AI is in the loop. But what happens when people forced to use AI tools in the workplace stop caring about the quality of the work they produce? Nothing good.
Encouraged or forced to use AI for everything, humans will be nothing more than meat-sack automatons providing the embodiment for AI. Most jobs don’t consist of a single task, which is why AIs still need humans in the loop for now and the near future. This vision is hardly motivating or inspirational.
Conclusion
We’ve built our AI temple atop fissures, allowing escaping gases to induce hallucinations, fooling us into thinking we are predicting the future. We remain blinded to the real issues confronting us and their negative impacts. We cling to the AI creativity delusion, which, ironically, prevents us from using AI tools to their full potential.
Creativity doesn’t come without experience, and most of the creativity needed for success in a business context isn’t the work of lone geniuses; it comes through collaboration. If we don’t identify and correct these issues soon, we are all but dooming younger generations. We are currently in the f—k around stage, but the find out stage is on the horizon.
In the current landscape of technology hype, technological immodesty underpins everything. Hype trumps everything else, meaning systems can’t be just good at one thing. They need to be good at everything. Systems can’t just be capable. They need to be superintelligent. Once this happens, imagine all of the amazing things they’ll do! We don’t have an intelligence explosion, but we do have an immodesty explosion.
It’s the lack of modesty that compels speculation on topics such as artificial superintelligence before we’ve even achieved AGI. But speculating on artificial superintelligence makes you an ASS, an Artificial Super Speculator. Speaking of asses, people like Ray Kurzweil cloak themselves in immodesty and disregard for reality when they talk about computronium, nanotechnology, or wanting to convert the entire universe into a giant supercomputer. But immodesty has consequences beyond people making asses of themselves.
Technological Immodesty
Technological modesty was introduced in Paul Goodman’s 1969 article Can Technology Be Humane?, in which Goodman describes it as follows:
Currently, perhaps the chief moral criterion of a philosophic technology is modesty, having a sense of the whole and not obtruding more than a particular function warrants.
Okay, that’s incredibly academic. In a more common vernacular, what he’s saying is, don’t claim shit can do things it can’t. This is best illustrated with an example, this one from Neil Postman’s 1992 book Technopoly.
Norbert Wiener remarked that, if digital computers had been in common use before the atomic bomb was invented, people would have said that the bomb could not have been invented without computers.
I hope this example prompts some reflection. We can’t fathom how ancient humans were able to accomplish things without modern technology. We look at the stones of Puma Punku or the pyramids at Giza and assign a higher likelihood to aliens than to human ingenuity. No offense to Giorgio Tsoukalos. We struggle to imagine a time before our current technologies existed, which is why we often can’t view the past without peering through the eyepiece of the present.
Immodesty isn’t a modern construction, but we’ve supercharged it, adding nitrous oxide and flames shooting out the tailpipes. This is made easier with the arrival of more advanced technology that many don’t understand. And when tech leaders are in public, they can’t help but flaunt it. Here’s a recent example.
So, robots won’t merely eliminate poverty, they’ll make everyone wealthy? Exactly how is that supposed to work? It’s nonsensical performance art, something I’ve written about before. In short, being modest or even realistic, for that matter, doesn’t pay the bills. So, all aboard the hype train.
The number of times I’ve seen and heard the technology and magic quote from Arthur C. Clarke is mind-boggling. People wield this phrase like a weapon, as though it’s evidence for some wildly speculative technology. As a refresher, or for anyone who’s never heard it, here is the quote: “Any sufficiently advanced technology is indistinguishable from magic.” When people quote this phrase, they typically mean its inversion.
When pointing out this inversion, Colin Fraser summed up what people actually mean when they utter Clarke’s phrase: ”Anything indistinguishable from magic can be achieved with sufficiently advanced technology.” This simple inversion is what many of us have been pushing back against for the past few years. This is the pinnacle of technological immodesty, the golden cap atop the polished limestone pyramid.
What’s The Problem?
On the surface, this seems irritating but not problematic. It just seems like the hype boys be hypin’. This is true in some cases. However, there are real-world impacts. Most importantly, it means we won’t address today’s problems, since magic fairy dust is coming to solve our woes. You can see this play out when you hear someone like Eric Schmidt saying we should go “all in” on data centers because we aren’t going to meet our climate goals anyway. AI is just going to “figure it out.” Oh yes, more data centers, please!!! Notice the hype in his referring to AI as “alien intelligence”?
The entire gamut runs from the benign to the absurd and everything in between. For example, when the technology to eradicate mosquitoes surfaces, people in the e/acc community are all for it.
That’s right. This dunce cap doesn’t understand the ethical concerns of wiping out an entire species. Even the discussion on the post is stupid. The ethical concerns aren’t about the “suffering” of the mosquitoes. The problem centers on the consequences of eliminating an entire species from the planet. They view an entire species as an affliction like polio or ALS, which makes the thought experiment of wiping them out a simple binary. You know, the little things. I loathe the e/acc community for making me defend mosquitoes.
In another example, Elon Musk wants to block out the sun with satellites. Yup, I’m sure nothing bad would come of that. I believe it was Dienekes who, when informed that satellites would block out the sun, responded, “Then we’ll TikTok in the shade!”
Dario Amodei claims technology (his technology) will cure cancer and double our lifespan in just a few years. At some point, technology will allow us to eradicate cancer and possibly double our lifespan. Side note: Doubling our lifespan isn’t exactly the benefit it seems to be if we can’t address cognitive decline. Statements like these exploit our intuitions about technological capabilities. By adding the word “soon,” he creates an air of believability, hyping his own technology in hopes of attracting further investment. Let’s not forget that none of his predictions have come true. This is like spraying a pile of manure with perfume and claiming that it never smelled.
Not to be outdone in speculative bullshit, Ray Kurzweil wants to pave over the entire universe, converting its atoms into computronium to build even larger supercomputers. Cool huh? This is nothing more than speculative tech bro porn. This is a lack of modesty on a cosmic scale.
Even if this were possible, what gives us the right to wipe out entire planets? You know all of those sci-fi superhero movies where the heroes of Earth need to defend against the massive dark force threatening to destroy us? Kurzweil wants us to create that dark force to nom nom entire planets and belch out compute. Sorry, you’re going to die, folks, but think of all the compute we’ll have!
There are countless examples like these that go far beyond mere product overhyping. So, you might wonder why people say shit like this. Well, it’s for a few reasons. Either their paycheck depends on it, they enjoy a public performance, or they want to sound smarter than everyone else. Take your pick. There’s an old saying: never trust a prediction when a person’s paycheck depends on it being correct. Sage advice.
One of the biggest problems that arises from a lack of technological modesty is the devaluation of both humans and nature. What is the worth of a single human or even an entire planet in the face of an intergalactic nom nom machine? Even on a smaller scale, we see people who want to replace other people with technology. AI coworkers, AI friends, AI art, and the list goes on and on. In another inversion, instead of us using technology, technology uses us.
We allow people wielding tech like a magic wand to trivialize and devalue everything that makes humanity unique. And for what? The benefits are supposedly implied, but the details are never specified.
We Are Bad At Imagining The Future
Take a moment to reflect on the fact that most of the AI predictions you’ve heard over the past few years have not only been wrong, but completely absurd. Nearly all of them. Absurd predictions fuel hype, which the press picks up, which in turn fuels more absurd predictions. We don’t reward people for being right; we reward them for being bold.
Humans are also bad at imagining the future. By most accounts, we should have flying cars and hoverboards by now. Meanwhile, the risks of the past seem quaint by today’s standards. In a complex world, it’s hard to imagine how situations will change and adapt, especially when technology solves problems.
Harry Harrison’s 1966 novel about overpopulation and scarce resources, Make Room! Make Room!, ends with a shocking statement intended to evoke fear.
CENSUS SAYS THE UNITED STATES HAD BIGGEST YEAR EVER END OF CENTURY
344 MILLION CITIZENS IN THESE GREAT UNITED STATES
HAPPY NEW CENTURY!
HAPPY NEW YEAR!
This statement may have evoked fear in 1966, but today it looks quaint. We are roughly there population-wise already, and people eat soybeans and lentils because they want to, not because they have to. We aren’t in the Make Room! Make Room! scenario because of advances in technology, agriculture, and trade.
Yeah, yeah. I know, spoiler alert. The book came out in 1966. If you haven’t read it yet, then I don’t know what to tell you.
Not only is overpopulation not a problem, but some are warning that population collapse is the problem, and we need to increase the population, something Elon Musk has taken it upon himself to address personally. The real world is a complex place, filled with unforeseen problems and solutions. It’s just not possible to speculate very far into the future, yet that’s what we are all being prodded to do.
My site focuses on technology risks, so this is what I highlight. The benefits of technology are far-reaching, and this is important to keep in mind as we identify potential risks. It’s because of technology that we can live longer and that the planet can support its population. Even technologies that carry risks can be evaluated in terms of trade-offs. The problems I call attention to are people wielding speculative technologies like a magic wand, casting imaginings far and wide.
When it comes to problems caused by technology, we are told not to fret because a solution is on the way. We are told that any problem created by technology can be solved by applying even more technology. But this technology layering distracts from the root cause of the underlying issues and, of course, creates even more issues. The original problem lies at the center of an onion, wrapped in layer upon layer of new problems. Or, in some cases, there was no problem at all.
In other cases, solving technology problems with more technology may very well work. However, there are many situations in which this doesn’t make sense. For example, we aren’t going to solve the loneliness epidemic with AI friends. Many people have a technology-shaped hole at the center of their being that won’t be filled with more technology.
Injecting Reality Into Technological Immodesty
Unfortunately, there isn’t much we, as individuals, can do on a large scale to address this, except roll our eyes. We must remain robust against the influence of speculative bullshit. Some of us need to stay grounded in reality and able to identify future risks without being thrashed about by the tornado of irrationality and hype. Hype creates fear, and fear can be weaponized.
On a smaller scale, in our conversations, when someone brings up something like nanobots, computronium, wiping out species, the need for humans to leave Earth for our survival, how AI or robots will end poverty and create a utopia, or any of the countless other examples of speculative bullshit, ask these people two simple questions. Why do you think that? How will that work? Make people explain their position. You’ll typically find they either have no clue or are talking about magic instead of technology.
When you inject a bit of reality into the conversation and someone claims, “But you aren’t Ray Kurzweil!”, simply respond with, “Thankfully!” Modesty is born from the recognition of reality, and reality has been in short supply lately and will be for quite some time.
Technology is all about tradeoffs. Ask people what they believe the tradeoffs are. It’s incredibly rare to get something for nothing, or even cheaply, for that matter. A vast majority of the time, people haven’t considered any tradeoffs or are blind to their existence.
Conclusion
Technology has improved and will continue to improve human lives. It’s because technology has had successes that it’s so easy for people to speculate about future technology. I’m hopeful that technological advances allow us to find cures for afflictions like cancer and Alzheimer’s and overall make people’s lives better. These would be major accomplishments. Also, indoor plumbing kinda rules.
I’m not making the case for a regression to some previous era. What I’m saying is, we shouldn’t let people talk about technology as though it’s magic. This environment not only causes massive harm but also allows charlatans and hucksters to run rampant. Unfortunately, we have a lot of work ahead.
Humans are incredibly creative, especially when it comes to wasting time. Throughout the ages, we’ve explored time-wasting with zeal, inventing new methods and distractions to pass the time and avoid contemplation. In the current age, that tool is generative AI. Generative AI has transformed not into an indispensable productivity tool, but into a babysitter. While AI companies push hard to convince enterprises that their tools are a great fit for business use cases, for many people, it has become a way to fill time exploring AI slop in its many forms. Welcome to the world of sloputainment.
Sloputainment: Next-Generation Time Wasting With AI Slop
No matter your position on generative AI’s usefulness for day-to-day productivity, it’s undeniable that generative AI truly excels at producing slop. The risks and impacts of slop outputs depend on the context in which they are generated. In business or safety-critical use cases, slop can have a significant negative impact, causing damage and endangering people. However, people messing around on the internet or playing with these tools on their own typically present a lower risk. There are exceptions, for example, using generative AI to bully or harass others, but largely, that’s not what most people use them for.
In a previous post, I commented that people secretly like slop and that it is here to stay. Slop is a way for people to entertain themselves, pass the time, and generate content. Something we all witness daily.
Ethan Mollick, one of AI’s biggest cheerleaders and proponents of its productivity, spends much of his time playing around with image and video generation. Or at least, this is what his social media feed suggests.
The AI music generation app Suno reported annual recurring revenue of $150 million. You don’t think all these people are getting gold records out of this, do you?
And, before someone mentions, “But, did you see the number 1 country album was AI,” let me stop you there. You should watch this video instead of listening to that garbage.
Not only does the song suck, but apparently, someone only had to spend $3k for this publicity. Pretty good investment.
Using AI to make music makes someone no more of a musician than a child wearing a firefighter’s helmet makes them a firefighter. Nobody is going to see a child with a firefighter’s helmet on and send them into a burning building, remarking that’s what they signed up for.
Using AI to make music makes someone no more of a musician than a child wearing a firefighter’s helmet makes them a firefighter.
People are even using AI for gender reveals in totally normal ways, such as smashing into the Twin Towers or taking down the Hindenburg. You know, perfectly normal, totally sane shit.
OpenAI is even making the shift towards porn. This isn’t the move a company makes when it’s on the cusp of AGI. It’s the move a company makes when it’s hemorrhaging money and desperate for any avenue to revenue whatsoever.
Sloputainment is popular because it checks critical boxes. It’s a form of entertainment for the person creating it, a form that requires no talent or effort, resulting in extremely low friction. It also creates content that the person can share on social media. Many people search daily for things to transform into content, fearing that a single day without posting on social media will make them irrelevant. The fact that sloputainment checks both the entertainment and content boxes all but guarantees it’s here to stay.
Sloputainment and the Illusion of Productivity
But directly using AI to create slop images and video is only one form of sloputainment. There is another form that masks itself as productivity or hustling. In a new trend, people boast publicly about how they’ve abandoned things like video games in favor of building software: largely inconsequential software projects for themselves, where the task is transformed into content, and a perspective based on isolated projects is transformed into an entire worldview.
Why is it bad to let AI creep into every moment and aspect of your life? I don’t know, why is that bad? With some people, there’s no delineation, no line they won’t let AI cross. Don’t get me wrong, I’m sure it’s fun for the people playing around with this stuff. I mean, building things with Legos can be fun, too. But nobody who builds something with Legos is confused into thinking they are building the next big thing, that something really could come out of it, that they may be onto the next multi-million-dollar app. This isn’t AI psychosis, but it is delusional.
There is a distinction that needs to be made here between the forms of sloputainment. The only way to get good at something is to practice it. For example, you can’t get good at building AI agents without building agents, which is why vibe coding doesn’t teach you much about real-world development. Even the person who coined the term vibe coding understands this. This activity is different from people just posting slop images to social media.
The issue is the undercurrent of hustle bro culture, which gives the impression that if you aren’t hustling, you’ll be left behind. In many ways, this public performance is meant to show that their personal activities are better than everyone else’s. It’s self-flagellation in the era of generative AI. People playing around with things to learn and understand is good. People replacing other activities in their personal lives with the illusion of productivity is bad.
In many ways, these activities resemble people playing around with their friends, similar to garage bands having fun jamming on nothing in particular, while secretly hoping in the background that they get their big break and a gold or platinum album. Only, instead of a gold or platinum album, this is their award. It’s fitting that in the generative AI age, you get awards for spending money instead of making it.
It’s fitting that in the generative AI age, you get awards for spending money instead of making it.
However, instead of representing people enjoying the output of their creative pursuits, it merely represents a system chewing up tokens. The dystopia says, “Nom nom.”
Sloputainment Is Content
AI is not only making everything entertainment, but as a byproduct, it’s creating content. This is the knockout punch for our modern information junk food diet. It’s the ice cream piled high on a cake, topped with potato chips, chocolate, caramel, and a pound of M&Ms for breakfast, lunch, and dinner.
Social media has rewired our brains to see everything and everyone as content. The whole world is our content oyster, and nothing escapes the content lens. It’s why you see things like idiots defacing coral reefs with graffiti. If only this were AI slop. This is why some people have no problem using AI to harass other people and organizations. As I mentioned back in 2020, harassment is the true legacy of technologies like deepfakes. This does seem to be bearing out in the age of generative AI.
I think what bothers me most about this hustle bro nonsense is that it gives the impression of devaluing so much of what is truly valuable. I mean, quiet contemplation looks like time wasting to morons in motion. Call it Newton’s Fourth Law of Motion: morons in motion stay in motion. It’s also why so many people get so many things so wrong. They are so busy hustling that they never stop to contemplate, you know, to get things right.
It takes me forever, by generative AI standards, to write an article like this. Writing is thinking, and in each of these articles, I’m working my way through the topics like everyone else while attempting to give them the level of contemplation they deserve. If this site were about content instead of contemplation, I’d be blasting out AI-slop articles with clickbait headlines, trying to game SEO. You know, like over 99% of the internet. Instead, I’m content to labor away in obscurity. If writing is thinking, then generating with AI is the lack of thinking. This gets lost in the tidal wave of content slop.
If writing is thinking, then generating with AI is the lack of thinking.
Sloputainment is Entertainment
Somewhere along the way, we conned ourselves into thinking everything needs to be entertainment. Things now have to be converted into entertainment to be valuable. Even something as mundane as our own data can now be transformed into entertainment. I mean, Google created NotebookLM, allowing the generation of a podcast from our data. Because learning from reading, analyzing, and engaging with ideas is boring, and only losers would learn that way. We now have to be entertained to learn.
There’s a problem, though. When something is viewed as entertainment, it appears to carry more truthful weight. To use modern vernacular, you could even say that data-as-entertainment emits truth vibes. Where we may question an AI overview or summary, we are less likely to question the same data in the form of a podcast that sounds like humans or a video presentation with a human voice, yet it’s the same data with the same issues. Especially since much of this data doesn’t conflict with our biases; otherwise, we wouldn’t consider it entertainment.
Where we may question an AI overview or summary, we are less likely to question the same data in the form of a podcast that sounds like humans or a video presentation with a human voice.
Take Graham Hancock’s Ancient Apocalypse docuseries on Netflix. Presented on a platform like Netflix, stunning locations, cinematic shots, but all 100% pure, unadulterated bullshit. Yet presenting the content this way lends it weight and credibility. Of course, it doesn’t help that the docuseries doesn’t feature any actual experts either.
Graham Hancock is a fraud with no expertise and a peddler of bullshit. Yet, he was given a platform to spread his nonsense to a wider audience. AI-as-entertainment platforms risk doing the same for every type of content and data. We risk embedding falsehoods and misperceptions deep in our brains due to formatting issues.
We risk embedding falsehoods and misperceptions deep in our brains due to formatting issues.
Text presented in paragraph form with easily clickable links is a much better, easier way to verify content than an audio podcast you listen to in the car or while doing something else. The same can be said of video presentations or cartoon talking heads. But paragraphs have higher friction than podcasts.
Admittedly, NotebookLM is neat technology, but the disconnect lies in failing to distinguish between a technology being cool and the value it truly provides, and in a far greater disconnect about what the technology does to us. You know, the tradeoffs. So much of our current moment consists of waving hands, directing our attention to how cool a technology is without consideration of use cases or impacts.
I’m sure this is just a continuation of a trend that Neil Postman identified as starting with television. But AI supercharges it. Instead of a podcast, next up is video. As a matter of fact, while I was writing this post, Google updated NotebookLM to include generative video overviews, taking data entertainment to the next level. Using video, we can learn about someone like Neil Postman not by engaging with his work but through cartoon summaries that may or may not capture the important aspects. An approach Postman would detest, but no doubt see coming. There is a good chance these summaries miss important details as they focus on what’s most interesting, shocking, or exciting. We are about to “true crime” everything.
I should note that the impacts aren’t the same for all types of information. For some things, simple summaries are fine. However, the difficulty we face is understanding where the true delineation point lies, and, of course, the tendency to overestimate our knowledge based on trivial information.
There was an early attempt to use AI to cut together movie trailers. The AI identified the most “exciting” aspects of the movie (explosions, car chases, and the like), but the result didn’t connect with people or follow a story. It was just a bunch of cutscenes with no through line. Now, everything is cutscenes, fueling our entertainment addiction. There’s a lot of history that’s boring but important, and we risk paving over history as we reengineer it into entertainment.
We risk paving over history as we reengineer it into entertainment.
If you are lucky, you have a memory of an entertaining school teacher who made classes more tolerable, and you may have even learned more because of it. We also have memories of learning things from documentaries. These memories may lead us to think that entertainment is the best way to engage with a topic and learn. However, there’s a distinction between entertaining and entertainment.
Entertaining is a method that still requires friction to get to a goal. It’s just that the friction becomes more tolerable due to heightened interest and engagement. For example, the entertaining school teacher still required the same reading and homework assignments. Entertainment is content that promises a complete reduction of friction. No need to read a book or engage with the content, watch this AI-generated short instead. Remember, knowledge and understanding aren’t generated from summaries or bullet points.
You may be thinking that there’s not a lot of harm in these activities, and for the most part, this is correct. How each person wastes their time is up to them. Fair enough, to each their own. However, consider that when we use AI to harass other people, we cause them harm. When we use AI to create entertainment masquerading as something else, we harm ourselves. This is what I take issue with: the tradeoffs that nobody considers and the false perception that hustling this way is the only way to make progress in the modern age.
There is no doubt that people use generative AI daily for productivity tasks. Great. And if people truly are using these activities to learn, fair enough. However, things like vibe coding and AI summaries don’t teach valuable lessons. Quite often, the lessons come afterward, when you get owned or try to apply your newfound summarized knowledge.
Conclusion
Sloputainment leaves so many things undiscovered, about ourselves, others, and the world. With every prompt, it steals from us, taking our time, understanding, and even our sense of who we are. We get sucked into the content vortex spinning chaotically around a hollow center with no ability to center ourselves, and it takes effort to break free.
Wasting time isn’t a modern concept. However, we have supercharged it with AI. In On the Shortness of Life, Seneca explains that it isn’t that life is too short, but that we waste so much of the time we have. Seems some things haven’t changed since the first century AD. But we’ll close with some words from the great American philosopher, Sebastian Bach, who posed this concept in a series of questions:
Is it all just wasted time?
Can you look at yourself
When you think of what you left behind?
Is it all just wasted time?
Can you live with yourself
When you think of what you’ve left behind?
AI security is a hot topic in the world of cybersecurity. If you don’t believe me, a brief glance at LinkedIn uncovers that everyone is an AI security expert now. This is why we end up with overly complex and sometimes nonsensical recommendations on the topic. But in the bustling market of thought leadership and job updates, we seem to have lost the plot. In most cases, it’s not AI security at all, but something else.
Misnomer of AI Security: It’s Security From AI
I recently delivered the keynote at the c0c0n cybersecurity and hacking conference in India. It was truly an amazing experience. One of my takeaways was encouraging a shift in perspective on the term “AI Security,” highlighting how we often approach this topic from the wrong angle.
The term “AI Security” has become a misnomer in the age of generative AI. In most cases, we really mean securing the application or use case from the effects of adding AI. This makes sense because adding AI to a previously robust application makes it vulnerable.
In most cases, we really mean securing the application or use case from the effects of adding AI.
For most AI-powered applications, the AI component isn’t the end target, but a manipulation or entry point. This is especially true for things like agents. An attacker manipulates the AI component to achieve a goal, such as accessing sensitive data or triggering unintended outcomes. Consider this like social engineering a human as part of an attack. The human isn’t the end goal for the attacker. The goal is to get the human to act on the attacker’s behalf. Thinking this way transforms the AI feature into an actor in the environment rather than a traditional software component.
There are certainly exceptions, such as with products like ChatGPT, where guardrails prevent the return of certain types of content that an attacker may want to access. An attacker may seek to bypass these guardrails to return that content, making the model implementation itself the target. Alternatively, in another scenario, an attacker may want to poison the model to affect its outcomes or other applications that implement the poisoned model. Conditions like these exist, but are dwarfed in scale by the security from AI scenarios.
Once we start thinking this way, it makes a lot of sense. We shift to the mindset of protecting the application rather than focusing on the AI component.
AI Increases Attack Surface
Another thing to consider is that adding AI to an application increases the attack surface. This increase manifests in two ways. First, functionally, through the inclusion of the AI component itself. The AI component creates a manipulation and potential access point that an attacker can use to gain further access or create downstream negative impacts.
Second, current trendy AI approaches encourage poor security practices. Consider the practice of combining data, such as integrating sensitive, non-sensitive, internal, and external data to create context for generative AI. This creates a new high-value target and is exactly the kind of practice decades of information security guidance tell us to avoid.
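To make this concrete, here is a minimal sketch of the data-mixing anti-pattern. Everything in it is hypothetical: the data sources and the call_model() helper are illustrative stand-ins, not any real API.

```python
# A sketch of the data-mixing anti-pattern; call_model() and the data
# sources are hypothetical stand-ins, not a real API.

def call_model(prompt: str) -> str:
    # Stand-in for a generative AI call.
    return f"(model answer based on {len(prompt)} chars of context)"

# Data at sensitivity levels that decades of guidance say to keep apart:
salary_records = "internal, sensitive: contents of salaries.csv"
onboarding_doc = "internal, non-sensitive: onboarding wiki page"
vendor_faq = "external, untrusted: scraped vendor FAQ page"

# All three are concatenated into one context for the model. The result
# is a single high-value target, and a path for instructions hidden in
# the untrusted source to reach the sensitive one.
context = "\n".join([salary_records, onboarding_doc, vendor_faq])
print(call_model(f"Answer using this context:\n{context}\n\nQ: ..."))
```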
Also, we have a trend where developers take user input, request generated code at runtime, and slap the result into something like a Python exec(). This not only creates conditions ripe for remote code execution but also means developers don’t know what code will execute at runtime.
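And here is a minimal, self-contained sketch of that exec() trend, with a fake_llm() function standing in for a real model call; the names are hypothetical, and the point is the shape of the flaw, not any particular product.

```python
# Anti-pattern sketch: model-generated code goes straight into exec().
# fake_llm() is a hypothetical stand-in for a real model call; in a real
# application, its output is shaped by untrusted user input.

def fake_llm(user_request: str) -> str:
    # A real model would return arbitrary Python here. If an attacker
    # can steer the prompt, they can steer the code that comes back.
    return "import os; print(os.getcwd())"

def handle_request(user_request: str) -> None:
    code = fake_llm(user_request)
    # The developer has no idea what will run here until runtime:
    # remote code execution waiting to happen.
    exec(code)

handle_request("what directory am I in?")
```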
Vulnerabilities caused by applying AI to applications don’t care whether we are attackers or defenders; they affect applications equally, from the AI-powered travel agent to our fancy new AI-powered SOC. Diamonds are forever, and AI vulns are for everyone.
It’s Simpler Than It Seems
Here’s a secret. In the real world, most AI security is just application and product security. AI models and functionality do nothing on their own. They must be put in an application and utilized in a use case, where risks materialize. It’s not like AI came along and suddenly made things like access control and isolation irrelevant. Instead, controls like these became more important than ever, providing critical control over unintended consequences. Oddly enough, we seem to relearn this lesson with every new emerging technology.
In the real world, most AI security is just application and product security.
The downside is that without application and product security programs in place, organizations will accelerate vulnerabilities into production. Not only will they increase their vulnerabilities, but they’ll be less able to address them properly when vulnerabilities are identified. Trust me, this isn’t the increase in velocity we’re looking for.
I’ve been disappointed by much of the AI security guidance, which seems to disregard things like risk and likelihood of attack in favor of overly complex steps and unrealistic requirements. We security professionals aren’t doing ourselves any favors with this stuff. We should be working to simplify, but instead, we are making things more complex.
It can seem counterintuitive to treat something a developer purposefully implements into an application as a threat, but that’s exactly what we need to do. When designing applications, we need to consider the AI components as potential malicious actors or, at the very least, error-prone actors. Thinking this way shifts the perspective for defending applications towards architectural controls and mitigations rather than relying on detecting and preventing specific attacks. So much focus right now is on detecting and preventing prompt injection, and it isn’t getting us anywhere; apps are still getting owned.
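As a minimal sketch of what treating the AI component as a malicious actor can look like in practice (the tool names and policy here are hypothetical, purely for illustration): instead of trying to detect every bad prompt, constrain what any model output is allowed to do.

```python
# Sketch: design as if the model's output were attacker-controlled.
# Tool names, arguments, and policy here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

def lookup_order(args: dict) -> str:
    # Pre-written, reviewed tool with explicit argument validation.
    order_id = str(args.get("order_id", ""))
    if not order_id.isdigit():
        raise ValueError("invalid order id")
    return f"status for order {order_id}"

# A fixed allowlist: the model selects among reviewed tools; it never
# supplies executable code or reaches anything off this list.
ALLOWLIST = {"lookup_order": lookup_order}

def execute(call: ToolCall) -> str:
    # Treat the proposed call as untrusted, no matter how it was produced.
    if call.name not in ALLOWLIST:
        raise PermissionError(f"tool {call.name!r} not permitted")
    # Least privilege: a sensitive tool would additionally check the
    # user's own authorization here, never the model's say-so.
    return ALLOWLIST[call.name](call.args)

# A manipulated model asking for an off-list action simply fails,
# regardless of whether any injection was detected.
print(execute(ToolCall("lookup_order", {"order_id": "42"})))
```

The design choice here is that safety comes from what the application refuses to execute, not from what a detector manages to catch.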
I’m not saying detection and prevention don’t play a role in the security strategy. I’m saying they shouldn’t be relied upon. We make different design choices when we assume our application can be compromised or can malfunction. There are also conversations about whether security vulnerabilities in AI applications are features or bugs, allowing them to persist in systems. While the battle rages on, applications remain vulnerable. We need to protect ourselves.
There is no silver bullet, and even doing the right things sometimes isn’t enough to avoid negative impacts. But if we want to deploy generative AI-based applications as securely as possible, then we must defend them as though they can be exploited. We can dance like nobody is watching, but people will discover our vulnerabilities. Defend accordingly.
The past couple of years have been fueled entirely by vibes, awash with nonsensical predictions and messianic claims that AI has come to deliver us from our tortured existence. Starting shortly after the launch of ChatGPT, internet prophets have claimed that we are merely six months away from major impacts and accompanying unemployment. GPT-5 was going to be AGI, all jobs would be lost, and there would be nothing for humans to do except sit around and post slop to social media. This nonsense litters the digital landscape, and instead of shaming the litterers, we migrate to a new spot with complete amnesia and let the littering continue.
Pushing back against the hype has been a lonely position for the past few years. Thankfully, it’s not so lonely anymore, as people build resilience to AI hype and bullshit. Still, the damage is already done in many cases, and hypesters continue to hype. It’s also not uncommon for people to be consumed by sunk costs or oblivious to simple solutions. So, the dumpster fire rodeo continues.
Security and Generative AI Excitement
Anyone in the security game for a while knows the old business-versus-security battle. When security risks conflict with a company’s revenue-generating (or about-to-be revenue-generating) products, security will almost always lose. Companies will deploy products even with existing security issues if they feel the benefits (like profits) outweigh the risks. Fair enough, this is known to us, but there’s something new now.
What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve. This is new because it involves all risk with potentially no reward. These companies are hoping that users define a use case for them, creating solutions in search of problems.
What we’ve learned over the past couple of years is that companies will often plunge vulnerable and error-prone software deep into systems without even having a clear use case or a specific problem to solve.
I’m not referring to the usage of tools like ChatGPT, Claude, or any of the countless other chatbot services here. What I’m referring to is the deep integration of these tools into critical components of the operating system, web browser, or cloud environments. I’m thinking of tools like Microsoft’s Recall, OpenAI’s Operator, Claude Computer Use, Perplexity’s Comet browser, and a host of other similar tools. Of course, this also extends to critical components in software that companies develop and deploy.
At this point, you may be wondering why companies choose to expose themselves and their users to so much risk. The answer is quite simple: because they can. Ultimately, these tools are burnouts performed for investors. They don’t need to solve any specific problem; their deep integration exists to demonstrate “progress.”
I’ve written before about how, when the capabilities of a technology can’t go wide, they go deep. Well, this is about as deep as it gets. These tools expose an unprecedented attack surface and often violate the security models designed to keep systems and users safe. I know what you are thinking: what do you mean these tools don’t have a use case? You can use them for… and also ah…
The Vacation Agent???
The killer use case that’s been proposed for these systems and parroted over and over is the vacation agent. A use case that could only be devised by an alien from a faraway planet who doesn’t understand the concept of what a vacation is. As the concept goes, these agents will learn about you from your activity and preferences. When it’s time to take a vacation, the agent will automatically find locations you might like, activities you may enjoy, suitable transportation, and appropriate days, and shop for the best deals. Based on this information, it automatically books this vacation for you. Who wouldn’t want that? Well, other than absolutely everyone.
What this alien species misses is the obvious fact that researching locations and activities is part of the fun of a vacation! Vacations are a precious resource for most people, and planning activities is part of the anticipation. Even the non-vacation aspect of searching for the cheapest flight is far from tedious, thanks to the numerous online tools dedicated to this task. Most people don’t want to one-shot a vacation when doing so removes the fun and drastically increases the potential for issues.
But, I Needed NFTs Too
Despite this lack of obvious use cases, people continue to tell me that I need these deeply integrated tools connected to all my stuff and that they are essential to my future. Well, people also told me I needed NFTs, too. I was told NFTs were the future of art, and I’d better get on board or be left behind, living in the past, enjoying physical art like a loser. But NFTs were never about art, or even value. They were a form of in-group signaling. When I asked NFT collectors what value they got from them, they clearly stated it wasn’t about art. They’d tell me how they used their NFT ownership as an invitation to private parties at conferences and such. So, fair enough, there was some utility there.
In the end, NFTs are safer than AI because they don’t really do anything other than make us look stupid. Generative AI deployed deeply throughout our systems can expose us to far more than ridicule, opening us up to attack, severe privacy violations, and a host of other compromises.
In a way, this public expression of “look at me, I use AI for everything” has become a new form of in-group signaling, but I don’t think it’s the flex they think it is. These people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.
These people believe this is an expression of preparation for the future, but it could very well be the opposite. The increase in cognitive offloading and the manufactured dependence is precisely what makes them vulnerable to the future.
Advice Over Reality
Social media is awash with countless people who continue to dispense advice, telling others that if they don’t deploy wonky, error-prone, and highly manipulable software deeply throughout their businesses, they are going to be left behind. Strange advice, since the reality is that most organizations aren’t reaping benefits from generative AI.
Here’s something to consider. Many of the people doling out this advice haven’t actually done the thing they are talking about, nor do they have any particular insight into the trend or the problems to be solved. But it doesn’t end with business advice. This trend also extends to AI standards and recommendations, which are often developed at least in part by individuals with little or no experience in the topic. This results in overcomplicated guidance and recommendations that aren’t applicable in the real world.
A majority of generative AI projects fail for several reasons. Failing to select an appropriate use case, overlooking complexity and edge cases, disregarding costs, ignoring manipulation risks, and holding unrealistic expectations are key drivers of project failure, among a host of other issues. Far too many organizations expect generative AI to act like AGI and allow them to shed human resources, but this isn’t a reality today.
LLMs have their use cases, and these use cases increase if the cost of failure is low. So, the lower the risk, the larger the number of use cases. Pretty logical. Like most technology, the value from generative AI comes from selective use, not blanket use. Not every problem is best solved non-deterministically.
Another thing I find surprising is that the vast majority of generative AI projects are never benchmarked against other approaches, approaches that may be better suited to the task, more explainable, and far more performant. If I had to guess, I’d say the number of projects that are benchmarked is close to zero.
Generative AI and The Dumpster Fire Rodeo
Despite the shift in attitude toward generative AI and the obvious evidence of its limitations, we still have instances of companies forcing their employees to use generative AI due to a preconceived notion of a productivity explosion. Once again, ChatGPT isn’t AGI. This “do everything with generative AI” approach extends beyond regular users to developers, and it is here that the negative impacts increase.
I’ve referred to the current push to make every application generative AI-powered as the Dumpster Fire Rodeo. Companies are rapidly churning out vulnerable AI-powered applications. Once-rare vulnerabilities, such as remote code execution, are increasingly common. Applications can regularly be talked into taking actions the developer didn’t intend, and users can manipulate their way into elevated privileges and sensitive data they shouldn’t be able to touch. Hence, the dumpster fire analogy. Of course, this also extends to the fact that application performance can worsen when generative AI is added.
The generalized nature of generative AI means that the system making critical decisions inside your application is the same one that gives you recipes in the style of Shakespeare. There is a nearly unlimited number of undocumented protocols that an attacker can use to manipulate applications implementing generative AI, and these are often not taken into consideration when building and deploying the application. The dumpster fire continues. Yippee Ki-Yay.
Conclusion
Despite the obvious downsides, the dumpster fire rodeo is far from over. There’s too much money riding on it. The reckless nature with which people deploy generative AI deep into systems continues. Rather than identifying an actual problem and applying generative AI to an appropriate use case, companies choose to marinate everything in it, hoping that a problem emerges. This is far from a winning strategy. Companies should be mindful of the risks and choose the right use cases to ensure success.
Woven through the fabric of hustle-bro culture, threaded with the drivel of influencers, lies one of the biggest cons of our current age: the false perception that everything we do has to be for financial gain or public attention. With everything in life revolving around social currency or actual currency, removing friction lets us reach the value faster. But don’t fret. The slop dealer is here with a plan to deliver us salvation, telling us that ideas are what’s important and everything else is pointless friction to be optimized away so we can reach our full potential. Like so many things in our current moment, if only this were true.
Despite the decline in excitement for AI and the potential resulting market corrections, unfortunately, slop is here to stay. Although people outwardly complain about it, they are secretly glad it’s here. Being unique, thoughtful, and creative is hard. Slop allows people to swaddle themselves in a false comfort devoid of any real creativity. So, damn the torpedoes, full slop ahead.
Slop, Enshittification, and Brain Rot
Slop, enshittification, and brain rot are terms burned into our current lexicon. Each has a different definition: one refers to outputs, one to platforms, and one to what it does to us. When I use the generalized term slop here, I mean a mixture of all three, a sort of thick, rancid blend reminiscent of manure and White Zinfandel, because the combined term better captures the content and its overall impact.
The Slop Dealer
The slop dealer tells us everything is a hustle, and we need to get on board to reduce friction everywhere we can to accelerate value or be left in the dust by others using AI. They don’t talk of reasonable AI usage or prescriptions for specific tasks; it’s all or nothing. We need to surrender to the higher power. The slop dealer embodies everything that tech bro culture stands for. It’s the current equivalent of a get-rich-quick scheme, only instead of taking our money, they are stealing our attention and our satisfaction. Although sometimes they take our money too.
The slop dealer swindles us by telling us what we want to hear, that hard things are a thing of the past, and all we need is an idea. After all, everybody has ideas. These are the influencers, wanna-be influencers, and other AI useful idiots vomiting nonsense on social media. They aren’t peddling secret knowledge; they are peddling bullshit.
This pandering is done so we’ll follow them, subscribe to their newsletters, or buy their nonsense. But one of the biggest lies of all is the false impression that the value of creative pursuits lies in the end result.
Most of these people have no shame and not only believe in Dead Internet Theory, but also actively work to make it a reality. If you are wondering why people en masse find tech bro culture abhorrent, look no further than this stunning piece of work.
To quote this guy directly, “How I personally feel? I have no idea. The internet in my mind is already dead. I am the problem, right?” I get the impression this isn’t the first time he’s realized he’s the problem. Unfortunately, acknowledgement of this isn’t enough to change behavior.
The Slop Architect
The slop architect works not in traditional mediums but in ideas. To the slop architect, execution, skills, and experience are secondary, bowing at the pedestal of ideas. The fact is, most ideas are ill-thought-out, half-baked, or just plain fucking stupid. The slop architect doesn’t care because they don’t carry ideas to term; they birth them instantly, shoving them out into the world to fend for themselves as they move on to something else. I mean, the vape Tamagotchi was someone’s idea, too. Yes, please! Let’s accelerate these!
Ideas aren’t unique, precious resources, but common, run-of-the-mill, everyday occurrences for everyone on the planet. The slop architect amplifies the fallacy that ideas are sacred and insists that if more ideas were executed, the world would be a better place. If only we had more apps, more books, more music, and the list goes on. This connects with people because everyone has ideas.
What most people who have thought about it for more than two seconds realize is that we don’t get to the value of an idea purely by having it. Ideas in isolation are senseless ramblings of the brain. Ideas forged and refined in the fire of execution, experience, and reflection are invaluable and fulfilling. In the slop architecture, our ideas are never challenged in ways that lead us to new discoveries and paths; they are chucked out into the world and quickly discarded, like forgotten attempts at memes that nobody finds funny.
The AI Slop Architecture
The slop architect’s vision is implemented with the slop architecture, which presents itself as a process or application. The slop architecture is pitched as the way forward, the next-generation architecture fueling the future of humanity’s pursuits. But a simple scratch of the surface paint is all it takes to expose the entire thing as an empty shell.
People pitching these types of things reveal that they don’t understand creativity, and they certainly don’t understand where the value in a process lies. Everything is a hustle for the sake of hustling. And this person is hardly the only one.
Back in 2023, I jokingly created my own version of the slop architecture, which I referred to as IPIP, long before the AI influencers made it a reality.
This article was complete with a description of what would come to be known as vibe coding. “The hype has led to a new form of software development that appears to be more like casting a spell than developing software.”
Taking the slop architecture to heart, it’s not hard to find implementations already running. Books, slides, music, applications, nothing is off limits. Everything is fair game in the slop era.
Ah, Magic bookifier. Yeah, let me get on that. Any time someone uses “magic” in reference to AI, it’s bullshit.
People also fantasize about what advanced AI is or will be able to do. Take this use case for AGI, for example.
It reminds me of the Luke Skywalker meme where he’s handed the most powerful weapon in the galaxy and immediately points it at his face. This is informative for a couple of reasons. Movies can’t be exactly like the books, for reasons other than length; they are different media with different tools. But look at the response: human work isn’t worth protecting in the future. This is a far more common perspective than many think.
Even apps. It’s slop from all angles. So, if these tools already exist, why aren’t we all kicking back, receiving our profits? Maybe there’s something more to this than having an idea.
But we can’t just have a couple of people successfully making apps. It needs to be bigger! We are now told to await the arrival of the first billion-dollar solopreneur. Hark! The herald angels sing. Glory to the slop-born king! We shouldn’t get our hopes up, though. Setting aside how unlikely this is, people also win the lottery, so unless we get a mass of billion-dollar solopreneurs, a single one isn’t proof of much. But whenever people have strongly held beliefs, they will point to exceptions as the rule.
It’s far more common for people to talk about a single person making a million-dollar app, and that we all can make them now. Even if this were true, it’s not like billions of people are going to make million-dollar apps or profit from a trillion new books. No degree in economics is necessary to see that the numbers don’t work. Besides, if billions of people can and will do something, then the whole enterprise becomes devalued.
The slop architecture deprives us of so much, sucking the soul out of activities until only the shriveled husk remains. There’s no learning with the slop architecture. No growth. No reflection. No satisfaction. It even robs us of a sense of style, something so foundational to the satisfaction of human artistic pursuits. But all things require sacrifice on the pyre of optimization. In the end, the slop architecture doesn’t democratize. It devalues, degrades, and destroys.
In the end, the slop architecture doesn’t democratize. It devalues, degrades, and destroys.
The Friction Is The Point
I’m going to let my friends in tech in on a secret, which isn’t a secret at all: the friction of an activity is directly related to the value you receive from it. The mistake is treating an activity’s friction like an application’s load time or a clunky user interface, something to be engineered away. I’ve written previously about how the next generation could be known as The Slop Generation and how we continue to devalue art. But the removal of friction also creates harmful follow-on effects.
Imagine telling Alex Honnold, “Dude, you don’t need to free solo El Capitan. We have a helicopter that can drop you off at the top.” Deep experiences don’t result from things that provide instant gratification or have little friction. Nobody finds meaning in a prompt or the resulting generation.
Deep experiences don’t result from things that provide instant gratification or have little friction.
People may see this example as silly because climbing a mountain without ropes is obviously different from something like writing a song. Except it’s not when viewed through the lens of experience. Alex Honnold doesn’t free solo mountains to get to the top or because ropes and safety equipment are too expensive; he does it because he knows there is value in the friction of his experience. He’s both challenging himself and learning about himself at the same time. He’s having an actual experience, which is hard to describe to people who have never had one. This experience enriches the conclusion of the activity, the accomplishment, which coincidentally happens to be getting to the top. However, when pursuits are framed in terms of the end results, it appears that reaching the top is the goal, and the removal of friction is logical.
Most people will never free solo a mountain, compete in the Olympics, or achieve any of the other remarkable feats that athletes at the top of their game accomplish. But that doesn’t mean we can’t have similarly fulfilling experiences, and we get them through exploration and conquering friction. When you are operating at the top of your game, you realize you aren’t competing with others, but with yourself.
An artist puts a piece of themselves into every work they create. AI removes that piece, making the generated output purely an artifact of running a tool.
Slop Is Here To Stay
Immediately after Ozzy Osbourne died, Oz Slop invaded social media. The Prince of Darkness himself fell victim to people’s boredom and lack of creativity. People chose to pay tribute to him, not through stories and anecdotes, but by slopping him into manufactured content. I can’t think of a more insulting way to pay tribute to an artist, but this is our future. Slop instead of something to say. Slop instead of stories and memories. Slop instead of emotion. Slop as a coping mechanism. May the slop be with you.
A disheartening thought is that no matter what happens to the market for generative AI, the slop will remain. People post this slop not because they enjoy it, but purely because it gives them something to post. Slop content is a stand-in for having something to say. It’s easy to generate and requires little thought, the perfect complement to today’s reactionary and performative social media environments.
In a way, this trend could create a new line of demarcation, where we start referring to things as “Before Slop” and “After Slop” to identify the creative expressions that preceded and followed the arrival of AI-generated content.
Conclusion
In the end, the slop architecture doesn’t generate experiences. Nobody is going to be on their deathbed mulling over their favorite prompts or sit down with friends and reminisce about the time they poked at a generative AI system for hours trying to get it to generate a particular image. The slop architecture doesn’t create a legacy or generate stories worth remembering or worth sharing, just pieces of forgotten garbage littering the digital landscape.
Although AI has taken a hit in the past few weeks, the vibes are still strong, infecting every part of our lives. Vibe coding, vibe analytics, and even vibe thinking, because, well, nothing says “old” like having thoughts grounded in reality. However, an interesting trend is emerging in software development, one that could have far-reaching implications for the future of software: a type of code roulette where developers don’t know what code will execute at runtime. Then again, what’s life without a little runtime suspense?
Development and Degraded Performance
The world runs on software, so any trend that degrades software quality or increases security issues has an outsized impact on the world around us. We’ve all witnessed this, whether it’s the video conferencing app that periodically crashes after an update or a UI refresh that makes an application more difficult to use.
Traditionally, developers write code by hand, copy code snippets, use frameworks, skeleton code, libraries, and many other methods to create software. Developers may even use generative AI tools to autocomplete code snippets or generate whole programs. This code is then packaged up and hosted for users. The code stays the same until updates or patches are applied.
But in this new paradigm, code and potentially logic are constantly changing inside the running application. This is because developers are outsourcing functional components of their applications to LLMs, a trend I predicted back in 2023 in The Brave New World of Degraded Performance. In that post, I covered the impacts of this trend, highlighting the degraded performance that results from swapping known, reliable methods for unknown, non-deterministic ones. This paradigm leads to the enshittification of applications and platforms.
In a simplified context, instead of developers writing out a complete function using code, they’d bundle up variables and ask an LLM to do it. For simplicity’s sake, imagine a function that determines whether a student passes or fails based on a few values.
def pass_fail(grade, project, class_time):
    # Deterministic logic: the same inputs always produce the same output
    if grade >= 70 and project == "completed" and class_time >= 50:
        return "Pass"
    else:
        return "Fail"
If a developer decided to outsource this functionality to an LLM inside their application, it may look something like this.
prompt_pass = """You are standing in for a teacher, determining whether a student passes or fails a class.
You will use several values to determine whether the student passes or fails:
The grade the student received: {grade}
Whether they completed the class project: {project}
The amount of class time the student attended (in minutes): {class_time}
The logic should follow these rules:
1. If the grade is above 70
2. If the project is completed
3. If the time in class is above 50
If these 3 conditions are met, the student passes. Otherwise, the student fails.
Based on this criterion, return a single word: "Pass" or "Fail". It's important to only return a single
word.
"""
prompt = prompt_pass.format(grade=grade, project=project, class_time=class_time)
response = client.models.generate_content(model="gemini-2.5-flash", contents=prompt)
print(response.text)
As you can see, one of these examples contains the logic for the function inside the application, and the other has the logic existing outside the application. The prompt is indeed visible inside the application, but the actual logic exists somewhere in the black box of LLM land.
The example using code has greater visibility, and it’s far more auditable since the logic can be examined, which makes it far easier to debug when issues arise. And, of course, it’s explainable. But the real problem lies in execution.
The written Python function gives you the same result for the same input data every single time, without fail. The natural language approach, not so much. With this non-deterministic approach, you are not guaranteed the same answer every time. Worse yet, when it’s used for critical decisions and functionality, the application takes on squishy, malleable characteristics, meaning users can potentially manipulate it like Play-Doh.
At first glance, this example appears silly, as writing out the logic in natural language seems more burdensome than the simple Python function. Not to mention slower and more expensive. But looks can be deceiving. People are increasingly opting for the natural language approach, particularly those with only minimal Python knowledge, and it feels familiar to anyone accustomed to interfaces like ChatGPT.
Execute and Pray
However, let’s look at another scenario. A developer wants to generate a scatter plot using the Plotly library. We have some data for the X and Y axes and use Plotly Express, a high-level interface for Plotly (as a developer might when plotting something this simple).
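A minimal sketch of this deterministic version might look like the following (the data values here are hypothetical stand-ins):

import plotly.express as px

# Hypothetical stand-in data for the scatter plot
xdata = [1, 2, 3, 4, 5]
ydata = [10, 14, 9, 17, 12]

# The plotting code is written out explicitly, so exactly this code runs at runtime
fig = px.scatter(x=xdata, y=ydata)
fig.show()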
This is a simplified example, but in this case, we can clearly see the code that generated the plot and be certain that this code will execute during the application’s runtime. There is control over the imports and other aspects of execution. It also makes it auditable and provable.
Now, what happens when a developer allows modification of their code at runtime? In the following example, instead of writing out the Plotly code to generate a scatter plot, the developer requests that code be generated from an LLM to create the graph, then executes the resulting code.
prompt_vis = """You are an amazing super awesome Python developer that excels at creating data visualizations using Plotly. Your task is to create a scatter plot using the following data:
Data for the x axis: {xdata}
Data for the y axis: {ydata}
Please write the Python code to generate this plot. Only return Python code and no explanations or
comments.
"""
prompt = prompt_vis.format(xdata=xdata, ydata=ydata)
response = client.models.generate_content(model="gemini-2.5-flash", contents=prompt)
exec(clean_response(response.text))
As you can see from the Plotly code in this example… Of course, you can’t see it, because the code doesn’t exist until it’s generated at runtime. If you are curious, the first run of this produced code along the following lines, after cleaning the response and making it appropriate for execution.
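This is a representative reconstruction rather than verbatim model output; the exact code varies from run to run:

# Representative sketch of the kind of code the LLM returns (not verbatim output)
import plotly.graph_objects as go

fig = go.Figure(data=go.Scatter(x=[1, 2, 3, 4, 5], y=[10, 14, 9, 17, 12], mode="markers"))
fig.show()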
The AI-generated code creates the same graph as the written-out code in the previous example, despite being different code. You may be wondering what the big deal is, since the result is the same. There are several reasons for concern, but primarily, allowing an LLM to generate code at runtime is not robust and leads to unexpected outcomes: non-functional code, incorrect code, and even vulnerable code, among others.
For a simple example like the one shown in this post, the chances of getting the same or very similar code back from the LLM are high, but not guaranteed. For the more complex cases developers actually want to use this approach for, the odds that the generated code changes from run to run only increase.
Additionally, I implemented a quick cleaning function called clean_response to remove non-Python elements, such as text and triple backticks, from the response. Even then, the LLM can introduce additional unexpected characters that end up breaking the cleaning function and making the application fail.
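For illustration, such a cleaning function might be as simple as stripping Markdown code fences (a hypothetical sketch, not the original implementation), which is exactly why it’s brittle:

import re

def clean_response(text: str) -> str:
    # Strip Markdown code fences (``` or ```python) that models often wrap code in
    text = text.strip()
    text = re.sub(r"^```(?:python)?\s*", "", text)
    text = re.sub(r"\s*```\s*$", "", text)
    return text.strip()

The list goes on and on, but a larger danger lurks in the background.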
Whose Code Is It Anyway?
If you are versed in security and familiar with Python, you may have noticed something in the LLM example: the use of the Python exec() function. The exec() and eval() functions in Python are fun because they directly execute their input. Fun as in, dangerous. For example, if an attacker can inject input into the application, they can affect what code gets executed, leading to a condition known as Remote Code Execution (RCE).
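To see how little it takes, here’s a deliberately simplified, hypothetical sketch of attacker input reaching exec():

# Deliberately simplified sketch: attacker-controlled text reaching exec()
user_input = "__import__('os').system('id')"  # value an attacker supplies
exec("result = " + user_input)  # the attacker's command runs on the host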
An RCE is a type of arbitrary code execution in which an attacker can execute their own commands remotely, completely compromising the system running the vulnerable application. They can use this access to steal secrets, spread malware, pivot to other systems, or potentially backdoor the system running the application. Keep in mind, this system may be a company’s server, cloud infrastructure, or it may be your own system.
Anyone following security issues in AI development is aware that RCEs are flying off the shelves at alarming rates. A condition that was previously considered a rarity is becoming common. We even commented during our Black Hat USA presentation that it was strange to see people praising CISA for promoting memory safe languages to avoid things like remote code execution, while at the same time praising organizations essentially building RCE-as-a-Service. Some of this is mind-boggling, since in many cases, outsourcing these functions isn’t a better approach. In the previous example, writing out the Plotly code instead of generating it at runtime is relatively easy, more efficient, and far more robust.
Up until AI came along, the use of Python exec() was considered poor coding practice and dangerous. Now, developers shrug and state that’s just how applications work. As a matter of fact, agent platforms like HuggingFace’s smolagents use code execution by default. This should be a wakeup call. So, we dynamically generate code, provide deep access and the ability to call tools, all with a lack of visibility. What could possibly go wrong???
Not only have developers chosen paradigms to generate and execute code at runtime, but worse yet, they’ve begun to perform this execution in agents with user (aka attacker) input, executing this input blindly in the application. In our presentation titled Hack To The Future: Owning AI-Powered Tools With Old School Vulns at Black Hat USA this year, we refer to this trend as Blind Execution of Input, which is the purposeful execution of input without any protection against negative consequences. This condition certainly leads to RCE and other unintended consequences, providing attackers with a significantly larger attack surface to exploit.
An application that takes user input and combines it with LLM functionality is a recipe for a bad time from a security perspective. Another common theme in our presentation, as well as that of other presenters on stage at Black Hat, is that if an attacker can get their data into your generative AI-based system, you can’t trust the output.
Things Will Get Worse
Using the outsourced approach where a more predictable, deterministic approach is a better fit will continue to degrade software from both a reliability and a security perspective, and it will impact the future of software development.
Vulnerabilities in AI software have made exploitation as easy as it was in the 1990s. This was the “old school” hint in the title of our talk. It isn’t a good thing, because the 90s were a sort of free-for-all. Worse, in the 90s we often had to live with vulnerabilities in systems and applications. For example, one of the first vulnerabilities I discovered, against menuset on Windows 3.1, was impossible to fix. There were no mitigations, and most people were unaware of its existence.
As the outsourcing of logic to LLMs accelerates, things will worsen not only due to incorrect output and hallucinations but also from a security perspective. Anyone paying attention to the constant parade of vulnerabilities in AI-powered software can see this trend with their own eyes. These vulnerabilities are often found in large, mature organizations with dedicated security processes and teams in place to support them. Now, consider startups and organizations that implement their own experiments using non-deterministic software, often with a lack of understanding of how these systems can be manipulated. It’s become a game of speed above everything else.
As I’ve said from the beginning of the generative AI craze, the only way to address these issues is architecturally. Most of AI security is just application and product security, and organizations without these programs in place are in trouble. If proper architecture, design, isolation, secrets management, security testing, threat modeling, and a host of other activities weren’t considered table stakes before, they certainly are now. And, perhaps unsurprisingly, they still aren’t being done. Anyone working for a security organization sees this every day.
In essence, developers need to design their applications to be robust to failures and attacks. It helps to consider designing them as though an attacker can manipulate and compromise them, working outward from this premise. As the adage goes, an attacker only needs to be successful once; a defender needs to be successful every time. This makes something that sounds great in theory, like being 90% effective, sound less impressive in practice.
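To put rough numbers on that adage, assume a control that blocks 90% of attempts and independent tries (a toy model, not a threat assessment):

# Probability an attacker succeeds at least once in n independent attempts
# against a control that blocks 90% of attempts
def p_breach(n: int, block_rate: float = 0.9) -> float:
    return 1.0 - block_rate ** n

print(p_breach(1))   # ~0.1: a single attempt usually fails
print(p_breach(10))  # ~0.65: a patient attacker is already favored
print(p_breach(50))  # ~0.99: persistence all but guarantees success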
Keep in mind that performing a code review won’t provide the visibility it traditionally has. This should be obvious, since the code that would be audited doesn’t exist until runtime. You’ll have to pay more attention to validation routines and the processing of outputs, putting huge question marks over the black box in the middle. And, of course, you’ll need to ensure the application is properly isolated.
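To make “validation routines” slightly more concrete, here’s a rough sketch of a static screen one might run over generated code before executing it. Everything here is illustrative (the allowlist and function are hypothetical): treat it as a speed bump layered on top of isolation, never a security boundary.

import ast

ALLOWED_MODULES = {"plotly"}  # hypothetical allowlist for the plotting example

def screen_generated_code(code: str) -> bool:
    """Rough static screen for LLM-generated code; a speed bump, not a boundary."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False  # broken code never reaches exec()
    for node in ast.walk(tree):
        # Only allow imports of the expected plotting library
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] not in ALLOWED_MODULES for alias in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] not in ALLOWED_MODULES:
                return False
        # Reject direct calls to the usual escape hatches
        elif isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name) and node.func.id in {"exec", "eval", "__import__"}:
                return False
    return True

Even then, a determined attacker has plenty of ways around a screen like this (attribute tricks, string building, and so on), which is why isolation has to carry the real weight.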
Some may suggest instrumenting the applications with functionality to perform runtime analysis on the generated code. Sure, it’s possible, but the performance hit would be significant, and even this is, of course, far from a silver bullet. You might not even get the value you think you are getting from this instrumentation. Also, you’d have to know ahead of time the issues you are trying to prevent. That is, unless you plan to layer more LLMs on top of LLMs in a spray-and-pray configuration.
To keep this grounded, all AI risk is use case dependent. AI models don’t do anything until packaged into applications and used in use cases. There may be cases where reliability, performance, and even security are of lesser concern. Fair enough, but it’s a mistake to treat all applications as though they fall into this category, and it’s far too easy to overlook something important and view it as insignificant.
If you work at an organization that isn’t building these applications and think you’re safe, you might want to think again, because you are at the mercy of third-party applications and libraries. It would be best to start asking hard questions of your vendors about their security practices as they relate to applications you purchase. Especially applications that use generative AI to generate code and execute it at runtime.
Near the end of our presentation, we had some advice.
Whether outsourcing the logic of an application to LLMs or having the LLM dynamically generate code, assume these are squishy, manipulable systems that are going to do things you don’t want them to do. They are going to be talked into taking actions that you didn’t intend, and fail and hallucinate in ways you don’t expect. Starting from this premise gives a proper foundation for deploying controls to add some resilience to these systems. Of course, not taking these steps means your applications will contribute to the ongoing dumpster fire rodeo.
Although the singularity isn’t here, the shitularity certainly is, and it’s moving to infect every corner of our humanity. In the shitularity, shit’s upside down: people welcome the extinction of humanity and worship marketing material as prophecy. This is the environment that spawns people like Bryan Johnson and elevates them to hero status. That’s right, a dude who obsesses over his son’s nightly boners while occasionally using him as a blood boy has been hoisted up and put on a pedestal. You’ll be happy to know that as of January 2025, he’s no longer using his son as a blood boy; no update on the boners.
You might wonder what kind of world would promote someone like Bryan Johnson to the level of a deity. Although dying is certainly an uncomfortable prospect, it’s not purely about him trying to cheat death, at least, not completely. It’s true that the not dying aspect is part of Kurzweilian transhumanism, but Bryan’s popularity is about something far simpler. Numbers.
You see, if you can turn health and happiness into a set of numbers, then you can measure them. If you can measure them, you can optimize the shit out of them. It’s this optimization ethos that drives everything in modern tech movements, and if it works for tech, it must work for humanity. Of course, to believe this, one has to set aside Goodhart’s Law.
If you can turn health and happiness into a set of numbers, then you can measure them. If you can measure them, you can optimize the shit out of them.
Another thing to realize is that his spiritual commentary on superintelligence isn’t an outlier in the community. Here’s a clip of him from the Honestly podcast where he says we are creating god in the form of superintelligence, and we had it backward all along.
For the tech community, Bryan has come to symbolize the physical embodiment of hustling. After all, he’s performing every movement of his hustle publicly. He’s suffering for our sins of human mortality on our social media timelines.
The Cult That Requires Supplements
Bryan Johnson is creating a cult, but instead of the traditional cult leader approach of claiming to be a prophet or an incarnation of a god, Bryan has made himself the god, that is, until we create superintelligence. He’s a god of a new and everlasting covenant, one that requires supplements. For $412 a month, you too can remake yourself in Bryan’s image. After all, Christ died, but you don’t have to.
This package is topped off with a bottle of Snake Oil, because nothing screams modern fashion like saying the quiet part out loud.
Bryan has joined a cadre of crackpots, including people like Ray Kurzweil and Alex Jones, who latched on to an age-old money-making scheme: when all else fails, sell supplements. Unlike the old snake oil salesmen hawking bottles out of a wagon moving from town to town, he’s got a website and a social media following. Supplements aren’t drugs and require no proof of effectiveness, which means faith is part of the bargain. Perfect. He’s selling highly nutritious communion wafers that will barely sustain your existence. Body of Bryan. One thing I’ll say about old school snake oil is that at least it would get you drunk; Bryan’s gives you prediabetes.
The living forever bit is telling people what they want to hear. It’s part of the performance to get people to buy supplements and swag. By buying the swag and sharing his parables, sorry, social media posts, people can signal their affiliation. Many people now prominently feature “longevity” in their profiles along with e/acc, magic internet money, and whatever other crazy horseshit they believe that is anti-human.
The concept of “A Bryan Johnson” was inevitable in our current environment. Someone who tells us we can transcend death if we only believe hard enough and shell out some cash.
Which Johnson?
When I learned of Bryan Johnson and his whacked-out antics, I considered the contrast with Brian Johnson. My mind immediately went to which of these two, on their deathbed, will have the fonder reflection on their lives. Yes, sorry to break this to everyone, but Bryan is absolutely going to die.
Let’s look at the two Brian/Bryan Johnsons. One is the 77-year-old singer of the band AC/DC. The other is the 47-year-old entrepreneur thinking he won’t die. One is out there living his best life, racing around in cars and having a good time. The other is not living life at all, choosing instead to torture himself in an elaborate performance. One sells music and good times, the other sells supplements and pain. One wants to salute those about to rock, the other salutes stunts camouflaged as experiments. I could go on, but you get the picture.
When it comes to living life and loving life, here’s Brian Johnson in 2009 running around the stage, hanging off a rope, ringing a gigantic bell. Now tell me, who’s having more fun as they age? Bryan Johnson wishes he had a following like this.
We don’t need to be rock stars to live a fulfilling life. Sure, the money and fame don’t hurt, but there are many areas of satisfaction that we can share. However, we are allowing people obsessed with technology to define what a good life is supposed to be, which is dangerous because these perspectives often miss the point entirely.
For example, consider the audience response in that video. Surely, there are more optimized ways to deliver music to your ears. If you are looking for a deeper experience, a VR headset and more cameras on stage would deliver a far more optimized experience tailored purely to your preferences. Hell, you could even choose which camera to watch at any given time! Surely, this must be better than buying tickets, getting in a car, finding a place to park, waiting in line, and then waiting for the band to start playing.
As optimization often does, it sheds value as it optimizes.
As optimization often does, it sheds value as it optimizes. Viewing the audience, it’s obvious to any actual human being that the experience those people are having isn’t the same experience an optimized VR experience provides. What people attending a live performance realize is that they are part of the performance along with the artist. Viewing the world this way opens the door for us to have all kinds of experiences despite not being rock stars and having money flying out of our pockets. These same doors shut through optimization.
Another point is that living life trying not to die is not living life at all, like the person so scared of dying in a plane crash that they never travel and see the world. But we’ve allowed a strange reframing of this experience in that we aren’t missing out on experiences; we are prolonging our existence, which opens the door to vastly more experiences. This is yet another argument that’s technically true, but practically false. Sure, we could live at the hospital, and we’d always have a medical team at our disposal, which could extend our life, but that’s not living life or gaining meaningful and fulfilling experiences.
The same sort of missing out happens if we live life with the belief that we won’t die. We always put off potential experiences because we’ll just do them later. We’ve all known people whose lives were cut short and who missed out on things they’d like to do. It’s the temporary nature of life, along with its stunning finality, that pushes us to live a good life, to seek out experiences instead of putting them off, to be fulfilled.
As an aging adult, I don’t want to die. I’ve also had many people around me pass away, which puts things in perspective. The prospect of spending years of my life focused on trying not to die instead of living life doesn’t appeal to me either. This doesn’t entail nightly binge drinking or a mainline of ice cream into my veins. I go to the gym five days a week, so health is certainly on my mind. However, when health and longevity become a hustle, something that must be performed and optimized to realize the true benefits, we need to acknowledge that we are doing something else.
Of course, all of this longevity garbage only takes into account physical health. But mental health is equally, if not far more, important to longevity. All of this grinding away at tests on the body and measuring boners doesn’t leave much time for fulfillment and happiness, like spending time with your family and friends without drawing their blood, asking about erections, or shaming them because they aren’t fasting hard enough.
The thought of dying is scary, but the thought of dying without living life to the fullest is absolutely terrifying. Thankfully, we still live in a world where Brian Johnson is far more well-known (and loved) than Bryan Johnson, but it’s a mistake to be complacent about these things. Cults have an odd way of attracting followers.
How Did We Get Here?
Bryan Johnson is an example of how tech bro culture infiltrates broader cultural movements. Bryan didn’t invent this himself. He drew inspiration from earlier techno-utopians like Ray Kurzweil. Although thankfully, society at large still rejects the complete tech bro vision of culture, it’s leaky. Take a look at these new Olympic-style games.
It’s a mistake to assume that because this example involves something physical, it has nothing to do with tech bro culture. After all, human evolution is slow, but you can get a new iPhone every year, so why not roid it up and speed things along? This disturbing logic makes sense to many people.
We are dazzled by spectacles and monstrosities, so when someone pitches human excellence through augmentation by any means necessary, our interest is piqued. This exhibit is evidence of the direct impact of the current tech bro mindset on our culture. Although it would be easy to write this event off as stupid people doing stupid shit, this is tech bro culture at work. The gnashing of teeth from information overload wears us down into acceptance. And then, more people die.
This mindset warps people’s sense of reality, to the extent that when people share preferences that don’t align with Johnsonian expectations, others assume they are lying.
What’s happening in this image is simple. Men chose the “after” image because they thought that’s what they were supposed to prefer. Women just selected their preference. This isn’t rocket science or some epic conundrum that we need to investigate. Bryan Johnson’s “before” photos look better, too. That is, unless you prefer the aesthetics of an unwrapped mummy.
Against Life Extension
The argument for not living a longer life seems rather silly and self-sabotaging until you realize that the implications of living longer aren’t pretty. Anyone with aging parents or grandparents can attest to this. The body may live on, but the mind doesn’t cooperate.
Technology has already granted us a longer lifespan. However, this longer lifespan has set us on a collision course with cognitive decline and a lack of independence. Not exactly the definition of living your best life. Until cognitive ailments are cured and the ability to regenerate functionality is achieved, there isn’t much sense in living longer.
This reminds me of the point made by Aldous Huxley in his novel After Many a Summer Dies the Swan, or of Tithonus from Greek mythology, who was granted eternal life but not eternal youth. I can already hear the tech bro response: “But we’ll have fixed that.” Unless we can regenerate functionality, the prospect of living to 150 years old is downright terrifying.
In the past, people have speculated about cryonics, being stored frozen and then thawed out when the technology is advanced enough to revive them and cure their illnesses. This conjures a scarier and more realistic picture.
Since it’s easier to prolong the life of the body than to address cognitive decline and functional regeneration, what if the way aging bodies are stored ends up not being cryonics, but in a memory care facility? Imagine all of the tech bros milling around playing bingo, not knowing where they are or what year it is, and where Elon Musk thinks he’s still friends with Sam Altman.
What if the way aging bodies are stored ends up not being cryonics but in a memory care facility?
To the bros who think brain implants and augmentation will save us: not so fast. We should consider that augmentation with technology may actually make neurodegenerative conditions worse. The long-term effects of advanced brain implants aren’t known, but they could optimize away the very activities that stave off conditions like dementia, activities such as reading, solving puzzles, and writing letters. Even worse, no matter how many brain implants and connections you create, it’s still a confused mind, now with access to even more data and stimulation. That seems to create an even worse situation than what happens biologically.
In a new article, Francis Fukuyama argues against life extension, calling out, “Nearly half of all seniors in their mid- to late-80s suffer from some form of degenerative neurological disease like Alzheimer’s or Parkinson’s, in the later stages of which they are completely unable to care for themselves.” This isn’t a pretty prospect. Fukuyama also explores other aspects, such as the economic impacts.
These tech bros don’t just feel they are extending their lives. They think they will live forever. If that’s the case, then the memory care units will be packed.
Living A Good Life
What Bryan Johnson and his cohort fail to realize is that overall health cannot be reduced to a set of numbers that can be measured and optimized. Leave it to tech bros to reduce life down to a set of OKRs. So much of life is lived and enjoyed outside of metrics. Mental states have a significant impact on our longevity, and supplements won’t improve that. It’s the other joys in life that keep us young.
The definition of a good life is certainly subjective and sometimes situational, but most people can recognize it when they see it. Or at least, they can today. In the near future, that may not be the case. We are approaching a world where people prefer all experiences to be mediated through technology, but is this a good life?
This is reminiscent of E.M. Forster’s 1909 short story The Machine Stops, in which people are horrified by direct experience. In the story, when the character Vashti sees the vast flank of the ship, stained from exposure, and encounters smells that are neither strong nor unpleasant, she is horrified. We, too, are beginning to prefer mediated experiences over direct ones. I think we’ll find that, in the long run, mediated experiences don’t provide true fulfillment. Four hours on TikTok doesn’t compare to four hours of hiking in a new location, for example.
We are allowing options for a good life to dwindle, enabled by a performative culture and technology. In the immortal words of the American philosopher Tom Keifer, you don’t know what you got till it’s gone. If you’ve never had something, you don’t notice it being gone. Some of us can decide to regress to a previous mean, but this mean may not exist for future generations. The thought of picking up a guitar and doing it ourselves won’t occur as an option because there are far easier ways to make guitar sounds. The point of learning and playing any instrument isn’t to make noise. The noise is a byproduct of the satisfaction derived from learning and playing. This perspective can be applied to many aspects of life that bring meaning and fulfillment.
I find this incredibly sad for future generations, as many avenues to find fulfillment and satisfaction are collapsing. These are avenues that numbers cannot measure. However, we are told these activities are unoptimized, and that by applying technology, they can be made “better.” As we’ve seen time and again, optimization can strip an activity of its very point.
Future Prediction
I predict that Brian Johnson will live to an older age than Bryan Johnson. In fact, I predict a future headline. Bryan Johnson, a man focused on longevity, claiming he’d live forever, died at the age of 67. See you in 20 years.
We are continually inundated with examples of silly errors and hallucinations from generative AI. At this point, it’s no secret to anyone on the planet that these systems fail, sometimes at rather high rates. These systems also have a tendency to make stuff up, which isn’t a good look when that data is used for critical decisions. We’ve become numb to this new normal, creating a dangerous condition where we check out instead of recheck. But what happens when these errors and hallucinations become facts, facts that may be impossible to dispute or lurk in the background unseen and uncorrected?
Perspectives From Our Younger Selves
Imagine traveling back in time for a conversation with our younger selves about the current state of AI.
Younger: Wow, it must be great to live in a world without cancer or dementia.
Older: No, we haven’t cured cancer or dementia.
Younger: Well, at least people are super smart now.
Older: No, there are still many dumbasses.
Younger: At least you have systems that don’t make mistakes.
Older: No, they make mistakes all the time.
Younger: Then, what in the hell do you do with systems like this?
Older: Mostly memes and short videos of stupid shit. Oh, we even try to impress world leaders with what they’d look like as a baby with a mustache.
Although it may seem silly, this thought experiment is informative. It puts our current AI moment in perspective and should add some humility. These systems aren’t the magnificent, magical boxes capable of handling every task with equal proficiency in both work and life. They are tools that we can use for specific tasks, far from the perfected AI of science fiction, and this is where the issues creep in.
Icebergs, Grenades, and Damage
I’ve made the grenade analogy before in relation to agents. It’s apt because a grenade causes damage, but not immediately. It’s like the classic joke grenade, a prank you play on friends with the expectation of future laughter. Only with AI, the result isn’t a barrel of laughs. It’s a barrel of something that stinks and should be spread over a field as fertilizer.
The mistake is that seeing so many instances of these issues gives us the false impression that they are being caught, and possibly even corrected. Think of issues like hallucinations as an iceberg: far more instances lie unseen beneath the surface, waiting to send our ship to the depths.
There’s also the problem that not all hallucinations are so easy to identify. The ones that do get identified tend to be blatantly obvious or subject to additional validation, such as checking the cases referenced in a legal document. This is why it seems as though only lawyers and politicians are making fools of themselves with AI. The landscape is far broader than these two categories.
It’s also instructive to see how people respond when these issues are brought to light. In the recent MAHA report scandal, the White House spokesman referred to AI hallucinations as “formatting issues.” Yeah, right. Imagine walking into your bank and finding out you have no money in your account. Frantic, you ask the teller what’s going on, and they tell you that you have no money because of a formatting issue. We can’t let people downplay these problems because they are common. It’s because they are common that we need to be more concerned.
We can’t let people downplay these problems because they are common. It’s because they are common that we need to be more concerned.
Although some instances may seem silly, there are no doubt real consequences, such as AI hallucinating into people’s medical records, because we all know that can’t end badly. Hypothetically, let’s imagine the generative AI system in use is 99% accurate, which is enormously generous. At 10,000 transactions a day, that’s potentially 100 issues. Crank it up to 1,000,000 a day, and that’s 10,000. This is terrifying considering the realistically higher error rates these systems actually exhibit. There’s no doubt a river of manure flowing into data stores. The pin has been pulled.
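Generously assuming a flat 1% error rate and independent outputs, the back-of-the-envelope math is easy to check:

# Expected daily bad outputs at a given accuracy (hypothetical flat error rate)
def expected_errors(daily_outputs: int, accuracy: float = 0.99) -> float:
    return daily_outputs * (1.0 - accuracy)

print(expected_errors(10_000))     # ~100 issues per day
print(expected_errors(1_000_000))  # ~10,000 issues per day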
The nature and pattern of errors differ significantly between AI and humans.
I can already picture the AI crowd rolling their eyes and opening their mouths to issue the overused retort, “But humans make mistakes too.” Yes, they do, but human mistakes and AI mistakes aren’t the same. The nature and pattern of errors differ significantly between AI and humans. Human error tends to be more predictable, with mistakes clustering around low expertise, fatigue, high stress, distraction, and task complexity. In contrast, AI errors can occur randomly across all problem spaces, regardless of complexity. This is why AI systems continue to make boneheaded errors on seemingly simple problems.
A nurse may indeed make a mistake in an annotation in a patient’s medical record, such as a misspelling or an incorrect date or time. More severe incidents, such as mixing up patients or medications, can also occur, but are much rarer. What nurses don’t do by mistake is fabricate an entire event that never happened.
With the widespread use of AI, these errors are bound to have significant impacts. They won’t all cause major harm, but they will all tell an inaccurate story. Severity will depend on the system consuming the data and its intended use. Some will be purely annoying, but others will have serious consequences. A person with hallucinated data in their medical record may be prescribed the wrong medication or a medication to which they are allergic. I’m speaking in vagaries here because the extent of the problem isn’t fully understood, but one thing is certain: it’s getting worse as the usage of generative AI expands.
Another problem will be tracing these issues back to their source. It won’t always be obvious when a mistake originates from an AI system or a human. After all, these systems are meant to augment human processes. When it comes to blame, humans will always blame AI, while system owners will always blame the humans. It’s a mess.
The New Truth
Ultimately, we’ll uncover a disturbing reality. In many cases, hallucinated data becomes the truth. After all, it’s the “fact” that’s in the data store. Imagine trying to dispute this with someone at the DMV, customer service, or our bank; the list goes on and on. We become yet another in the long line of those contesting the “facts” on hand, directed into a Kafkaesque nightmare, navigating some bureaucratic maze in search of a resolution.
The problem cements further when the data is incorrect and there is no human to consult, only an AI making decisions based on the data it has. It offers apologies, not resolutions. And these are only the instances we become aware of.
Many stealthy decisions occur in the background, made by invisible systems that utilize these new “facts” to make determinations that impact our lives, our families, and our health. We may never fully understand the impact this new truth has on us, our families, or our future.
All of this damage stems from the systems we are using right now, today. Even if better, more accurate systems emerge, the damage being done today still stands. These new, more advanced AI systems may be trained or fine-tuned on hallucinated data generated by current AI systems. So, we’ve got that to look forward to.
These new, more advanced AI systems may be trained or fine-tuned on hallucinated data generated by current AI systems.
The Cause
Some of these issues can be attributed to automation bias, but it’s far from the whole explanation. There is a push from the top to utilize AI everywhere possible. Many companies are asking employees to do more with less. Well, when you have less time, one of the things you spend less time doing is worrying about quality or accuracy.
We’ve also been inundated with CEOs and other business leaders proclaiming their intent to replace everyone with AI. There isn’t much motivation to do a good job in environments like this. We’ve seen this happen in the past with jobs getting outsourced.
The reality is that these are self-inflicted wounds caused by the rapid adoption of error-prone technologies being thrown into use cases where the negative impacts aren’t considered.
What We Can Do
If companies and individuals intend to augment their activities to optimize and increase efficiency, they need to ensure that this optimization doesn’t cause harm. There need to be processes in place to identify and address these issues before they cause problems. This isn’t happening today.
Unfortunately, there isn’t much we, as future victims, can do, especially since we don’t know the extent of the problem. It’s impossible to be aware of all the people using these systems today and how they may affect us in the future. From government to private business, these tools are utilized for a wide range of tasks, both mundane and critical.
I’m not a fan of big government or excessive regulation, but it’s hard to see how these issues get solved any other way, since we only become aware of the harm after it has happened. Consumer protection is something a government is far better equipped to handle than a handful of consumers. The tech crowd’s claims that burdensome regulations inhibit innovation are absolutely true, and inhibiting innovation shouldn’t be the goal. However, the absence of regulation harms people, leaving consumers powerless to act in their own defense. Unfortunately, reasonable, level-headed regulation is not in our future.
At the very least, we should avoid AI in high-risk or safety-critical use cases. The thought of ChatGPT running something like air traffic control is terrifying. However, handing out this advice at this point seems like trying to reason with a hurricane. Admittedly, for users, it may not be immediately apparent that the tasks they are performing or the data they are collecting can ultimately lead to one of these scenarios.
The Problem At Our Feet
AI hallucinations and other inaccuracies are like grenades with the pin pulled, only instead of chucking them far away from ourselves, we’ve dropped them at our feet, staring at them, wondering what happens next. The only question is, how long will it take for us to find out?
What’s the effect of exposing children to AI at a very young age? Well, we are about to find out. President Trump signed an executive order called Advancing Artificial Intelligence Education For American Youth, and, compared with the other executive orders pushed by the administration, it may be tempting to consider this one relatively benign. I urge people to reconsider, because this order could result in catastrophic and irreparable damage to future generations of children. Move fast and break things is all well and good until the thing being broken is your child.
This move represents many of my fears coming to fruition, with all of the negative aspects I’ve been warning about becoming cemented into the foundation of future generations. You may have heard me talk about conditions such as cognitive atrophy, but early exposure to AI in education can lead to something far worse: cognitive non-development.
There are also technical concerns, including issues with security, privacy, alignment, and reliability. Children are rich sources of data wrapped up in easily manipulable packages, so it’s no surprise that tech companies are opening their AI tools to them. However, I feel these concerns are more evident to most people than the negative cognitive impacts that the introduction of AI to young children creates, especially while their brains are still developing and maturing. These are the issues I highlight here.
Key Points
Since this is a long article, I’ll call out a few key points:
Cognitive offloading by children and adolescents to AI short-circuits cognitive development, impacting executive functions, logical thinking, and symbolic thought
We convert social to anti-social activities
The very skills kids need to use AI effectively never develop due to the overuse of AI
Core foundations of critical thinking, data literacy, and probability and statistics need to be introduced before any AI curriculum
Worldviews will be shaped by interactions with AI systems instead of knowledge, experience, and exploration
Kids need time to explore the generative intelligence inside their skulls
What Are The Hopes?
Before we begin, it’s helpful to take a step back and consider what the product of this education is supposed to look like. We envision emotionally balanced young adults exercising hardened critical thinking skills and ingenuity to create the next wave of high-tech gadgets. This is the stereotypical AI bro vision of an AI tide lifting all boats, but the reality strays far from the vibes.
There’s nothing fundamentally wrong with this perspective except that exposing children to AI tools beginning in kindergarten almost guarantees the opposite. This is for two primary reasons: the negative cognitive impacts on early childhood and adolescent development, and poor curriculum implementation.
Now, can this program succeed in a way that benefits children and empowers them for the future? Absolutely, but it would be nothing more than success by miracle. A program like this needs to be well thought out and studied, with a gradual implementation that weighs potential tradeoffs and implements mitigations for the negative effects. This is NOT what we are getting here. An effort like this fails 999 times out of 1,000, if not more often. Just read the wording of the executive order and imagine people rushing to implement it, along with the bros swarming like flies around a manure pile, eager to pitch their half-baked products.
The introduction of AI and AI tools so early in childhood education will be yet another big mistake that everyone realizes in hindsight. To set the stage, many fail to realize just how much of a failure EdTech has been, and now, without addressing any of those issues, we want to add even more screens to the classroom.
I don’t think everyone involved is a bad actor with perverse incentives. I think most people genuinely want to see children succeed and flourish. However, there is no consideration here for the long-term cognitive impacts on children.
AI In Education
While I was writing this article about AI in K-12, two other articles were released about AI in higher education: one from New York Magazine about students using ChatGPT to cheat, and another in Time about a teacher who quit after nearly 20 years because of ChatGPT. The cheating article is creating a flurry of hot takes on social media. We’ve reached a technological tipping point where students don’t see the value in education. They want accomplishment and bragging rights (degrees) without effort. Apparently, attending an Ivy League school is no longer about the education you receive but the vibes you create and consume.
And of course, cue the defensive hot takes.
This is a common retort: mistaking low-quality Q&A for actual curiosity and insight. This information was available to us all along; it just required more friction to get. So, if that’s the case, then the answers we wanted weren’t worth the effort. This is hardly an earth-shattering insight, yet it’s being pitched to us as though it is. Keep in mind, just because these people aren’t selling a product doesn’t mean they aren’t selling something.
As usual, Colin Fraser is on point.
A problem we’ve always faced is that we never know, in the moment, when we are learning something that will be valuable later. We exercise a stunning lack of current awareness for future value. This happens in all manner of experiences, but especially in education. Adults lack this awareness, and it’s completely delusional to expect K-12 students to magically sprout it.
We exercise a stunning lack of current awareness for future value.
There is value in learning things, even things you don’t use for your job. We seem to think learning is contained in individualized components that fit neatly into buckets, but there are no firewalls around these activities. Learning things in one subject is rewarding and beneficial, even to other subjects. Colin is also right about driving the cost of cheating to zero, a major point everyone seems to gloss over.
In his book, Seeing What Others Don’t, Gary Klein tells the story of Martin Chalfie walking into a casual lunchtime seminar at Columbia to hear a lecture outside his field of research. An hour later, he walked out with what turned out to be a million-dollar idea for a natural flashlight that would let him peer inside living organisms to watch their biological processes in action. In 2008, he received a Nobel Prize in Chemistry for his work. This insight doesn’t come from staying in your lane, being single-minded, or asking the right questions to an LLM. Yet, this is exactly the message thrust upon us. AI doesn’t provide the happy accidents that result from exploration and the randomness of life.
Using AI instead of our brains gives us the illusion of being more knowledgeable without actually being more knowledgeable. We shouldn’t underestimate the power of this illusion, because it blinds us to certain realities. AI offers the illusion that completing tasks and acquiring knowledge are the same thing, but being knowledgeable and being productive are completely different attributes. The positive feeling of being more productive masks the fact that we aren’t acquiring knowledge. Numbers end up overshadowing quality, and productivity vibes end up trumping learning.
Some may argue that being productive is preferable to being knowledgeable in a business context, but that hardly applies in education. The ultimate goal of formal education is to learn, not to produce, with the PhD being the exception. Education shouldn’t be about creating useful automatons, no matter how many business leaders may want them.
AI In K-12
Introducing these tools in K-12 means they arrive during critical brain development, where they could short-circuit the development and maturation of executive functions, logical thinking, and symbolic thought as students offload problems to AI systems. Instead of skills atrophying through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools. Whatever the AI bro impulse may be, we should all agree that exposing kindergarteners to AI is an incredibly bad idea.
Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools.
All of the issues and negative impacts I’ve been pointing out, such as the cognitive illusions created by the personas of personal AI, along with associated harms such as dependence, dehumanization, devaluation, and disconnection, get far worse with early exposure in childhood and adolescent development, because children never discover any other way. Blasting children with AI technology in their most formative years of brain development pretty much guarantees lifelong dependence on the technology, something that elicits drooling at AI companies but is hardly in the best interest of human users. What we consider overreliance today will be normal daily use for them. Worldviews will be shaped not by knowledge and experience, but by interactions with AI systems.
There’s something fairly dystopian about prioritizing AI literacy while actual literacy is on the decline, disarming future students of the very skills they’d need to keep AI in check. The impression seems to be that if you can teach kids AI, you can offset the decline in literacy. After all, why should something like reading comprehension matter if tools provide the comprehension for us through a mediation layer? Hell, why stop there? Why not apply AI to every task that could possibly be outsourced? We are close to creating a world where raw data and experiences never hit us.
The Future Isn’t Now
In their book AI 2041: Ten Visions for Our Future, Kai-Fu Lee and Chen Qiufan have a story about children who grow up and go through school with companion chatbots to assist them in life. These chatbots adapt to them and assist them in areas where they have challenges. AI systems are ever-present companions following them through school and in life. The story is meant to have the trappings of utopia, but ends up sounding like a dystopian hellscape. To make matters worse, their story considers a perfected AI system that doesn’t have all the issues and drawbacks of today’s AI systems.
We continue to make the mistake of treating the AI systems of today as though they are the AI systems of tomorrow, encouraged into hyperstition and thought exercises of, “It doesn’t work, but just imagine if it did!” To say that AI will cure cancer and become the cure for all of humanity’s ills may turn out to be true at some point. But these accomplishments have yet to come to fruition, and they don’t appear on the horizon either. So, why are we treating these systems as if they’ve already accomplished goals they haven’t? The highly capable tutor/companions of Lee and Qiufan don’t exist, yet we want to apply this non-existent vision to K-12 education as though they do. Even if they did exist, where would all this highly personalized data about your child be stored, and what would be done with it?
Less Capable, More Dependent, and Less Stable
The crux of the issue is that this program will not set kids up for success, in an AI world or otherwise. This early exposure will make them less capable, more dependent, and less stable. This curriculum could teach kids all the wrong things: that answers can be immediate and simple, and that working out a problem isn’t as important as asking the right questions. We also teach that learning is comfortable. We give the impression that knowing things is not as important as knowing where things are stored. This is all bullshit. Kids can’t summarize their way to knowledge. But it gets worse.
Children exposed this early never learn how to do things for themselves. They end up outsourcing problems and decisions to AI. Instead of taking feedback on how to solve problems and challenging themselves to learn, they offload the problem to AI, leaving them incapable and lacking confidence in the absence of technology.
This technology dependence also creeps into their personal lives, meaning going about their typical day becomes unbearable without the ability to mediate it through AI. It becomes a source of authority for them and a way to avoid the difficult decisions that teach them lessons. It’s hard to imagine today the paralysis that will set in when the technology is absent, even for simple decisions like how to respond to a friend’s message or whether to go outside today.
Many adults may argue that this is a small price to pay for setting kids up for success in the future. There are two flaws here. First of all, this is a monumental price. Second, using technology more doesn’t automatically mean being better at using it. For AI use, the skills you learn outside of AI’s mediation are exactly the skills that make you better at using it.
We need to focus on teaching kids to use their brains, something I never thought I’d have to say when talking about… school.
This is typically when someone brings up the calculator, insinuating that nobody needs to learn math because it exists. Although I disagree with that premise, confusing a calculator with AI technology is also a mental mistake. Calculators and AI are far from similar technologies. A calculator isn’t a generalized technology that can be applied to many problem spaces. A calculator doesn’t provide recommendations, advice, or sycophantic outputs. It won’t tell you who to date or be friends with. Oh, and a calculator is always right, unlike AI.
The hypothetical that gets pitched around is imagining if Einstein or Von Neumann had access to AI and all of the wonderful things that would have sprouted from their genius. Maybe. However, I pose a different experiment: imagine if Einstein or Von Neumann were products of AI education from a very early age, where even inane curiosities were immediately satiated by an oracle. The likely outcome is that nobody would know their names today. We are products of our environments. Remember, there are no happy accidents with AI, only dense data distributions into which everything is shoved. In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
Avoiding Discomfort
Sam Williams from the University of Iowa said, “Now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.” Yet we are looking to introduce exactly this dynamic into K-12, precisely where we want students to grow.
The truth is, knowledge acquisition isn’t comfortable, and students avoid discomfort like the plague. When we use AI to complete assignments, we aren’t challenging ourselves. We aren’t developing our own perspective and forming new connections between concepts. Students find writing uncomfortable and are quick to outsource to AI, but writing truly is thinking. When we write, we are confronted with our thoughts and perspectives, challenging ourselves and forming new insights. One realization with writing is that the more you do it, the better you get. This realization never comes when it’s constantly outsourced to technology.
Using AI for work-related tasks may be helpful, but using AI for education or even life is idiotic. Yet, we continue to make these foundational mental mistakes. This would be like saying that since Taylorism worked for business, why not apply it to daily life? We all know where that leads.
But we also end up robbing students of a sense of accomplishment and fulfillment, of a long-lasting sense of satisfaction, not to mention the ability to focus. And for what? Because we believe that children will need to be non-thinking automatons to have a chance in the future? This theft will have a lasting impact on the mental health of future generations.
We may experience the extinction of the flow state by never allowing people to enter it in the first place. I’ve heard people argue that they’ve entered a flow state using AI. Maybe, but the very nature of using AI to complete tasks likely guarantees that you never enter a flow state. Either these people are confused about what a flow state is, or they mistake the illusion of productivity for creativity and flow.
As Ted Chiang mentioned in an article I’ve referenced before, “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Going to the gym isn’t comfortable, but the results are physically and mentally rewarding. The mental health benefits of going to the gym aren’t intuitive. After all, how can running on a treadmill or lifting weights, activities that work out your muscles, benefit your mental state? Yet, it does. There are no firewalls around exercise either. Knowing this doesn’t stop us from making the same mistakes in cognitive areas.
When Playing It Safe Becomes The Norm
Using AI to do things is perceived as safe because if the output is wrong, we can blame the AI, rather than having to work out the problem ourselves and potentially being wrong. There’s a blame layer between us and the problem.
Let’s take art, for instance. AI art is safe, unchallenging, and unfulfilling, providing no opportunity to learn about ourselves, others, or the world. And yet, the very fact that it’s safe and easy is what makes it so attractive. Failure can result from the paintbrush, but never the prompt.
Failure can result from the paintbrush, but never the prompt.
The best things in life come from not playing it safe. Taking a chance on a job, moving to a new location, or asking a person out on a date are all activities that aren’t safe, but they can end up being the best decisions we’ve ever made. We need to keep this instinct alive in children.
Lack of Resiliency
The more we rely on AI, the less we question its outputs. As we use AI more and our capabilities atrophy, we become less able to question the outputs and, hence, more dependent. We end up losing a critical capability when we need it the most, or in the case of early childhood exposure, never develop it in the first place.
Modern generative AI is far from error-free; it makes frequent mistakes and hallucinates. Students must build the cognitive fitness necessary to operate robustly with a technology that makes these frequent mistakes, and that fitness can’t be built on a foundation of the very AI that has these issues.
Students also need a foundation and the ability to explore outside AI mediation. This requires time as well as foundational courses and concepts, including critical thinking, data literacy, and probability and statistics. Early exposure to these concepts, combined with late exposure to AI, offers students the best chance to build this robustness.
From Social to Anti-Social
AI is a fundamentally anti-social technology. From the ground up, we are removing the human and replacing it with the non-human. Even social networks are transforming into anti-social networks. With the overuse of AI among children, we teach kids that humans are second-class citizens to AI. After all, the sales pitch is that AIs are better at everything, so why should children believe otherwise?
Handing kids an oracle to ask questions not only converts a social activity into an anti-social activity but also shifts authority away from humans and onto technology. This shift would still be bad even if the technology were perfected, but it is far worse given the error-prone technology of today.
Young children are quick to anthropomorphize and will form bonds with non-human companions. Although the video of the little girl not wanting to play with the shitty AI gadget is funny, that resistance won’t last once children are surrounded by AI. Kids will switch from actively using their imagination to becoming passive consumers of AI output.
The human retreat has already begun, as kids prefer interactions with friends mediated by a device. But now tech companies want to take this further. This is all happening outside of education, but kids can’t avoid forced interactions with their companion/tutor/friend/bot in the classroom, reinforcing this retreat.
Much of this slide comes from our tendency to oversimplify, not accounting for the bigger picture and the complexities involved. Take, for instance, a common claim that kids ask many questions, and since AIs never tire of answering them, pairing kids with AI is a natural fit. This seems like an almost throwaway point, a gotcha to any potential critic, but people making this point haven’t thought it through.
First of all, asking questions is a social activity. We interact with other humans in different environments, learning far more than the simple answer to our questions. This activity teaches us essential skills, including ones related to non-verbal communication. Humans also don’t answer questions the same way AIs do, often providing additional context and anecdotes that may further aid us in knowledge acquisition and retention.
This act connects us to other people and the world, making us active participants in something bigger rather than passive consumers of an answer. Anecdotes shared by my high school chemistry teacher still stick with me today. With an AI oracle, we don’t just lose context and perspective; we lose something human.
When it comes to context, any expert who has asked AI questions about their own topic area has been confronted with incorrect or incomplete information, the kind that prompts a reaction like, “I guess that’s technically true, but it’s hardly the whole story.” And this is what we want to make the norm.
Closing The Curiosity Gap
We are told that asking an AI questions makes people more curious, but AI actually closes the curiosity gap. By getting an instant answer, we satiate our curiosity and move on to the next thing, digging deeper or exploring further only in cases of pure necessity. This reinforces low attention spans, further reducing the ability to focus. At some point, System 2 may become extinct. What kind of world will that create, one of nothing but hot takes and vibes?
AI satisfies a need for quick answers. However, searching for answers in a more traditional way surrounds you with other valuable context: rich pieces of information that lead to new ideas and new understanding. Humans have an evolutionary need for exploration.
When using AI for exploration, you are never exposed to ideas and concepts you don’t want to be exposed to. I don’t think we fully grasp just how much of an impact this selection bias will have on the future.
Sure, there are situations where a quick answer is perfectly fine, mundane things like what time a movie starts or what temperature to set your oven for baking a pie. The mistake is assuming these situations apply evenly to all problem spaces, especially knowledge creation.
My Recommendations
Despite the many unknowns, we shouldn’t shut the door on new innovations, because that could also slam the door on new solutions. Although it doesn’t exist today, a robust tutoring bot focused on a single purpose and specific subjects could benefit students. The message here isn’t to discard everything but to be cautious, knowing there are tradeoffs and downsides, and to incorporate mitigations.
For a program such as this to be successful, it needs to be well thought out and studied, with a gradual implementation that also considers potential tradeoffs. Without this, you have no way of telling whether you are helping or harming until it’s too late. There is no way to succeed without this step. Beyond this up-front work, I’ll make four other suggestions.
Avoid Early Exposure
Students need plenty of time to develop their brains, not their technology skills. Early exposure should be avoided at all costs. Exposure to this curriculum should happen in high school, preferably in the last two years, not earlier. This is typically when vocational education programs are introduced in schools as well. This gap gives students time to develop skills and experiences outside AI influence and mediation. Kids adapt to technology quickly, so later exposure will not stunt their capabilities when the tools are introduced.
Create A Solid Foundation First
Before introducing the AI curriculum, a solid foundation in various topics should be established, including courses in critical thinking, data literacy, and probability and statistics. These courses and concepts are sorely lacking in K-12 education today, and their introduction is long overdue. Arming students with this foundational knowledge will allow them to question the outputs of these systems and build defenses against cognitive creep.
Smart Implementation
These courses should be implemented in isolation, away from other topics. AI shouldn’t be woven into every subject with a tie-in. Although some would argue that an effective AI tutor could help students struggling with certain subjects, these systems have yet to be developed, much less proven effective. In almost all cases, the AI would be used as an oracle, providing answers directly instead of the necessary understanding, and even discomfort, that helps students grow.
Solid Curriculum
The curriculum should focus on challenging students, not giving them answers. Kids often don’t realize when challenges are beneficial to them. AI tools should be treated purely as tools, not oracles or companions. The curriculum should avoid persona-style usage and teach kids how to think in terms of solutions. Appropriate labs should be constructed that let students explore concepts and define solutions, pulling AI tools in secondarily to complete tasks and realize the student’s vision. This way, there is a separation between the mental approach and the AI components.
Final Thought
Ultimately, we may end up with anti-social, dependent, and unstable young adults. We take so many skills for granted, skills we don’t realize we developed and honed in school, and now we want to apply technology to optimize these attributes away. We need to give future generations a chance to allow their brains to develop outside of AI mediation. Here’s something to consider.
Imagine an art teacher standing in front of a class. The students aren’t at easels or grasping pencils, but sitting in front of computers. They aren’t using their hands and tools to create a vision that originates in their minds. Instead, their fingers clack on keyboards, echoing through the class, as the teacher instructs them to be more descriptive and offer pleasantries to the machines. Is this really the world we want to immerse children in?
We are moving toward an existence where raw data and experience never hit us, as everything becomes mediated. We prefer optimization over expertise. I’m sure the illiterate masses of the Middle Ages felt powerful after leaving a sermon by the literate priest who mediated the message of the written word, but that was hardly the best state for individuals. Now we are applying the same logic to AI, with far-reaching consequences for the everyday life of an entire generation.
In the words of Aldous Huxley, many may mature to “love their servitude,” preferring optimization and rigid structures that take decisions off the table, making things easy and not requiring thought. In Zamyatin’s We, most inhabitants enjoyed living in the One State with its rules, schedules, and transparent housing. They were happy to trade free thought and experiences for optimization, comfort, and structure. It needs to be said, over and over again: these are dystopias, not roadmaps.