What’s the effect of exposing children to AI at a very young age? Well, we are about to find out. President Trump signed an executive order called Advancing Artificial Intelligence Education for American Youth, and, set against the administration’s other executive orders, it may be tempting to consider this one relatively benign. I urge people to reconsider, because this order could result in catastrophic and irreparable damage to future generations of children. “Move fast and break things” is all well and good until the thing being broken is your child.
This move represents many of my fears coming to fruition, with all of the negative aspects I’ve been warning about becoming cemented into the foundation of future generations. You may have heard me talk about conditions such as cognitive atrophy, but early exposure to AI in education can lead to something far worse: cognitive non-development.
There are also technical concerns, including issues with security, privacy, alignment, and reliability. Children are rich sources of data wrapped up in easily manipulable packages, so it’s no surprise that tech companies are opening their AI tools to them. However, I feel these concerns are more evident to most people than the negative cognitive impacts created by introducing AI to young children, especially while their brains are still developing and maturing. Those cognitive impacts are the issues I highlight here.
Key Points
Since this is a long article, I’ll call out a couple of key points:
- Cognitive offloading by children and adolescents to AI short-circuits cognitive development, impairing executive functions, logical thinking, and symbolic thought
- AI converts social activities into anti-social ones
- The very skills kids need to use AI effectively never develop due to the overuse of AI
- Core foundations of critical thinking, data literacy, and probability and statistics need to be introduced before any AI curriculum
- Worldviews will be shaped by interactions with AI systems instead of knowledge, experience, and exploration
- Kids need time to explore the generative intelligence inside their skulls
What Are The Hopes?
Before we begin, it’s helpful to take a step back and consider what the product of this education is supposed to look like. We envision emotionally balanced young adults exercising hardened critical thinking skills and ingenuity to create the next wave of high-tech gadgets. This is the stereotypical AI bro vision of an AI tide lifting all boats, but the reality strays far from the vibes.
There’s nothing fundamentally wrong with this perspective except that exposing children to AI tools beginning in kindergarten almost guarantees the opposite. This is for two primary reasons: the negative cognitive impacts on early childhood and adolescent development, and poor curriculum implementation.
Now, can this program succeed in a way that benefits children and empowers them for the future? Absolutely, but it would be nothing more than success by miracle. A program like this needs to be well thought out and studied, with a gradual implementation that considers potential tradeoffs and incorporates mitigations for these negative effects. That is NOT what we are getting here. This approach fails 999 times out of 1,000, possibly more often. Just read the wording of the executive order and imagine people rushing to implement it, along with the bros swarming like flies around a manure pile, eager to pitch their half-baked products.
The introduction of AI and AI tools so early in childhood education will be yet another big mistake that everyone realizes in hindsight. To set the stage, many fail to realize just how much of a failure EdTech has already been, and now, without addressing any of those issues, we want to add even more screens to the classroom.
I don’t think everyone involved is a bad actor with perverse incentives. I think most people genuinely want to see children succeed and flourish. However, there is no consideration here for the long-term cognitive impacts on children.
AI In Education
While I was writing this article about AI in K-12, two other articles were released about AI in higher education: one from New York Magazine about students using ChatGPT to cheat, and one in Time about a teacher who quit after nearly 20 years because of ChatGPT. The cheating article is creating a flurry of hot takes on social media. We’ve reached a technological tipping point where students don’t see the value in education. They want accomplishment and bragging rights (degrees) without effort. Apparently, attending an Ivy League school is no longer about the education you receive but the vibes you create and consume.

And of course, cue the defensive hot takes.

This is a common retort: the mistake of confusing low-quality Q&A with actual curiosity and insight. This information was available to us all along. It just required more friction to get. So, if friction was all that stood in the way, then the answers we wanted weren’t worth the effort. This is hardly an earth-shattering insight, yet it’s being pitched to us as though it is. Keep in mind, just because these people aren’t selling a product doesn’t mean they aren’t selling something.
As usual, Colin Fraser is on point.

A problem we’ve always faced is that we never know when we are learning something in the moment that will be valuable later. We exercise a stunning lack of current awareness for future value. This happens in all manner of experiences, but especially in education. Adults lack this awareness, and it’s completely delusional to expect that K-12 students will magically sprout this awareness.
We exercise a stunning lack of current awareness for future value.
There is value in learning things, even things you don’t use for your job. We seem to think learning is contained in individualized components that fit neatly into buckets, but there are no firewalls around these activities. Learning things in one subject is rewarding and beneficial, even to other subjects. Colin is also right about driving the cost of cheating to zero, a major point everyone seems to gloss over.
In his book Seeing What Others Don’t, Gary Klein tells the story of Martin Chalfie walking into a casual lunchtime seminar at Columbia to hear a lecture outside his field of research. An hour later, he walked out with what turned out to be a million-dollar idea for a natural flashlight that would let him peer inside living organisms to watch their biological processes in action. In 2008, he received a Nobel Prize in Chemistry for his work. This kind of insight doesn’t come from staying in your lane, being single-minded, or asking an LLM the right questions. Yet, this is exactly the message thrust upon us. AI doesn’t provide the happy accidents that result from exploration and the randomness of life.
Using AI instead of our brains gives us the illusion of being more knowledgeable without actually being more knowledgeable. We shouldn’t underestimate the power of this illusion, because it blinds us to certain realities. AI offers the illusion that completing tasks and acquiring knowledge are the same thing, but being knowledgeable and being productive are completely different attributes. The positive feeling of being more productive masks the fact that we aren’t acquiring knowledge. Numbers end up overshadowing quality, and productivity vibes end up trumping learning.
Some may argue that being productive is preferable to being knowledgeable in a business context, but that hardly applies in education. The ultimate goal in formal education is to learn, not produce, with the PhD being the exception. Education shouldn’t be about creating useful automatons, no matter how many business leaders may want them.
AI In K-12
Introducing these tools in K-12 means they arrive during critical periods of brain development, where they can short-circuit the development and maturation of executive functions, logical thinking, and symbolic thought as students offload problems to AI systems. Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools. Whatever the AI bro impulses, we should all agree that exposing kindergarteners to AI is an incredibly bad idea.
Instead of having skills atrophy through the overuse of AI, these skills never develop in the first place due to cognitive offloading to AI tools.
All of the issues and negative impacts I’ve been pointing out, such as the cognitive illusions created by the personas of personal AI, along with associated impacts such as dependence, dehumanization, devaluation, and disconnection, get far worse with exposure early in childhood and adolescent development, because children never discover any other way. Blasting children with AI technology in their most formative years of brain development pretty much guarantees lifelong dependence on the technology. That prospect elicits drooling at AI companies, but it is hardly in the best interest of human users. What we consider overreliance today will be normal daily use for them. Worldviews will be shaped not by knowledge and experience, but by interactions with AI systems.
There’s something fairly dystopian about prioritizing AI literacy while actual literacy is on the decline, disarming future students of the very skills they’d need to keep AI in check. The impression seems to be that if you can teach kids AI, you can offset declines in literacy. After all, why should something like reading comprehension matter if tools provide the comprehension for us through a mediation layer? Hell, why stop there? Why not apply AI to every task that could possibly be outsourced? We are close to creating a world where raw data and experiences never hit us.
The Future Isn’t Now
In their book AI 2041: Ten Visions for Our Future, Kai-Fu Lee and Chen Qiufan tell a story about children who grow up and go through school with companion chatbots that adapt to them and assist them in areas where they struggle. These AI systems are ever-present companions following them through school and through life. The story is meant to have the trappings of utopia, but ends up sounding like a dystopian hellscape. To make matters worse, the story assumes a perfected AI system that doesn’t have all the issues and drawbacks of today’s AI systems.
We continue to make the mistake of treating the AI systems of today as though they are the AI systems of tomorrow, encouraged into hyperstition and thought exercises of, “It doesn’t work, but just imagine if it did!” To say that AI will cure cancer and become the cure for all of humanity’s ills may well turn out to be true at some point. But these accomplishments have yet to come to fruition, and they don’t appear on the horizon either. So, why are we treating these systems as if they’ve already accomplished goals they haven’t? The highly capable tutor/companions of Lee and Qiufan don’t exist, yet we want to apply this non-existent vision to K-12 education as though they do. Even if they did exist, where is all this highly personalized data about your child being stored, and what is being done with it?
Less Capable, More Dependent, and Less Stable
The crux of the issue is that this program will not set kids up for success in an AI world or otherwise. This early exposure will make them less capable, more dependent, and less stable. This curriculum could teach kids all the wrong things, such as that answers can be immediate and simple, and that working out a problem isn’t as important as asking the right questions. We also teach that learning is comfortable. We give the impression that knowing things is not as important as knowing where things are stored. This is all bullshit. Kids can’t summarize their way to knowledge. But, it gets worse.
Children exposed this early never learn how to do things for themselves. They end up outsourcing problems and decisions to AI. Instead of taking feedback on how to solve problems and challenging themselves to learn, they offload the problem to AI, leaving them incapable and lacking confidence in the absence of the technology.
This technology dependence also creeps into their personal lives, meaning going about a typical day becomes unbearable without the ability to mediate it through AI. The technology becomes a source of authority for them and a way to avoid the difficult decisions that teach lessons. It is hard to imagine today the paralysis that will set in when the technology is absent, even for simple decisions like how to respond to a friend’s message or whether to go outside.
Many adults may argue that this is a small price to pay for setting kids up for success in the future. There are two flaws here. First of all, this is a monumental price. Second, using technology more doesn’t automatically mean being better at using it. For AI use, the skills you learn outside of AI’s mediation are exactly the skills that make you better at using it.
We need to focus on teaching kids to use their brains, something I never thought I’d have to say when talking about… school.
This is typically when someone brings up the calculator, insinuating that nobody needs to learn math because it exists. Although I disagree, confusing a calculator with AI technology is a mental mistake. Calculators and AI are far from being similar technologies. A calculator isn’t a generalized technology that can be applied to many problem spaces. A calculator doesn’t provide recommendations, advice, or sycophantic outputs. It won’t tell you who to date or be friends with. Oh, and a calculator is always right, unlike AI.
The hypothetical that gets pitched around is imagining if Einstein or von Neumann had access to AI and all of the wonderful things that would have sprouted from their genius. Maybe. However, I pose a different thought experiment: imagine if Einstein or von Neumann were products of AI education from a very early age, where even inane curiosities were immediately satiated by an oracle. The likely outcome is that nobody would know their names today. We are products of our environments. Remember, there are no happy accidents with AI, only dense data distributions into which everything is shoved. In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
In the K-12 AI education era, Einstein never stares back at the clock tower on the train, because he’s looking down at his phone.
Avoiding Discomfort
Sam Williams from the University of Iowa said, “Now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.” We are looking to introduce this dynamic into K-12, precisely when we want students to grow.
The truth is, knowledge acquisition isn’t comfortable, and students avoid discomfort like the plague. When we use AI to complete assignments, we aren’t challenging ourselves. We aren’t developing our own perspective and forming new connections between concepts. Students find writing uncomfortable and are quick to outsource to AI, but writing truly is thinking. When we write, we are confronted with our thoughts and perspectives, challenging ourselves and forming new insights. One realization with writing is that the more you do it, the better you get. This realization never comes when it’s constantly outsourced to technology.
Using AI for work-related tasks may be helpful, but using AI for education or even life is idiotic. Yet, we continue to make these foundational mental mistakes. This would be like saying that since Taylorism worked for business, why not apply it to daily life? We all know where that leads.
But we also end up robbing students of a sense of accomplishment and fulfillment, of a long-lasting sense of satisfaction, not to mention the ability to focus. And for what? Because we believe that children will need to be non-thinking automatons to have a chance in the future? This theft will have a lasting impact on the mental health of future generations.
We may experience the extinction of the flow state by never allowing people to enter it in the first place. I’ve heard people argue that they’ve entered a flow state using AI. Maybe, but the very nature of using AI to complete tasks likely guarantees that you never enter one. Either people are confused about what a flow state is, or they mistake the illusion of productivity for creativity and flow.
As Ted Chiang mentioned in an article I’ve referenced before, “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Going to the gym isn’t comfortable, but the results are physically and mentally rewarding. The mental health benefits of going to the gym aren’t intuitive. After all, how can running on a treadmill or lifting weights, activities that work out your muscles, benefit your mental state? Yet, it does. There are no firewalls around exercise either. Knowing this doesn’t stop us from making the same mistakes in cognitive areas.
When Playing It Safe Becomes The Norm
Using AI to do things is perceived as safe because if the output is wrong, we can blame the AI, versus having to work out a problem ourselves and potentially being wrong. There’s a blame layer between us and the problem.
Let’s take art, for instance. AI art is safe, unchallenging, and unfulfilling, providing no opportunity to learn about ourselves, others, or the world. And yet, the very fact that it’s safe and easy is what makes it so attractive. Failure can result from the paintbrush, but never the prompt.
Failure can result from the paintbrush, but never the prompt.
The best things in life come from not playing it safe. Taking a chance on a job, moving to a new location, or asking a person out on a date are all activities that aren’t safe, but they can end up being the best decisions we’ve ever made. We need to keep this instinct alive in children.
Lack of Resiliency
The more we rely on AI, the less we question its outputs. The more we use AI and our capabilities atrophy, the less capable we become of questioning the outputs and, hence, the more dependent we become. We end up losing a critical capability when we need it the most, or in the case of early childhood exposure, never develop it in the first place.
Modern generative AI is far from error-free. It makes frequent mistakes and hallucinates. Students must construct the cognitive fitness necessary to operate robustly using a technology that makes these frequent mistakes. This fitness isn’t built on a foundation of the same AI that has these issues.
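To make the scale of that problem concrete, here is a back-of-the-envelope sketch with assumed, hypothetical numbers (and the simplifying assumption that errors are independent): even a tool that is right on most individual answers compounds into unreliability over a session of many answers, which is exactly why robustness can’t rest on the tool itself.

```python
# Back-of-the-envelope illustration with assumed, hypothetical numbers:
# if each AI answer is independently correct with probability p,
# the chance an entire session of n answers contains no errors is p**n.

def chance_session_error_free(p_correct: float, n_answers: int) -> float:
    """Probability that all n answers are correct, assuming independence."""
    return p_correct ** n_answers

for p in (0.99, 0.95, 0.90):
    for n in (10, 20, 50):
        print(f"p={p:.2f}, n={n:>2}: "
              f"{chance_session_error_free(p, n):.1%} chance of zero errors")
```

At an assumed 95% per-answer accuracy, a 20-answer session comes out error-free only about a third of the time, which is the kind of arithmetic a student needs to internalize before leaning on these tools.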
Students also need a foundation and the ability to explore outside AI mediation. This requires both time and foundational courses and concepts. For example, this foundation should include critical thinking, data literacy, and probability and statistics. Early exposure to these concepts with late exposure to AI offers the best chances for students to build this robustness.
From Social to Anti-Social
AI is a fundamentally anti-social technology. From the ground up, we are removing the human and converting it to the non-human. Even social networks are transforming into anti-social networks. With AI’s overuse by children, we teach kids that humans are second-class citizens next to AI. After all, the sales pitch is that AIs are better at everything, so why should children believe otherwise?
Handing kids an oracle to ask questions not only converts a social activity into an anti-social activity but also shifts authority away from humans and onto technology. This shift would still be bad even if the technology were perfected, but it is far worse given the error-prone technology of today.
Young children are quick to anthropomorphize and will form bonds with non-human companions. Although the video of the little girl not wanting to play with the shitty AI gadget is funny, that resistance won’t last when children are surrounded by AI. Kids will switch from actively using their imagination to becoming passive consumers of AI output.
The human retreat has already begun, as kids prefer interactions with friends mediated by a device. But now tech companies want to take this further. This is all happening outside of education, but kids can’t avoid forced interactions with their companion/tutor/friend/bot in the classroom, reinforcing this retreat.
Much of this slide comes from our tendency to oversimplify, not accounting for the bigger picture and the complexities involved. Take, for instance, a common claim that kids ask many questions, and since AIs never tire of answering them, pairing kids with AI is a natural fit. This seems like an almost throwaway point, a gotcha to any potential critic, but people making this point haven’t thought it through.
First of all, asking questions is a social activity. We interact with other humans in different environments, learning far more than the simple answer to our questions. This activity teaches us essential skills, including ones related to non-verbal communication. Humans also don’t answer questions the same way AIs do, often providing additional context and anecdotes that may further aid us in knowledge acquisition and retention.
This act connects us to other people and the world, making us active participants in something bigger rather than passive consumers of an answer. I still remember anecdotes shared by my high school chemistry teacher that stick with me today. We don’t just lose context and perspective with an AI oracle; we lose something human.
When it comes to context, any expert who has asked AI questions about their own topic area has been confronted with incorrect information, or at best a response that makes them think, “I guess that’s technically true, but it’s hardly the whole story.” And this is what we want to make the norm.
Closing The Curiosity Gap
We are told that asking an AI questions makes people more curious, but AI closes the curiosity gap. By getting an instant answer, we satiate our curiosity and move on to the next thing, only digging deeper or exploring further in cases of pure necessity. This reinforces low attention spans, further reducing the ability to focus. At some point, System 2 may become extinct. What kind of world will that create, one of nothing but hot takes and vibes?
AI satisfies a need for quick answers. However, searching for answers in a more traditional way surrounds you with other pieces of valuable context, rich information that leads to new ideas and new understanding. Humans have an evolutionary need for exploration.
When using AI for exploration, you are never exposed to ideas and concepts you don’t want to be exposed to. I don’t think we fully grasp just how much of an impact this selection bias will have on the future.
Sure, there are situations where a quick answer is perfectly fine, mundane things like what time a movie starts or what temperature to set your oven to cook a pie. The mistake here is assuming these situations apply evenly to all problem spaces, especially knowledge creation.
My Recommendations
Despite the many unknowns, we shouldn’t shut the door on new innovations entirely, because we could end up slamming it on genuine solutions. Although it doesn’t exist today, a robust tutoring bot focused on a single purpose and specific subjects could benefit students. The message here isn’t to discard everything but to be cautious, knowing there are tradeoffs and downsides, and to incorporate mitigations.
For a program such as this to be successful, it needs to be well thought out and studied, with a gradual implementation that also considers potential tradeoffs. Without this, you have no way of telling whether you are helping or harming until it’s too late. There is no way to succeed without this step. Beyond this up-front work, I’ll make four other suggestions.
Avoid Early Exposure
Students need plenty of time to develop their brains, not their technology skills. Early exposure should be avoided at all costs. Exposure to this curriculum should happen in high school, preferably in the last two years, not earlier. This is also typically when vocational education programs have been introduced in schools. The gap gives students time to develop skills and experiences outside AI influence and mediation. Kids adapt to technology quickly, so this later exposure will not stunt their capabilities when the tools are introduced.
Create A Prior Solid Foundation
Before introducing the AI curriculum, a solid foundation in various topics should be established, including courses in critical thinking, data literacy, and probability and statistics. These courses and concepts are sorely lacking in K-12 education today, and their introduction is long overdue. Arming students with this foundational knowledge will allow them to question the outputs of these systems and build defenses against cognitive creep.
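As a sketch of the kind of exercise this foundation makes possible (the numbers below are invented for illustration, not drawn from any real system): a basic base-rate calculation shows why a confident-sounding claim of “99% accuracy” can still be wrong about half the time, which is exactly the sort of check a statistically literate student can run on an AI’s output.

```python
# A classic base-rate exercise with invented numbers: a "99% accurate"
# detector applied to something rare. Bayes' rule shows how often a
# positive flag is actually correct.

def prob_true_given_flagged(prevalence: float,
                            sensitivity: float,
                            false_positive_rate: float) -> float:
    """P(actually true | flagged), via Bayes' rule."""
    flagged = (sensitivity * prevalence
               + false_positive_rate * (1.0 - prevalence))
    return sensitivity * prevalence / flagged

# Invented numbers: 1% of cases are real, the detector catches 99% of them,
# and it falsely flags 1% of the rest.
print(f"{prob_true_given_flagged(0.01, 0.99, 0.01):.0%} of flags are real")
# -> roughly 50%, despite the "99% accurate" framing
```

A student who has worked through an exercise like this once is far better equipped to push back on a machine’s confident answer than one who only learned how to prompt it.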
Smart Implementation
The courses should be implemented in isolation, away from other topics; AI shouldn’t be woven into every subject with a tie-in. Although some would argue that an effective AI tutor could help students struggling with certain subjects, these systems have yet to be developed, much less proven effective. In almost all cases, the AI would be used as an oracle, providing answers directly instead of the necessary understanding, and even discomfort, that helps students grow.
Solid Curriculum
The curriculum should focus on challenging students, not giving them answers. Kids often don’t realize when challenges are beneficial to them. AI tools should be viewed purely as tools, not oracles or companions. The curriculum should avoid persona-style usage and teach kids how to think in terms of solutions. Appropriate labs should be constructed that let students explore concepts and define solutions, pulling in AI tools secondarily to complete the tasks and realize a student’s vision. This way, there is a separation between the mental approach and the AI components.
Final Thought
Ultimately, we may end up with anti-social, dependent, and unstable young adults. We take so many skills for granted, skills we don’t realize we developed and honed in school, and now we want to apply technology to optimize these attributes away. We need to give future generations a chance to allow their brains to develop outside of AI mediation. Here’s something to consider.
Imagine an art teacher standing in front of a class. The students aren’t in front of an easel or grasping a pencil, but sitting in front of computers. They aren’t using their hands and tools to create a vision that originates in their minds. Instead, their fingers clack on keyboards, echoing through the classroom as the teacher instructs them to be more descriptive and to offer pleasantries to the machines. Is this really the world we want to immerse children in?
We are moving toward an existence where raw data and experience never hit us as everything becomes mediated. We prefer optimization over expertise. I’m sure the illiterate masses of the Middle Ages felt powerful after leaving a sermon by the literate priest mediating the message of the written word, but that was hardly the best state for individuals. Now we are applying this logic to AI with far-reaching consequences for the everyday life of an entire generation.
In the words of Aldous Huxley, many may mature to “love their servitude,” preferring optimization and rigid structures that take decisions off the table, making things easy, not requiring thought. In Zamyatin’s We, most inhabitants enjoyed living in One State with its rules, schedule, and transparent housing. They were happy to trade free thought and experiences for optimization, comfort, and structure. It needs to be said, over and over again: These are dystopias, not roadmaps.