There is a current push to cram AI into every conceivable corner of our lives, like shoving the fifteenth clown into the clown car at the circus. This is a direct result of needing to show investors that the monumental amount of cash being chucked into the furnace is paying off. Consequently, one of the goals is to put this technology ever closer to us, giving it hooks into our daily lives in the hope that it becomes indispensable and even addictive.
Often, when someone talks about AI being a threat to humanity, it evokes visions of The Terminator or scenarios of bombing data centers to prevent the spread of evil AI (as if that would help). I don’t take these p(doom) scenarios seriously. However, if we are not careful, I think AI poses an existential risk to our humanity, which is different.
As this technology improves, becomes more reliable, and works its way into our daily lives, playing the role of assistant, companion, and possibly lover, harm will undoubtedly manifest. In this article, I introduce four high-level buckets for thinking about these harms and discuss something I call a cognitive firewall for protecting the aspects of ourselves we value most.
The conversation about AI’s impact is almost universally focused on what we do and how we do it; almost nothing is said about its impact on who we are and what it does to us. The latter is my primary focus and what many of the articles on this site attempt to address. To be clear, when I use the term “personal AI,” I’m not referring to tools like ChatGPT or Claude. I’m referring to the next iteration of these tools: more connected and ever-present.
The Assistant and The Companion
The AI technology being developed isn’t constrained to a single task or activity. It can be both an assistant and a companion, and since it can be both, it will be both. I’ve defined six primary personas that personal AI tools will adopt in daily life.
- The Oracle
- The Recorder
- The Planner
- The Creator
- The Communicator
- The Companion
Given the breadth of functionality these personas supply, daily overreliance on personal AI is bound to happen.
In my previous article, I covered why tech companies will embrace this shift but didn’t speak to the direct negative impact on humans. Most of the time, negative impacts are framed around what happens when the system is wrong: the product tells us to do something dangerous, like eating a poisonous mushroom, or convinces us to self-harm. With personal AI tools, however, the harm extends beyond issues of accuracy. A perfectly functioning tool or product can still produce harm. Let’s take a look at that now.
Negative Human Impacts
As I alluded to in the intro, cognitive outsourcing and overreliance on these tools carry negative human impacts. I lump these impacts into four high-level categories I call The Four D’s: Dependence, Dehumanization, Devaluation, and Disconnection. All four are driven by cognitive outsourcing and the cognitive illusions it creates.
If we are dependent, we are vulnerable. If we strip away fundamental aspects of our humanity, we dehumanize others and ourselves. If we devalue, we rob ourselves of joy and satisfaction. If we are disconnected, we are unaware. There are no firewalls between the Four D’s, so some activities cross all four.
Dependence
Dependence is the core critical harm these systems pose, and it cuts the widest path, because the actions we depend on the tool to perform will be both task-oriented and emotion-oriented. Dependence leads to cognitive and emotional atrophy. Today, we aren’t considering how overuse of this technology rewires our brains. This rewiring certainly didn’t start with personal AI; years ago, people were already noticing effects from a far simpler technology, Google search. And that was a far cry from an advanced, ever-present technology with access to all our data, which is exactly what personal AI tools will have.
Skills and Capabilities Atrophy
Recall Sam Altman saying that he forgot how to work without ChatGPT (he wishes ChatGPT were that good); that’s a real possibility with near-term personal AI systems. Reduced capability is a product of cognitive offloading, something I’ve also covered before when discussing human augmentation.
Constant outsourcing to technology reduces our capabilities. Take gaming as an example. Say our companion is always with us, and we use it to assist us in playing video games, navigating worlds, and solving puzzles. We come to rely on it. Microsoft’s product is called “Copilot,” after all. In the Personas of Personal AI context, this would be exercising The Oracle and The Planner. With this outsourcing, though, we may forget how to explore a game world or solve puzzles unassisted, and children may never develop these skills in the first place. In this example, it’s a video game, but the same holds true for all kinds of human activities.
Emotional Atrophy
The atrophy induced by constant outsourcing to personal AI extends beyond skills and capabilities to our emotional capacities. Although it can be hard to imagine, we may lose the ability to connect emotionally with our fellow humans. Some might argue it’s already happening. We may even forget how to love as we use AI systems to plug emotional holes and play the perfect friend, lover, parent, and therapist.
Dehumanization
Dehumanization is a word often used in extreme contexts, associated with the justification of atrocities against other humans, but it’s not always that extreme. Look up the word’s meaning and you’ll find a simple definition: to deprive a person or situation of human qualities, personality, or dignity. It’s a fitting description, since personal AI systems can affect all three.
Humanity is on a collision course with dehumanization as charlatans like Ray Kurzweil pitch their nonsense about uploading our consciousness to computer systems, choosing to become disembodied spirits haunting storage buckets of cloud infrastructure. Unfortunately, Kurzweil is not alone.
There are whole movements, such as transhumanism, posthumanism, and even the e/acc movement, that claim humanity is a dated concept, and we need to evolve into something un-human, something more akin to homo technologicus. You even have people like Elon Musk making the perfectly sane argument that we’ll need to remove our skulls to implant more electrodes to communicate with computers. I’ve challenged these narratives before. Needless to say, the road to utopia is going to be paved with a whole lot of damage from a bunch of shitty half-baked tech.
I mean, what’s the point of having human friends anyway? And isn’t an AI lover preferable to a human one? In both scenarios, the AI companion is far more convenient and configurable. I’m not reaching for some obscure point here; there’s already a push to dehumanize friendship and love.
Dehumanization is often driven by optimization. As we try to optimize everything, we treat humans like apps, processes, or checklists, not giving them the common decency of interacting with them directly. And if you think this is okay because they’re coworkers or gig workers, you might want to think again.
Devaluation
Finding joy in simple things has become far more difficult in our modern world. We are conned into believing every activity in life is go big or go home, a view fueled by influencers and social media, which create an inauthentic lens through which to view reality. With incentives this warped, it will be almost impossible for younger generations to recognize the value of simple things; small, simple things will look like pointless wastes of time. But losing sight of the value of simple things is only the beginning.
Take a glance at any tech influencer’s content or listen to techno-utopians ramble on about the future, and you’ll no doubt hear the pitch that the only way to achieve true happiness and success is through optimization. Optimization is your salvation: Father, Son, and gradient descent.
This warped view obscures the reality that optimization can ruin the value of activities. When every activity is transformed into a sterile checklist with the single goal of being done, we lose sight of the value of these activities and their impact on us.
Writing and art are obvious examples. The result of these activities is a byproduct of the process; the process itself is where the value lives. This seems counterintuitive to non-creatives and hype bros, but with even minimal reflection, it’s not.
Writing is Thinking and Exploration
As I sit here writing this article, I’m an explorer, probing the depths of the topic and my own mind to create something new. As each point appears, I challenge and surprise myself with generative intelligence contained not in a distant data center but in my skull. The very same skull Elon wants me to remove. This inefficiency carries satisfaction and value as I construct new points I hadn’t thought of before. It’s a mistake to think this friction is unnecessary and needs to be removed. The gauntlet of inefficiency imparts discoveries that optimization destroys.
Writing truly is thinking, exploration, and discovery wrapped into one. Generating content is none of these. At best, generating content is a validation activity: instead of gaining the benefits of writing, we are merely checking that the system’s outputs aren’t wrong. Cognitively, these are completely different exercises, nowhere near equal in value.
There are tasks where generating content and validating the output are fine, but we shouldn’t confuse those cases with more meaningful activities, where the value lies in the doing. Sure, I could optimize my writing with generative AI and create 365 blog posts covering every day of the year, but they would be of no value to you or me.
Optimization Removes Value
When we optimize artistic endeavors with AI, we rob ourselves of value and deny ourselves the formation of our own sense of style. This may seem inconsequential and easy to gloss over to the uninitiated, but style becomes part of our identity. No matter how hard we try, we can’t prompt our way to our own style.
When I look back on the art I’ve created, I’m transported back to when I created it. Memories come rushing back, and I’m reminded of my place in the universe and of how I can still surprise myself. There is no surprising yourself with AI; that’s not how AI works in the creative process. To the AI artist: when you are lying on your deathbed, will you reflect fondly on your favorite prompts?
Pretty much every time someone shouts that AI democratizes art, what they really mean is that it devalues it. The great thing about art is that you don’t have to be good at it to enjoy the benefits. You can still explore, surprise yourself, and learn no matter how good you are. This is where the true satisfaction manifests.
We are sold on technical optimization, believing that everything we do should be optimized to the fullest extent. However, technical optimization can ruin the value of meaningful activities. Just look at the comment below.
[Embedded comment screenshot]
This is absolutely not true. He’s either lying through his teeth or a complete idiot. Given the environment, it’s a toss-up. But coming from the guy working to devalue music, it’s no surprise. Unfortunately, he’s not the only one. Just take a look at the job description below.
[Embedded job description screenshot]
Solving real problems with AI is hard; notice how we haven’t cured cancer yet. Solving non-problems, however, is easy, because imitating humans is easy, which is why we have countless AI art generators instead of cancer cures. It’s not as if a lack of art in the world was a problem that needed solving.
We’ve only scratched the surface. We’ve started misjudging the value of a whole range of activities as we recast human inefficiencies as problems to be solved. Even the act of reflection, arguably one of the most valuable activities a human can exercise, has been tainted by AI hype. Many things that appear to be wastes of time or inefficiencies have meaningful value.
This is about the point where the hype bros claim I’m anti-tech. I’m not claiming that technical optimization is bad across the board; there are many areas where it is a tremendous benefit. If we can decrease the time it takes to deliver someone the benefits they need, or stage resources more efficiently after a natural disaster, those are good things, and AI has the potential to make them better. This article, however, is about activities whose value optimization negates, or at least greatly diminishes.
The continued devaluation of meaningful activities damages us and our life satisfaction. Our situation could be better than ever, yet we perceive that everything sucks.
Disconnection
Never in humanity’s history have we been so connected and so disconnected at the same time. Filter bubbles and personal biases warp our information consumption, and our reality, into odd, personalized shapes that rival the work of the most abstract artists. It’s not uncommon for holders of polar-opposite views to point to the same data as evidence for their perspectives.
Even the most disciplined information consumer can’t avoid being disconnected to a certain extent. Our lens is always filtered by algorithms and selection bias in the digital world. There’s too much information for it not to be. We don’t just have a firehose spraying us in the face with information, but countless firehoses blasting us with thousands of pounds of BSI (Bullshit per Square Inch).
Personal AI systems won’t improve this information landscape; they will make it worse as we insulate ourselves from the real world, fueling further disconnection. Using personal AI tools, we’ll be better able to isolate ourselves in an attempt to make the world more predictable and avoid things we don’t like.
Unfortunately, I feel like an old man yelling at a cloud, and the acceleration into disconnection is inevitable. In my defense, at least I know the cloud I’m yelling at is real. Humans have started to prefer simulations to reality, and tech companies are more than happy to oblige. After all, simulations check all the boxes for our current age: They are predictable, convenient, and comfortable.
Cognitive Firewalls and Purposeful Interventions
As a result of the Four D’s, we will be less capable, more dependent, more vulnerable, more prone to manipulation, less aware, unable to connect with others, emotionally inept, and depressed. What a bargain! I came for the capabilities and left with the dope sickness.
Of course, it doesn’t have to be this way, but avoiding this result will be a heavy lift, and unfortunately, the responsibility for defending our humanity falls on us as end users. Each of us has different attributes we value and would like to protect, but regardless, it will take work and effort.
Awareness of these impacts is a step toward mitigation, but awareness alone is hardly enough. Everything is a tradeoff, and being aware of the impacts lets us judge whether the tradeoffs are worth it. That’s the first step.
We’ll have to set up cognitive firewalls and purposeful interventions. By cognitive firewall, I don’t mean a branded piece of technology that uses “cognitive” as a sales pitch for being “smart.” I mean a mental barrier around the cognitive activities we want to protect.
For example, if you are a songwriter and want to protect your songwriting skills, you purposefully avoid AI technology that removes the cognitive effort from that task, maintaining a firewall around the activity. If you value and want to protect your reading and comprehension skills, you purposefully do not use AI to summarize and distill content.
For other activities where we choose to use AI, it may be beneficial to set up some purposeful interventions. For example, if you use AI to generate all of your Python code, then write some code yourself at various intervals instead of generating it. This could be as simple as deciding to write a particular function yourself.
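To make that concrete, here’s a minimal sketch of what such an intervention might look like. The function below is purely hypothetical, just an illustration of the kind of small, self-contained unit you might commit to writing by hand rather than generating:

```python
# A hypothetical "firewall" exercise: a small function deliberately
# written by hand instead of generated. The point is the cognitive
# effort, not the output.

def rolling_average(values: list[float], window: int) -> list[float]:
    """Average each consecutive window-sized slice of `values`."""
    if window <= 0:
        raise ValueError("window must be positive")
    averages = []
    for i in range(len(values) - window + 1):
        averages.append(sum(values[i:i + window]) / window)
    return averages

print(rolling_average([1.0, 2.0, 3.0, 4.0], 2))  # [1.5, 2.5, 3.5]
```

Reasoning through the edge cases yourself, like an empty list or a window longer than the input, is exactly the cognitive effort that generation would have offloaded.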
A word of caution: This approach is far from perfect. We humans are cognitively lazy and prefer shortcuts. The allure of a shortcut is often enough for us to take it. This is what cognitive offloading is all about. So, even if we implement firewalls and interventions, we may still fall victim to the shortcut.
The coming years will test our humanity. Unfortunately, it’s up to us to protect what we value.