Perilous Tech

Risks at the Intersection of Technology and Humanity

Daily AI Overreliance and the Personas of Personal AI

[Image: a human surrounded by computer screens]

As a kid, I had some rather eclectic reading habits. One of the books I read was Ki in Daily Life by Koichi Tohei. I read it in a quest to unify my mind and body. I was a kid. I had no idea what that meant. At the time, I was fascinated by how the human mind could be unlocked and the potential of connecting with the universe through focus and daily practice, something I still struggle to conquer as an adult. I’m not attempting to embellish my level of childhood insight; I was also watching a lot of Jean-Claude Van Damme movies and practicing my splits and high kicks.

What does any of this have to do with technology? Today’s world seems poised to shift from a focus on the mind to a focus outside it. As the technology powering tools like ChatGPT and Claude morphs into more connected personal AI tools, these tools will take on multiple personas. So, what does AI look like in daily life? What personas will AI play? Before we get to that, let’s first examine overreliance.

AI Overreliance

Whenever the risk of overreliance is discussed, it’s typically framed in the context of automation bias, the human tendency to prefer the output of automated systems. Humans using these systems may not question their output, leading to poor decisions, cascading failures, and the amplification of biases. These issues are often discussed in purely technical terms, describing how technical problems can manifest or how a system’s output can harm other people. These are all serious problems, but what often isn’t discussed is what happens to our cognitive abilities when we over-rely on and overuse AI.

This sort of daily overreliance leaves a gaping hole you could drive a truck through because as our capabilities diminish, we are less likely to spot errors and keep the system in check.

Daily Overreliance

Here is a recent article from Microsoft that covers the topic of overreliance. I have some quibbles with this article, but it makes for a good demonstration since it explicitly calls out four basic shapes that overreliance takes:

  • Naive overreliance
  • Rushed overreliance
  • Forced overreliance
  • Motivated overreliance

This breakdown is instructive, and thinking about the topic in this way is beneficial. However, I’d argue that this still primarily focuses on technical aspects and is missing a key category: Daily Overreliance.

Daily overreliance occurs when we use an AI tool throughout our daily lives, or even repeatedly for the same task. It extends across work and personal tasks, and as assistants evolve into personal AI tools, it will soon encompass both.

The more integrated AI becomes in our daily lives, the more we will use these tools for activities we may not consider using them for today. These include deciding who to be friends with, maximizing happiness (whatever that means), planning, communicating, and a whole host of other activities.

Daily overreliance not only leads to the same technical issues covered in other articles but also to cognitive atrophy and a lack of skill development. This overreliance also fuels cognitive illusions, which we’ll cover in the future.

Overreliance Is The Goal

Make no mistake, the risk of overreliance is also the goal of many tech companies developing the technology. Nobody is investing massive amounts of money in AI companies for simple productivity tools or a 20% boost in human efficiency. So, it’s fascinating to observe overreliance being called out as a risk while simultaneously being the goal. What a time to be alive.

AI Is Competing With Us

[Image: a robot arm-wrestling a human]

We compete with AI, even as we use the tools for ourselves. I’ve covered cognitive offloading before and described how we transition from knowing things to knowing where things are stored. In that article, I also mentioned complementary and competitive cognitive artifacts. AI is a universally competitive cognitive artifact.

When we use AI, we feel like we are bending a powerful tool to our will, much like a wizard conjuring spells with a magic wand to make things happen. The prompt we utter is the spell, and the AI tool is the wand. The parallels run deeper than they first appear.

The wizard doesn’t know how the wand works, and if the wand is unavailable, they cannot complete their tasks. Imagine a scenario in which the wand does everything for the wizard. How does the wizard keep the wand in check if they’ve lost their skills or never developed them in the first place?

When children use AI for daily tasks, they may never develop the cognitive skills necessary to think deeply, focus, or reflect, compounding the damage from mobile devices and social media. This is why the rush to shove generative AI into the classroom can have devastating consequences if not thought out or implemented with an actual plan and measurable goals.

Thinking of AI as a competitor instead of a collaborator spawns a different mindset. A competitor may give you bad information. A competitor may want to take something from you. Thinking adversarially brings a healthy skepticism and allows us to erect guardrails around activities we’d like to protect and outputs we may need to check. This is the best of both worlds, allowing us to use AI selectively instead of indiscriminately. So, ponder this the next time one of the “just use AI for everything” people starts running their mouth.

Personal AI Personas

When considering using personal AI tools in daily life, we can envision several personas manifesting. These personas will play various roles in daily life, crossing personal and professional boundaries. They supercharge overreliance risks: by outsourcing cognitive functions to these tools, we become even more dependent on the technology, fueling even more use.

I’ve broken this outsourcing into the following six personas representing roles that personal AI tools will assume in daily life.

  • The Oracle
  • The Recorder
  • The Planner
  • The Creator
  • The Communicator
  • The Companion

Each role represents an outlet for cognitive offloading and contributes to potential cognitive illusions and cognitive atrophy. The most obvious is the illusion of knowledge, but the list of cognitive illusions is a conversation for another day.

In a way, we are outsourcing authority as well, allowing these systems control over our daily lives, perceptions of the world, and even our actions. The more we outsource to personal AI systems, the less we will be able to keep them in check. There are no firewalls around these personas or around the tasks and task types we feed to personal AI systems. This blending of tasks and personas leads to quite a few downsides.

Note: Although I imply there are negative impacts from allowing AI to play these personas, I won’t be diving into those harms here. These impacts manifest from repeated use, and even overuse, of the technology for the role or activity. Being selective about use and application minimizes the impacts and should be the goal, giving us maximum benefit while minimizing the downsides. Also, to keep this post’s word count in check, I’m merely introducing each persona with a brief description rather than diving deeply into it. I may expand on these later.

The Oracle

The Oracle persona manifests when people use AI tools as an all-knowing question-and-answer system. Because the system deceptively appears to hold the representative knowledge of humanity, users are happy to type questions and receive answers, closing the loop on curiosity. However, the questions asked of an AI-based oracle run far deeper than retrieving facts you’ve forgotten, such as the year the song Under Pressure was released.

Take, for example, questions about who you should marry or even who you should be friends with. Answers to such deep questions should come from exploration, not a Q&A system. Of course, these questions won’t be asked in such a straightforward way. They may be combined with The Planner persona to achieve a goal, such as maximizing life happiness or trying to optimize your career. Through these activities, we dehumanize people, turning them into objects to be manipulated rather than other human beings living their own lives with their own thoughts and emotions.

These systems appear to know more and know better than us, so we will inevitably overuse these systems for all sorts of decisions in our daily lives, receiving more answers and questioning even less.

The Recorder

One of our most obvious cognitive limitations is our brain’s capacity for recall. There is only so much we can remember and surface when needed. This limitation is why we set calendar appointments or scribble reminders on sticky notes. Even when we make a purposeful effort to remember things, we can still forget if there is too much information or too much time passes before we need to recall it.

With personal AI systems, even less cognitive effort will be expended on remembering things. We will count on these systems to remember on our behalf. Agents running on our systems will record and transcribe whatever we choose. Meetings, emails, YouTube videos, podcasts, personal conversations, and everything in between will be recorded and available whenever we want to review them. Even if we don’t want to review them, insights will be distilled for us automatically. There will be no reason to be fully present ever again.

The recorder role not only records but, combined with The Oracle persona, also makes sense of the content for us. It may seem like optimization when our personal AI tool spits out a single action item from a one-hour meeting we missed or weren’t paying attention to, but our lack of presence has negative impacts.

We didn’t have a seat at the table and couldn’t influence the direction or demonstrate our value to the project, conversation, or leadership. We weren’t able to build bridges or foster connections with others. We may also take away the wrong idea and context from the meeting. Sure, the full transcript may be available, but if we believe these tools exist to optimize our time, why would we go back and read the transcript or play the full meeting recording? This is a surefire recipe for miscommunication and other issues.

The negative impacts run deep. The less we use our memory, the worse it gets. Socrates was right.

The Planner

We’ll use The Planner persona when we want to set a goal for the system to accomplish on our behalf. The system will use its capabilities and connections to perform all of the planning and tasks necessary to accomplish the goal, setting all of the activities in motion, with our brains doing none of the work.

Humans plan and execute every day without even realizing it. Much of this planning and execution is done subconsciously. For example, if we wanted a bowl of cereal but realized we had no milk, we may formulate a plan to rectify the situation. This plan may include putting on our shoes, grabbing our keys, driving to the store, purchasing the milk, and returning home. We don’t document this plan or map out a strategy, but it is formulated subconsciously in our prefrontal cortex and executed without much thought. But planning isn’t just for simple things like getting milk or considering what to wear for the day. So much of our daily lives contain planning and strategy.

Regarding personal AI, we assign authority to these systems based on our perception of their capabilities, but those perceptions can also be illusions. AI contributes to the illusion of knowing more and better than humans. This assumption isn’t new and even has its own name: automation bias, mentioned earlier, the tendency of people to prefer the output of automated systems even when contradictory information is present. We know humans are flawed, biased, and prone to mistakes, so we trust the output of these automated systems more than our own judgments or the judgments of others.

Extending the Oracle persona, we will use these systems for feedback and direction on all sorts of work-related and personal tasks. We will treat these systems as the authority, assuming they know best, and allow them to make critical and benign decisions on our behalf. This will extend far beyond the typical scenarios people associate with automation bias.

With the advent of personal AI, we will count on these tools to plan just about everything, plotting a course to our goals and mindlessly nudging us in various directions. Although this may seem sound, many will use these tools to plan all sorts of things we don’t use tools to plan today. For example, we may want a personal AI tool to plan a night out with a significant other, or maybe to optimize finding a significant other in the first place.

The Creator

Using AI tools to create things is a common task today. Many use these tools to generate images and write creative content. The creator persona is about much more than just creating images. It’s for when the tool does the work of creation across various use cases, including writing, coding, games, and many others.

To focus on creativity for a moment, anyone who’s ever truly been creative knows that surprise is an important part of creativity. In the book I, Human, Tomas Chamorro-Premuzic says:

Surprise is a fundamental feature of creativity. If you are not acting in unexpected or unpredictable ways, then you are probably not creative.

I think this is true, but I’d also take that a bit further. Many may claim they are surprised at the output of a generative AI tool and that this is the same thing, but it’s not. Being surprised isn’t the same as surprising yourself. Surprising yourself is the primary satisfaction that results from creative endeavors. It can be hard to understand the difference if you’ve never surprised yourself or noticed surprising yourself, but that doesn’t mean there isn’t one.

Ultimately, the creator persona deprives us of creative satisfaction and creates the illusion of creativity. I’ll expand my thoughts on this in the future.

The Communicator

The communicator persona is when we outsource communication between humans to AI tools. We can think of this as something as simple as using AI to construct an email or something more complex like creating a bot with our voice to talk with our parents so we don’t have to. It may seem like there aren’t any downsides to the communicator persona, but there are impacts when we outsource these interactions to AI. I’ve written about this previously in how we are optimizing away human interactions with AI.

As communication has moved online and become more asynchronous, we’ve lost touch with some of the subtler aspects of human communication, leading us to feel that communication is more of a burden. In today’s online business world and distributed workforce, communication with other humans has come to be viewed as a task on a checklist.

This is why handling our inbox is one of the touted examples for these AI systems: automatically prioritizing messages and responding on our behalf. The human aspect of the communication is removed, and the task is checked off. But even in the boring world of business communication, the human aspect is still important.

When we outsource communications to automation, we miss opportunities to build relationships and make our voices and opinions heard in critical contexts, which erodes trust and our perceived importance. Suppose it came time for a workforce reduction. Would we let go of the person who provided valuable feedback and engaged in communication, or the one who outsourced responses to a bot and couldn’t be bothered to reply personally?

More importantly, we miss opportunities to connect with our fellow humans and build relationships with them, opting to treat others as tasks or objects that need to be manipulated. When we let our communication skills atrophy, a whole host of uniquely human qualities disappear, transforming us into machines.

The Companion

The Companion persona is when the AI tool acts as a friend or romantic partner. The Companion persona isn’t part of the future state of technology. It’s happening today. Startups like Friend, Character.ai, Replika, and many others are pushing this use, sometimes with devastating consequences. These companies are even marketed with straight-up bullshit.

[Image: marketing claims that AI companions have souls]

That’s right, a soul. Our chatbot has a soul and a deep connection to us, yet it doesn’t care whether we live or die. I’ve written about this nonsense previously, so I won’t go deeper into it here.

As personal AI tools become more part of our daily lives, more people will begin to feel a connection with them, mistaking the interactions for meaning. This will fuel the illusion of companionship and lead to more devastating consequences for our mental health and humanity.

Cognitive Illusions

Cognitive illusions manifest from the overuse of these tools in the mentioned personas. These illusions cause a wide range of negative impacts on our health and wellness, as well as our cognitive abilities.

I won’t cover the illusions created by these personas in-depth, but here are some highlights.

  • Illusion of Knowledge
  • Illusion of Capability
  • Illusion of Memory
  • Illusion of Agency/Control
  • Illusion of Presence
  • Illusion of Creativity
  • Illusion of Certainty
  • Illusion of Companionship

Conclusion

In the next few years, these tools will be pushed closer and closer to us in a quest for profitability. The known flaws in this technology will not all be fixed, but even if they were, that wouldn’t be the extent of the harm. This is why I created SPAR to frame the conversation around personal AI safety.

However, this article covers harms that extend beyond the technical issues and makes them personal. We must be selective in using these systems and draw firewalls around the tasks and activities we want to protect, an increasingly difficult job in a world where we prefer the easy button.
