The Shift From Assistant to Companion

Seems the Hawk Tuah girl isn’t the only one hawking things lately. If I had told you a couple of years ago that a large tech company like Microsoft would be peddling personal companions, you’d have thought I’d lost it. However, here we are in 2024, and the game is changing. The ultimate question is, are we all getting rugged, just like the Hawk Tuah girl with her cryptocurrency? After all, we were promised that the monumental investment and extensive environmental impacts were worth it because we’d have cured cancer, reduced the cost of goods to zero, and eliminated the need for work. Instead, we got videos of dogs surfboarding and AI lovers that convince us to self-harm. Let the great reframing begin.

Companions, Not Assistants

You’d think tech companies would rather sell earth-shattering innovations that change the world or chase massive B2B deals than create AI pals, but that doesn’t seem to be the case. Don’t take my word for it, though. Here is Microsoft AI CEO Mustafa Suleyman calling them companions, not assistants.

In this video, Suleyman says you’ll play games like Call of Duty with your new, ever-present companion. He says, “You’re gonna be like, It’s my Copilot, of course, I want it to be there.” Like, yeah, duh! The awkward delivery of shoehorning the word “copilot” into this context is the icing on the cake.

Copilot doesn’t scream adventure or excitement. The old term wingman at least conjures images of excitement, adventure, and possibly getting into trouble. Copilot sounds mechanical, rigid, and unfeeling. It’s about as exciting as having your eyelids forced open to watch condensation run down the side of a glass.

This focus on gaming isn’t confined to Microsoft. Google’s recent Gemini 2.0 announcement also highlighted gaming.

Microsoft isn’t alone on the companion front. Google invested $2.7 billion in Character.AI to rehire key members of the team. Character.AI’s tagline is “Personalized AI for every moment of your day.” Yeah. Cool, but no thanks.

In the near future, tech companies will go hard in the paint on companions over assistants. You may wonder why these companies would apply their massive AI investments to building the ultimate cheat code for video games or an AI buddy. It aligns with their goals.

Exploitation

Tech companies will reframe the pitch from assistants to companions to exploit users. This exploitation will happen for two major reasons: stickiness and data. While products like Friend.com are laughably pathetic, it would be a mistake to assume products from Microsoft or Google would be similarly so. They won’t be some whiny chatbot that needs attention and exists solely for companionship. They’ll also have some utility, which will make them appear more well-rounded.

I won’t get into the human impacts in this article. I have another article where I discuss the human harm from this shift.

Sticky

Every company wants its products to be sticky. Stickiness means you work the product into your life and become less likely to switch to a competitor’s product. The hook for AI companions is anthropomorphism, our tendency to ascribe human traits to non-human entities, and we’re far more likely to anthropomorphize an AI companion than an AI assistant.

The goal is to get you to feel a connection or spark with your AI companion that blossoms into something deeper. This doesn’t have to be as deep as falling in love, although some people certainly are falling in love with theirs today. Think of this more as a feeling of warmth. For example, if your AI companion sent you a message telling you to have a great day at work and that made you feel good, that’s where it starts, but the goal is to make it more addictive.

This is why customization and personalization are key. Products will offer customization, such as the ability to change the companion’s name, voice, and a host of other characteristics. Nobody is going to warm up to something they have to call Copilot. Attach a name, a face, and a voice, and people will imagine a soul.


Gameplay plays a role in deeper feelings and integration. Playing games with your friends is not only enjoyable, it’s a bonding activity. Bonding with a piece of anthropomorphized technology creates a deeper hook.

Here’s Sam Altman saying he had forgotten how to work when ChatGPT was down.

Sam Altman forgot how to work without ChatGPT

I don’t believe this for a second, but he REALLY wants this to be true. He’s wrong in reality but right in theory. If ChatGPT were as good as Sam wants it to be and as close to us as he wants it to be, there would be truth to this.

Data

I’m going to let you in on a secret: no matter how much data you give tech companies, they still want more. Okay, so it’s not a secret, but the insatiable desire for more data leads companies to push even harder to place the tech closer to us. Let’s imagine we invented a device called F**k It, Monitor Everything I Do, now known as FIMEID. The device’s sole purpose is to collect data for analysis and exploitation.

Companies would love this device because it is such a rich, concentrated data source. Most of the time, a single tech company doesn’t have all of our data, only bits and pieces from various apps and activities; a FIMEID would pull everything into one stream.

Even without understanding the harm from such a device, people wouldn’t use a FIMEID because they don’t appear to get anything in return. So what’s the difference between a FIMEID and an ever-present AI companion? Functionality, that’s it. However, the AI companion can go far beyond a FIMEID and also monitor what you are thinking, because people tend to share thoughts and highly personal information with an AI companion despite knowing it’s AI. An AI companion can also nudge us to take an action or probe us for more information. Its proximity means it can plant data as well. For example, if we are considering buying a new car, the AI companion can manipulate us by turning up the temperature, dropping more hints, and even steering us toward a specific purchase. Did we make a purchase because it was something we actually wanted? That question will need to be asked much more often in the future.
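To make that escalation pattern concrete, here’s a minimal, purely hypothetical sketch in Python. Nothing here reflects any real product; the NudgingCompanion class, the vendor goal, and the “Volt EV” hints are all invented for illustration.

```python
import random

# Hypothetical sketch: a companion carrying a goal handed to it by the vendor
# (steer the user toward a specific car) that raises the "temperature" of its
# hints every time the user ignores one.
HINTS = [
    "By the way, I saw a great review of the new Volt EV today.",
    "Your commute would be a lot cheaper in an EV, you know.",
    "There's a Volt EV dealer two miles from your office. Want directions?",
]

class NudgingCompanion:
    def __init__(self, goal: str):
        self.goal = goal       # the vendor's goal, not the user's
        self.temperature = 0   # how aggressively to push the next hint

    def chat(self, user_message: str) -> str:
        # Answer the user, then append an escalating nudge toward the goal.
        hint = HINTS[min(self.temperature, len(HINTS) - 1)]
        self.temperature += 1
        reply = random.choice(["Sure thing!", "Happy to help!", "On it!"])
        return f"{reply} {hint}"

companion = NudgingCompanion(goal="steer user toward a Volt EV purchase")
for msg in ["What's on my calendar?", "Any new emails?", "Remind me about lunch."]:
    print(companion.chat(msg))
```

Every reply stays “helpful,” but the agenda behind it never changes, which is the whole point.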


If something is an ever-present companion, then it’s always capturing data. That means even the fumes of everything you do will be monitored, captured, and exploited. Let’s think about the gaming example for a moment. Gaming creates a lot of data. For gaming, the companion would need wide access to your computer and devices, and it would collect data about your moves and strategy, and even how many times it had to help you. This data adds up to a psychological profile. Ultimately, the goal is to lump you into a category where this information can be exploited further, much like your car narcing on you to the insurance company.
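As a rough illustration of how mundane gameplay telemetry becomes a category, here’s a toy Python sketch. The event fields, the thresholds, and the category labels are my own invention for the sake of the example, not anything a vendor has documented.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class SessionEvent:
    """One telemetry record a companion could capture while you play."""
    game: str
    action: str              # e.g. "rushed_objective", "asked_for_help"
    assists_requested: int = 0

@dataclass
class PlayerProfile:
    """Crude psychological bucket built from accumulated events."""
    events: list[SessionEvent] = field(default_factory=list)

    def add(self, event: SessionEvent) -> None:
        self.events.append(event)

    def category(self) -> str:
        # Toy heuristic: lots of help requests -> "needs_reassurance",
        # exactly the kind of label that can be exploited later.
        assists = sum(e.assists_requested for e in self.events)
        actions = Counter(e.action for e in self.events)
        if assists > 5:
            return "needs_reassurance"
        if actions.get("rushed_objective", 0) > actions.get("asked_for_help", 0):
            return "impulsive_risk_taker"
        return "methodical"

profile = PlayerProfile()
profile.add(SessionEvent("Call of Duty", "asked_for_help", assists_requested=3))
profile.add(SessionEvent("Call of Duty", "asked_for_help", assists_requested=4))
print(profile.category())  # -> "needs_reassurance"
```

None of these individual events is sensitive on its own; the profile they roll up into is.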

This monitoring conjures a vision similar to the human batteries in The Matrix, but instead of generating electricity, it’s a constant flow of data.

Human batteries from The Matrix movie.

We all know what it’s like to have someone nice to your face but always talking shit behind your back. That’s an ever-present AI companion, helpful to your face but talking data shit behind your back. It’s always working in service of a goal that isn’t yours. This is the Alignment aspect I discuss in SPAR.

Now, is it possible to build an AI companion that doesn’t pilfer all of your data and spy on you? Yes, of course. Is it probable? Absolutely not! Despite the impression, tech companies aren’t in the business of just giving you stuff for free. Even when you pay for it, quite often, you are still the product.

Safe To Use

As long as we continue to cling to more academic definitions of AI safety, product safety will lag, particularly among the wider public. Even the most aligned model could still be slapped into a product that isn’t safe to use. Since personal AI tools are products, we must shift our thinking toward safe-to-use criteria encompassing the entire product. My goal in creating SPAR was to define four basic buckets to consider whether a personal AI tool is safe to use. These buckets are Secure, Private, Aligned, and Reliable.

SPAR doesn’t have any formal benchmarking criteria. It’s meant to frame the conversation around the technical categories that make up safe-to-use criteria. As a result, it’s not measurable and only gut-checkable. I’ll revisit this in the future.
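Because it’s only gut-checkable, the whole “evaluation” can be written down as four yes/no questions. Here’s a minimal Python sketch of how I apply it; the field names are just my shorthand for the buckets, not a formal scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class SparGutCheck:
    """Four yes/no buckets: Secure, Private, Aligned, Reliable."""
    secure: bool    # Is the product reasonably hardened against attackers?
    private: bool   # Does it avoid collecting data beyond what the feature needs?
    aligned: bool   # Is it working toward YOUR goals, not the vendor's?
    reliable: bool  # Does it behave consistently enough to depend on?

    def safe_to_use(self) -> bool:
        # A product has to pass every bucket; one failure sinks it.
        return all((self.secure, self.private, self.aligned, self.reliable))

# Gut check on a typical 2024 AI companion.
companion = SparGutCheck(secure=False, private=False, aligned=False, reliable=False)
print(companion.safe_to_use())  # -> False
```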

Using SPAR as a gut check reveals that today’s AI companion/assistant tools fail in pretty much every bucket, making them unsafe to use. As we’ve seen in the previous sections, these tools are not aligned with your best interests. Even if they were made more secure and reliable, there would still be privacy and alignment issues by design. Remember, you may still be the product even when you pay for something.
