Humans are social creatures, and friendship and love are relationships that run deep in our history, predating Homo sapiens as a species. We associate these relationships as core features of our humanity, but companies are attempting to change this. Every time a new technology comes along, people try to use it to solve complex social issues that have nothing to do with technology, and with AI, it’s happening again. Would you have a chatbot friend? Would you marry a chatbot? There are companies developing products that hope you will. Welcome to the attempted dehumanization of friendship and love.
Solving Non-Problems
There are few things that I can say for sure, but I will say with certainty that the world won’t be a better place when both friendship and love are simulated, and we treat apps like humans and humans like apps.
When we take a step back, one thing that should be obvious in the current generative AI craze is that solving non-problems is far easier than solving real problems. This makes sense. There’s a low cost of failure in addressing non-problems. Hell, you don’t even need to _solve_ non-problems to be successful. Let’s think about it: it’s not like the world has a shortage of writers, artists, and musicians. However, those specific non-problems are a topic for another day.
Speaking of solving non-problems, rather than using generative AI capabilities for well-suited tasks, we’ve witnessed an abundance of what I call shitty AI gadgets. What makes them “shitty” is the fact that they don’t actually solve a problem. The focus for many is how “cool” the technology is without emphasis on whether it solves a problem or does anything at all.
This joke by @plibin on Twitter sums up what every single one of these gadgets looks like to me.

Shove generative AI into every technological crevice possible and hope that money sprouts. These products are only good for setting fire to VC money.
When AI is Your Friend, You’ve Got No Friends
The latest shitty AI gadget is called Friend. No, not a joke. And apparently, they spent most of their raised money on their domain name. The Friend gadget also exhibits higher levels of cringe than other gadgets. Other gadgets at least pretend to do something useful. Friend is happy to do nothing at all.
A glance at their commercial is all that’s needed to dispel any doubts about its peak-level cringe.
If you want some faith restored in humanity, read the comments. The people writing the comments are human, and they get it—something that the Friend team doesn’t.
Watching the Friend commercial shows just how disconnected these people are from reality. If they are trying to shed conspiracy theories about how they are secretly unfeeling reptilian aliens, they are failing. I mean, what date is going to put up with this? “Oh, what is that around your neck? Yeah… I’m sorry, I just realized I have something else to do.”
Of course, all of this misses the larger point that someone invested in the Friend device wouldn’t be on a date in the first place, nor would they be out enjoying time with “real” friends.
In a world looking to optimize everything, including our personal lives, AI friends make sense. It can be all about us. We’ll never have to listen to them tell us about their problems or need to be a shoulder for them to cry on. We may even enter an era where many people don’t know what true friendship feels like.
However, it’s not just loneliness that would drive someone to AI friends or AI lovers. Part of the problem stems from people wanting sure things. There is no perceived risk, fear of rejection, or potential pain. A chatbot will not reject us or tell us things we don’t want to hear—well, unless we don’t pay the bill. This is a powerful pull that some will find attractive.
Isolating Effects
An AI friend or lover wouldn’t have us out living our best lives in the real world; these companions have an isolating effect. These gadgets provide users with a false sense of companionship and exacerbate the very issues they purport to solve. Rather than going out, we stay home. We play it safe rather than going on a date and taking a chance on love. If gadgets like Friend were to take off, it would be a net negative for health and wellbeing.
An AI friend or lover doesn’t care if we live or die. It doesn’t care if we are happy or sad. Subconsciously, even if we fool ourselves, we know this.
I’ve mentioned that when AI is your friend, you’ve got no friends. I’m not just referring to the uncaring stochastic companion we haul around; it’s also that wearing one makes people not want to interact with us. This further isolates us from the real world. I mean, which of our real-world friends would put up with this?
If I wore the Friend device to a get-together with my actual friends, they would launch a merciless onslaught of insults and fun at my expense, and that’s why they’re my friends. Real friends keep us honest. They don’t let us get full of ourselves, and they don’t just tell us everything we want to hear. This feedback helps us grow and have greater life satisfaction.
Nothing easy is satisfying or worth having. This applies to friendship and love as well. The modern world promises that we don’t need to delay gratification. There’s no sense of investment. Everything needs to be an instantaneous hit of dopamine. Yet very few things in life deliver instant gratification that is anywhere near as satisfying as something we had to wait and work for.
In a recent interview, Eugenia Kuyda, the CEO of Replika (an AI friend company), said, “It’s okay if we end up marrying chatbots.”
Here’s her response to a very good question:
Question: “When we started out this conversation, you said Replika should be a complement to real life, and we’ve gotten all the way to ‘It’s your wife.’ That seems like it’s not a complement to your life if you have an AI spouse. Do you think it’s alright for people to get all the way to ‘I’m married to a chatbot run by a private company on my phone’?”
Kuyda: “I think it’s alright as long as it’s making you happier in the long run. As long as your emotional well-being is improving, you are less lonely, you are happier, you feel more connected to other people, then yes, it’s okay.”
Feel more connected to other people? Really? This is disconnected, disingenuous, or outright stupid. Sure, it could be simple disingenuousness. After all, her job is to hawk her company’s wares. But it should be obvious that being married to a chatbot won’t make us more connected to other people. This situation reminds me of a documentary I watched years ago about people in love with their RealDolls. They’d take them out for drives, sit down for dinner, and watch TV with them, just like another human. You know what the documentary didn’t show? Their friends!
We can highlight this disconnect by examining something simple between real friends, like laughter. Is our LLM-powered friend going to make us laugh? I mean, a real guttural laugh that sticks with us? Or will it try to entertain us with a mindless video it thinks we’ll like, generating a momentary chuckle that gets lost in the din of distraction? These and many more cheap substitutions await us, beating us into submission until we can’t remember the real thing.
More Cringe
With the Friend device, there’s a supreme disconnection from reality, but this isn’t the exception. This is becoming the rule. The Friend gadget is the most obvious incarnation, but this disconnection is everywhere in the AI space. It’s on full display when we hear the AI tech crowd talking about creativity and the creative arts. You can tell these people have never been creative in their lives and understand nothing about art. Not even a little bit.
I can’t remember who said this, but someone commented about this situation, saying that the Silicon Valley crowd is just a bunch of people having fun with their friends. There’s some truth to this. It’s like a Silicon Valley garage band, but instead of music, it’s tech. So, it’s not about art or creativity at all. The point is to make “cool” tech, whether it solves a problem or not. It’s a familiar theme.
However, startups are not the only ones exhibiting this cringe factor and disconnection. Google’s new Gemini video mixes both cringe and dehumanization, all in the name of optimization.
There are so many things wrong with this commercial. All these tech types fail to realize that some things are supposed to have friction. Friction is how we grow and become better. Friction is how we challenge ourselves. Even things like second-guessing and self-reflection are a form of friction. We are optimizing all the wrong things, a topic I’ve covered twice before, in Optimizing Away Human Interactions With AI and Outsourcing Simulated Emotional Connections To Bots.
Now, do you think Sydney would rather get a letter from a little girl who struggled to put her words to paper, leaving every imperfection as evidence of her effort and caring, or from Gemini? Which scenario do you also think would be better for the little girl? The answer is so blatantly obvious, well, obvious to us humans, at least. (I’ll avoid making a second alien joke here.)
Technical Issues
So far, I’ve only discussed the human aspects of technology, but there’s a lot more when considering the technical risks. There’s far too much to cover, but I’ll highlight two. For more information, you can read my post introducing SPAR.
Privacy is one of the obvious issues, because all of that data collected by and shared with our AI friend is valuable. If there’s one thing we’ve learned from recent history, it’s that available data is exploited data, with all of our personal thoughts and interactions monetized and weaponized against us. Even if the startup creating the AI friend application claims to respect your privacy, all bets are off when it gets acquired (possibly specifically for this type of data).
At least the people in the documentary I watched years ago didn’t have to worry about their RealDoll harvesting sensitive data and snitching back to the company.
Perverse Alignment
Can we be sure that our AI friend is aligned with our best interests?
A perverse alignment is when a system is aligned to serve the best interests of the company or organization that created it rather than the users of the system. Such a system has the potential to nudge and push users to do all sorts of things, whether that’s buying products or spending more time on the platform. In the AI friend scenario, spending more time on the platform means less time with real friends.
It may be difficult to identify when a system is aligned like this. It’s not like our AI friend will respond, “You’ve been worried about car insurance. Do you know who has great car insurance? GEICO.” I made the same GEICO joke back in February 2023 about AI-powered search engines. I gotta get some new material.
Loneliness
I don’t mean any of this to discount the loneliness epidemic happening with younger people. Jonathan Haidt covers this epidemic at length and is infinitely more qualified to address it than I am. I’ll give you a hint, though. Do you know what he doesn’t recommend? More technology.
This crisis is, at least in part, fueled by technology. There’s something perverse about layering even more technology to solve a human problem. An old saying about treating the symptoms instead of the cause applies here.
There’s a problem with a device that is basically a super-powered inspirational quotes machine, telling us everything we want to hear. We never get better, we never challenge ourselves, and we never encounter real satisfaction. We get stuck in a loneliness loop, with only momentary relief. If we had an excruciating headache every day, we wouldn’t put up with it by chewing ibuprofen like candy for temporary relief. We’d try to find the cause and address it. This situation is no different.
The AI Religion
Part of the problem is that AI has turned into a religion. I’ve joked about how these devices often resemble communion wafers, but I don’t believe the Catholic Church has had any influence on them. Many people talk about AI in overtly religious terms, attacking those without enough faith and elevating those they believe are prophets. AI has died, AI has risen, AI will come again.
Religions seldom involve questions, at least not questions that have answers, which is perfect for our current AI moment and its hype. We have to take it on faith that things will get better, and that the sermons from AI prophets aren’t merely an attempt to turn prophecy into profit.
Read Ray Kurzweil’s new book The Singularity is Nearer for more religion-related disconnections from reality. I swear, I’ve pulled a muscle in my neck shaking my head at all the misperceptions and misunderstandings contained within the book. But Kurzweil is a prophet in the church of AI, and what I’m saying now is blasphemous. If Kurzweil says something, we’re expected to take it on faith.
When we dig into it, people like Kurzweil, Chalmers, and Clark push a transhumanist vision for humanity that converts us into the Borg, stripping away our humanity and turning us into machines. Resistance will most likely be futile.

What happens when we evolve not to know or have true love and friendship? Will we be better or worse off? Evolving into a machine doesn’t sound appealing to me, but the transhumanist figureheads disagree. They push the perspective that merging with machines will make us superior humans, but it will most likely make us average machines. That’s not a good trade. I’ll expand upon this in a different post.
Don’t fret over my immortal digital soul. I’ve already prayed my five Hail Turings for the day.
Conclusion
As we navigate the sea of innovation porn, let’s not set our course away from humanity. It’s the core features of our humanity that make us unique on this planet, not our processing capabilities. We can have technology that works for us and maintains our humanity. Don’t believe those who tell you it’s a tradeoff. They are selling something.
Also, let’s use LLMs for what they are good for, not for friends or lovers. There are plenty of tasks for which you can apply LLMs to boost efficiency and actually solve problems. Do that. Friendship isn’t a technology problem. Neither is love.
If you are hopeful about the future and about technology but remain skeptical of BS claims and other nonsense, hang in there. More and more people are voicing their opinions, and it’s no longer a lonely hill to stand on.
