Perilous Tech

Risks at the Intersection of Technology and Humanity

Something that may have gone unnoticed in recent months is a brewing backlash against physical AI-powered devices. The most recent example was a crowd attacking a Waymo vehicle in San Francisco, but it is far from the only one.

Article about the destruction of a self-driving car

I started noticing this trend with AI-powered food delivery vehicles. That was a bit odd, since these machines seemed pretty good at failing all on their own: often found tipped over, heading in the wrong direction, and, in one case, even committing a hit-and-run.

Article about a robot hit and run

Although many would write this cheering crowd off, envisioning Enoch in the hands of the destroyers, that would be a mistake. Those of us working in tech can roll our eyes and dismiss outlandish reporting and other nonsense, but think about the countless people who have a steady stream of this clogging their newsfeeds. There is an underlying mood to these activities that cuts much deeper than tech hate or a replay of past reactions to technological innovation. In this post, we explore what’s bubbling beneath the surface.

Quick Update

I haven’t had much time to write lately. This isn’t for lack of material; there’s been a firehose of things to write about. I’ve been working on something much, much longer than blog posts, which has consumed quite a bit of my time. Also, you can check out my post on something I call the AI Solutions Risk Gap on the Modern CISO blog, where I break down what really matters with AI risk and give leaders some topics for consideration.

Why The Backlash?

I believe what we see here is the beginning of something bubbling to the surface. This is the inevitable outcome when hype meets uncertainty. AI hype is putting all of humanity on notice, and humanity notices.


So, you may wonder why people attack self-driving cars or delivery robots. It’s because these are the physical manifestations of AI in the real world. After all, it’s kind of hard to punch ChatGPT in the face. These devices are symbols of a future that doesn’t need humans at all, a future that erases humanity from the equation. It’s a mistake to attribute this to Luddism or tech hate.

First of all, anyone invoking the Luddites should read Brian Merchant’s book Blood in the Machine. Second, the Luddites’ concerns about tech affected their industry and the social and political environment surrounding it. It’s an entirely different scenario when discussing a large swath of humanity in multiple industry verticals and positions. Unfortunately, this uncertainty is primarily driven by exaggerated news reports and speculation.

Take, for example, this gem from Yahoo Finance, which shows few job cuts attributed to AI but speculates that people are lying, despite direct quotes to the contrary. There are also framing issues and more to critique in the article, but most people won’t notice any of this. All they’ll see is that, once again, AI is coming for your job.

People sense that the technology isn’t as good as it claims to be, yet they continually see reporting to the contrary. They also see the launch of what I refer to as shitty AI gadgets, like the Humane Pin and the Rabbit, two devices that investors and the media apparently love, but the rest of the world, not so much.

AI is now being shoved down our throats in absolutely everything, whether we want it or not. Even Mozilla is scaling back to focus on adding AI to beloved Firefox, despite the fact that absolutely no Firefox user actually wants it. Microsoft is cramming it into every corner of the Windows operating system. Deep AI integration is bad for both security and privacy, and despite this being known, the push continues. Nothing is sacred anymore.

To pile on, people are being told their jobs are in danger. They are, though not from super-capable AI but from overzealous business leaders who hope the tech catches up before they have to backfill positions or rehire the people they let go. This is despite underwhelming performance whenever they demo products or launch experiments.

There is no doubt that, in general, AI technologies will continue to make progress, solve problems, and become more capable. We will even get to AGI. But in the short term, we are being sold a bill of goods before these companies even get the technology working, much less working effectively. It’s like thinking you’re buying a Ferrari, but when you take delivery, it’s a wooden go-kart with wet paint and the word Ferrari on the back.


You even have people like Sam Altman telling us ChatGPT will evolve in uncomfortable ways, pushing this technology further into your personal life with far more access to your data. No wonder people protested outside OpenAI’s offices. Give us more of your data so we can replace... I mean, help you. The reality may not be so cut and dried, but that’s what’s in people’s heads, and they don’t like it.

Tech companies hope to employ their standard brute force playbook and steamroll through the problems, but I think it’s far more challenging this time. The AI field, in general, will bring us a lot of advancements. LLMs are undoubtedly useful for some tasks but remain overhyped and won’t get us to AGI. LLMs are the Diet Coke of AGI. Just one calorie is not nearly enough.

Human Manipulation May Win The Day

If all this weren’t depressing enough, there’s one thing we know for certain: humans are easily manipulated. We can reliably reproduce this result. Companies will start to employ more manipulation techniques to avoid larger issues and ease adoption.

These techniques can be subtle and often go unnoticed. For example, have you noticed how responses are displayed while using ChatGPT? The text appears across the screen as though someone on the other end is typing back to you. This makes it feel more human.
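The effect is easy to reproduce. Here’s a minimal sketch in Python of a typewriter-style display; the function name and delay value are my own illustration, not how ChatGPT’s interface is actually built (real chat UIs stream tokens from a server), but the pacing serves the same humanizing purpose:

```python
import sys
import time

def typewriter(text: str, delay: float = 0.03) -> None:
    """Display text one character at a time, mimicking a human typing.

    The full response is already available before display begins; the
    pacing is purely presentational, which is what makes it a nudge
    rather than a technical necessity.
    """
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()  # show each character immediately
        time.sleep(delay)
    sys.stdout.write("\n")

typewriter("Sure, I can help with that.", delay=0.0)
```

The content is identical either way; only the presentation changes how the exchange feels.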

I remember reading an article years ago about home assistant robots that were in development and how people didn’t like them. Then, the developers projected simple facial expressions on the robot’s face, and people warmed up to them. They were the same product that now had a simple face, with no further capabilities added.

To take this further, look at the image I used as the featured image for this post. It may make you feel sorry for the robot, despite the scene being imaginary and completely manufactured. The robot never existed, the human never existed, and the scenario never happened. Yet we still can’t help feeling sorry for the poor robot, even though it may have been a homicidal, mass-murdering robot whose sole purpose was to kill as many people as possible.

So, if we apply subtle manipulation to the current situation, imagine the delivery robot having a statement printed on it that says, “If you see me in trouble, please help me.” This is a statement from a piece of technology asking you, the human, for help. Since most people help when asked, people may be likely to stand up a tipped-over device and less likely to kick or destroy a device requesting help.

Or consider a wilder scenario: projecting a frowny face on the windows when a car is attacked, along with a voice that says, “Stop, you’re hurting me.” These techniques may reduce the number of incidents by manipulating the humans who come in contact with the technology through techno-social engineering.


Our world is already filled with priming, subtle manipulations, and nudges. Companies building this technology won’t find ways to make the situation more equitable for humans; honestly, that’s not their job. They will, however, find ways to manipulate us into believing it’s in our best interest, to ease adoption, and to minimize backlash. Anthropomorphism and other human manipulation techniques will be employed to serve the company’s goals. This is something we should all be concerned about.

AI man makes claims

One example of manipulation is this article. Notice the mental tricks Sam Altman employs. By claiming AI is dangerous in this way, he’s humblebragging about its incredible capability. Claiming to want regulation makes him appear reasonable and concerned. He gets to play the hero and the victim at the same time. It’s a lot less genuine when you realize this is a push toward regulatory capture. I’m sure there’s an Onion article in here somewhere, like “Man Creating AI Says It’s Dangerous And Wishes There Was A Way to Stop Himself.”

Business Leaders

Business leaders play a critical role here and need to be more skeptical of claimed advances in the AI space. When putting pressure on internal developers, they need to understand that the biggest companies in the world are struggling to operationalize generative AI, so it’s reasonable to assume you’ll face challenges as well.

Business leaders also need to be far more critical of vendor AI claims. Keep in mind that demos are staged around known variables chosen for sales meetings. These situations don’t match your organization or the unique data and challenges you’ll encounter. When evaluating a product, insist that it be evaluated on your data, against problems you actually encounter. Also, ask the vendor about challenges you’ll face, as well as things their tooling doesn’t do well. If you don’t get good answers, run as fast as you can in the opposite direction.

A common response I hear is, “Why would they make up stuff about their products?” This is typically when I spit out my drink. Dig in and verify claims. Just because a product works in one environment doesn’t mean it will work in yours.

Conclusion

Although we all love to rage against the machine, the problem is that we are all part of it. In the near future, we’ll start to see more applications of techno-social engineering. We also need to be far more critical of the news stories we consume. There’s a deluge of junk research and sensational reporting out there. Staying level-headed and asking the right questions can help keep you grounded in reality.

Update

I wrote this article before seeing Brian Merchant’s piece, which you can read here: https://www.bloodinthemachine.com/p/torching-the-google-car-why-the-growing. He digs a bit deeper into the self-driving vehicle aspect, so we have a similar theme but a different focus. It’s well worth the read. I also learned from that article that people were destroying e-scooters as well.
