Perilous Tech

Occasional thoughts on technology and social issues

Whether we want them or not, chatbots are coming to search engines. Google announced Bard, and Microsoft integrated ChatGPT-style functionality powered by OpenAI's GPT-4 into Bing. Well, at least people are talking about Bing again, which hasn't happened much since its launch.

All jokes aside, this raises the question: is search really ripe for disruption? Many in the tech world seem to think so, or at least hope so. My guess is it could be. For a long time, search has been optimized to put more eyes on more ads. The question is whether the current crop of chatbots is the answer.

Implementations

As with most questions about new technology, the answer depends on the implementation. I haven't thought much about search for quite some time, and as a researcher, I admit that my usage of search engines may differ quite a bit from that of an average user. But, logically, it seems that when people use search engines, they are looking for answers to simple questions or information on current events.

If you look at the trending searches on something like Bing, all of them are current events.

Trends on Bing Search

When it comes to current events, it will be interesting to see what impact, if any, chat has in this space. The current crop of chatbots like ChatGPT were trained on data with a 2021 cutoff, so a breaking news story obviously won't be in their memory.

The search engines also seem to be exposing additional functionality in much the same way the ChatGPT demo did, which opens up some interesting possibilities and problems. For example, people may look up medical information and get disinformation, or ask for advice in general and get bad advice. People may form different relationships with search engines than they have had previously, and this isn't always possible to anticipate ahead of time.

Just the facts, occasionally.

Sometimes you just want the answer to a question. What's the circumference of the earth? What's the Tom Hanks movie where he made a wish and transformed from a kid into an adult? Was William Shakespeare a real person? In these cases, a chatbot with accurate information could be beneficial and preferable to traditional search. Even in more complex scenarios where you just want an answer, you wouldn't be bogged down by additional content or need to visit different sites to confirm information. This would be great in a perfect world.

But…

We all know the current crop of language models tends to hallucinate facts; examples are all over the web. How this is handled will largely dictate whether these features are useful or useless. I personally don't think we are anywhere near having a universal knowledge machine that falls within the bounds of acceptable failure, and that seems to be playing out in this scenario: Alphabet lost $100 billion in market value as its shares tanked after Bard hallucinated facts.

Guardrails

If we conduct a thought experiment where we have a perfect knowledge system, there are still plenty of problems to think about. Consider more controversial subjects, or even topics that haven't been settled. How will these chatbots handle those situations? How will they boil a complex topic down to a simple response? There's been a heavy-handed approach to the guardrails on these topics, so much so that it borders on the absurd. It's important as a society that we address uncomfortable topics and come to a resolution. It's not as though we've settled everything there is to settle and we can all go home.

My guess is that heavy-handed guardrails will remain in place in the spirit of “safety.” As a matter of fact, it’s already happening as Microsoft’s chatbot refuses to write a cover letter for someone saying it would be unfair to other candidates. So, the most useful features may be out of reach of the people who need them most.

Unfortunately, this filtering will lead to a lot of technology blame as people lead with their biases. We’ve seen this game play out on social media.

Nudges and Forgetfulness

The obvious failures in these systems will continue to be identified, but what about the not-so-obvious failures? What happens when something disappears and we don't notice? We like to think historical events are seared into our consciousness, or that the Internet never forgets, but it does. Imagine what would happen if Google suddenly stopped returning results for a historical event. Would that event, in effect, disappear? Probably, for a great many people anyway, depending on factors such as the event's cultural importance and how far it has receded from living memory.

This removal of events could be purposeful, such as Chinese search engines not returning results for the famous “Tank Man” image in Tiananmen Square. Baidu even took it a step further and made sure their image-generating software, a competitor to DALL-E, wouldn’t generate a version of the photo either.

What's even scarier than purposeful manipulation is the system accidentally forgetting things. Events could disappear and be lost from collective memory. Equally bad, the system could hallucinate "facts" around an event, changing its focus. What would the implications be if this happened to a world event as impactful as the Holocaust? Maybe it wouldn't disappear completely, but the facts around the event would change. What if, fifty years from now, the memorials were the only reminder? What if we started to question why we built them in the first place? This absolutely terrifies me, and it should terrify you.

For my previous example, I chose the most extreme case to illustrate a point. It's unlikely that an event such as the Holocaust would be lost to history, but getting the facts wrong is certainly possible, and often the facts shape our impression of the event. This is a conspiracy theorist's dream.

We need to be careful that, in the rush to implement a technology, we don't lose or manipulate things we actually care about. Could these systems nudge us into forgetting or not caring the way we should? The answer is yes. Will they? That remains to be seen.

New Perverse Incentives

Chat-based knowledge systems make quite a bit of sense for search engine companies, which are, above all, data collectors. There's an old saying that goes something like this: "People lie to their friends, but nobody lies to Google." Well, nobody is going to lie to a chatbot either, and the information collected is going to be a gold mine of personal data. This is a power-up for the current crop of perverse incentives.

I made the following joke on social media:

[chatbot enabled search engine] What’s the best burger place in town?

[response] Billy’s Burger Joint has the highest rating. Oh, by the way, did you know you could save hundreds by switching to GEICO?
It’s a joke with some truth to it. There’s no way to ad-block that.

There will certainly be hands on the scale nudging people in particular directions. Sure, search engines can manipulate results today, but delivering an answer from what looks like a knowledge base holds more of a user's attention and carries an air of both consensus and authority. Imagine the system giving dangerous advice, like throwing batteries into the ocean, or the GEICO pitch being far sneakier and woven into a recommendation.

Bad For Privacy

Regardless of perspective on usefulness, we can all agree that chatbot integration in search engines is very bad for privacy. This comes in an environment where logs will face far greater scrutiny from human eyes, at least in the short term, as companies analyze and improve their implementations. Richer forms of personal data, even thoughts, will be collected, stored, and analyzed with the intent of weaponizing this information.

Are we getting features we didn’t ask for?

I don't necessarily blame the tech companies here. Sometimes with innovation, you don't know you want something until you see it; that's what Silicon Valley is hoping for. Only time will tell whether these features truly enhance the search experience, prove useless, or turn it into a nightmare. At the moment, it seems to be leaning toward the latter.

Regardless of whether we asked for these things in search, they are certainly coming to other products. 2023 is going to bring a lot of features we didn't ask for, because these companies need to justify their investments.

Microsoft announced that it is bringing ChatGPT features to Microsoft Teams. This is a head-shaker for me. I have to restart Teams up to five times a day due to issues, yet it's getting even more features. Microsoft Teams now resembles a big-box-store PC that comes loaded with bloatware.

All of this rush to put technology with known issues into production systems reminds me of an Arthur C. Clarke short story called "Superiority." In the story, a technologically superior side is defeated by a weaker one because of its willingness to discard old technology without having perfected the new. Just another example of how fiction can inform reality. In the future, will fiction be our reality?
