The tidal wave of information on AI use smashes the shoreline daily, and nearly all of it is relentlessly positive. News stories, analyst reports, and anecdotes all lead you to believe that you have already been left in the dust, no matter how advanced you are. Your competitors are smoking you, and everyone is using AI for everything successfully except YOU. This is the massive headwind those of us pushing back find ourselves leaning into, constantly bombarded with news stories and analyst reports, all in service of telling us we are mistaken. A congregation was sent to consult the Oracle of Gartner, and your perspectives have been found wanting.
In the space we refer to as reality, what we think we know about AI usage is wrong. So, how did we get here? How have we become so misinformed? The answer is pretty simple: humans. Okay, well, more specifically, surveys and interviews.
Surveys and Interviews
It’s long been known that survey data is only slightly more valuable than garbage, but when it comes to AI, survey data can be a fully engulfed dumpster fire. There are several reasons for this, but the primary reason this is so bad in the AI space is that nobody wants to look stupid or appear behind the curve. So when the analyst, survey taker, or journalist calls, people start parroting.
Instead of responding with observations they’ve made or activities they are actually doing, they respond with things they’ve heard, articles they’ve read, experiments they hope will work, and a host of other things that aren’t actual activities. In effect, people are reporting their vibes. This disconnect opens a widening chasm with reality. Since surveys and interviews are the primary methods for collecting this type of usage data, that doesn’t bode well for determining realities on the ground. With the hype turned up to 11, it’s a red flag when your survey results confirm a 10.
I’ve pointed out this parroting vs. observation issue in my presentations at various conferences for the past couple of years. Although this parroting makes for some wildly comical analyst reports and news stories, it’s rough if you’re trying to make decisions based on them, or worse, when your boss expects you to produce a magic wand and summon the guardians of innovation because you are being left in the dust.
A few days ago, I read an article from the Ludic blog making the rounds that contained the following image.

This is an obvious red flag, and the author points this out in much more eloquent and spicy language. We’ve long known that most AI/ML/DL projects don’t make it into production, but all of a sudden, LLMs come along, and 92% of companies are finding great success. It’s not real. Speaking of 92%…
GitHub reported last year that 92% of US-based developers are already using AI coding tools. The gut reaction is that this feels wrong, but hey, it must be true if the data confirms it, right? So, let’s do a thought experiment. Imagine standing in the frozen dessert section of the grocery store, asking passing shoppers if they like ice cream. Now imagine asking only the people buying ice cream if they like it. What if you only asked two people, or five, or ten?
When it comes to usage data, what does “using” mean? What is the definition put forth in the survey? What is the makeup of the population? Most importantly, what do they define as “AI”? All of this matters, and it doesn’t take much imagination to realize how incredibly biased survey data can be. The flames are further fanned by the illusion that models have more capabilities than they do and companies faking demos.
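The ice-cream thought experiment maps directly onto selection bias. As a rough sketch (the population size, percentages, and sampling rates below are invented purely for illustration), here is how dramatically the sampling frame alone can move a survey result:

```python
import random

random.seed(0)

# Invented population for illustration: 60% of shoppers like ice cream.
population = [random.random() < 0.60 for _ in range(100_000)]

# Unbiased frame: survey shoppers at random.
unbiased = random.sample(population, 1_000)

# Biased frame: survey only people already buying ice cream
# (everyone who likes it, plus a few buying it for someone else).
buyers = [likes for likes in population if likes or random.random() < 0.05]
biased = random.sample(buyers, 1_000)

print(f"random shoppers:  {sum(unbiased) / len(unbiased):.0%} like ice cream")
print(f"ice-cream buyers: {sum(biased) / len(biased):.0%} like ice cream")
```

Same underlying population, two very different headlines. A survey that reaches developers through AI tooling channels, or that counts anyone who volunteers for an AI questionnaire, has the same structure as polling the ice-cream aisle.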
For a deeper response to some of the common points people make, read the article I mentioned. I have a few quibbles with its content, but all in all, it’s a solid read, and the spicy language makes it all the better.
In a previous post on GPT-4 Lowering Conspiracy Beliefs, I addressed some of these issues surrounding surveys and survey data. I called attention to dark data categories that often surface when surveys are used. I also recommended David Hand’s excellent book Dark Data: Why What You Don’t Know Matters. The book will change the way you view surveys.
The unfortunate reality is that quite a few people have a vested interest in perpetuating these misconceptions. You’d think this would be the companies building these products, since the hype increases their revenue, and that is certainly happening, but most of the people spreading these claims aren’t affiliated with those companies. They want to be seen as the ones with the knowledge. They are influencers trying to drive people to their funnels and people in the tech industry who don’t want to look clueless. It’s hard for people to call you out on something when you are saying the same thing everyone else is saying.
Another red flag appeared shortly after ChatGPT was released. We were inundated with articles quoting opinions from leaders and executives who had never used the technology and had no idea how it worked or even what it was capable of. But it seemed as though we couldn’t get enough.
Dumpster fire achieved.

Ask Questions
We aren’t helpless in these cases. One of the best defenses is asking follow-up questions and probing beneath the surface. I know, I know. We pay (INSERT ORG HERE) a lot of money, and they say… But bear with me a moment.
One recent technique I’ve used is marking up reports, slides, and other information sent to me to help people focus on obvious issues and force some deeper thought. This gives others an idea of where I’m coming from and helps plant the seeds of these questions in people’s heads. Typically, these reports create more questions than they answer, and responding with, “This is dumb,” is not the best tactic. Here’s a recent example I used for a report discussing GenAI’s security use in 2024.

Along with this markup, I also included data in the email questioning the statistical makeup of the data used in the analysis. Funny enough, for this particular section, there was no information about the sample size, industry verticals, or other important details about the makeup of the sample. This is always a red flag. Maybe it was mentioned somewhere else and I missed it, but it wasn’t available in this section as it was in the others.
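Sample size is not a pedantic detail; it determines how much a reported percentage can even mean. A quick back-of-the-envelope check (using the standard 95% margin-of-error formula for a proportion; the 92% figure is just an example value) shows why a report that hides its n deserves a red flag:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# The same headline number means very different things
# depending on how many people were actually asked.
for n in (10, 50, 500, 5000):
    print(f"n={n:>5}: 92% ± {margin_of_error(0.92, n):.1%}")
```

With ten respondents, “92%” spans roughly 75% to 100%; with five thousand, it’s pinned within a point. If the report won’t tell you which case you’re in, the number is decoration.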
Often, even asking the simple question “How?” can be super effective.
“Generative AI is completely transforming X business or process.”
“Oh yeah? How?”
The questions of how, what, and where can be your ultimate weapons in defense against some of this contradictory data. They tell you whether there is something real and help you understand whether the proposed use cases actually support the strongly worded statements being made. There may be good answers to these questions worth considering. There are legitimate use cases, and you do want to stay ahead of the curve, so being better informed helps you take advantage of opportunities.
Misunderstanding the data has real costs: it strains your resources as you rush to build competing solutions, or wastes your time recreating something that isn’t even working in the first place. Even if another organization successfully uses generative AI for a task or process, you might be unable to replicate it due to different applications, systems, data, and processes.
I’m not bashing analysts or survey takers. Conducting surveys without influencing the outcome is hard. That’s why you can find surveys that confirm just about anything. I’m sure the people writing these reports believe what they write, and it matches the data they have.
Conclusion
The technologies grouped under the umbrella of AI are certainly useful, yes, even LLMs. Non-generative approaches and more traditional ML and DL have been deployed to solve challenging problems for decades, and they are already baked into the systems we use. However, the hype and hysteria throw off any real perception, and you often find that “complete transformation” aligns more with hopes than realities. Ask the right questions and probe deeper to ensure you are making decisions on the right insights. Find use cases of your own and perform your own experiments. You’ll quickly see what’s working and what’s not.