Happy New Year! 2024 is in full swing, and already, people are coming out with their big, over-the-top AI predictions. This was to be expected, but I think it’s helpful to calibrate the conversation so that people have a point of reference in their attempts to make sense of these predictions. Even though an overwhelming majority of AI predictions are nothing more than nonsense, it’s helpful to have a simple framework to quickly evaluate the claims in a prediction and determine if even very smart people you respect are saying nothing at all.
AI
The term AI has become so generic that it encompasses technologies and approaches both already invented and yet to be invented. Anything falling under the umbrella of “automation” is now called AI. Given this generic nature, how do you even know what people are referring to when they make claims and predictions?
In my presentations at conferences and events in 2023, I told the audience the first step in making sense of AI advances is understanding what people are referring to in the first place. In this post, I hope to add some clarity around this topic.
Published AI Predictions
Let me tell you something you may already know: most people’s published AI predictions are nonsense, falling into the categories of pure guesses, wishful thinking, or pointless parroting. These predictions serve only to drum up hype and marketing buzz. Predictions are also heavily biased by who is making them; even my own 2024 AI predictions are colored by the fact that I’m a security researcher. Worse, quite a few predictors have absolutely no visibility into, or exposure to, the technology trends they are predicting.
A fun experiment is to look at the person generating these predictions and their position at their company, then guess what their prediction will be. You can do this with a high degree of accuracy. Then there are others where, reading their thoughts on AI and their predictions, you can tell they’ve never used the technology they are discussing. Yes, please give me more of that person’s predictions.
There are perverse incentives all the way around, from people making the predictions to the media organizations pumping them out for clickbait. It’s no wonder people are confused and unable to understand where things are.
Speaking of perverse incentives, there are the AI Influencers. Don’t get me started. Anyone who missed the cryptocurrency craze and wants the full crypto bro experience only needs to subscribe to their content because everything is 10x-ing around there.
So, what about the leaders of the AI companies? They must be safe, right? Well… eh.
You should take the AI predictions of Sam Altman, or any other AI company leader, with a grain of salt. Not to be too cynical, but their answers are never given in service of your benefit. Even their critical assessments of AI, such as their concern about AI risks, boost the image of how powerful AI is and further fuel the hype. It’s like a human-extinction humblebrag.
So, before you take a prediction seriously, you have to consider a couple of things.
- Would this person have some insight into the trend?
- Do they have an incentive to say or not say certain things?
Now, let’s move on to our framework.
Framework for Making Sense of Predictions
Given that absolutely everyone is making AI predictions, how do we even begin to make sense of these predictions and whether they should be taken seriously? Although there is no hard and fast answer, a simple framework can help reduce the nonsense.
When evaluating AI predictions, look at three factors: what specific technology is being referred to, what is the prediction’s timeframe, and did they provide a reason? You can filter out most of the absurdity using technology precision and timeframe alone.
| Technology Precision | Timeframe | Reason Provided |
| --- | --- | --- |
| Weak | Wide | No |
| Strong | Narrow | Yes |
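If it helps to see the framework in concrete terms, here’s a minimal sketch of it as a triage function. This is purely illustrative and uses my own made-up names; `Prediction` and `triage` are hypothetical, not from any real library, and scoring a real prediction’s precision or timeframe still takes human judgment.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """Hypothetical container for a prediction and its framework scores."""
    text: str
    precision: str    # "weak" (generic "AI") or "strong" (names a specific technology)
    timeframe: str    # "wide" ("someday") or "narrow" (a specific date or window)
    has_reason: bool  # did the predictor explain why?

def triage(p: Prediction) -> str:
    """Classify a prediction using the precision/timeframe/reason framework."""
    if p.precision == "weak" and p.timeframe == "wide":
        # Weak + Wide can mean everything and nothing simultaneously: toss it.
        return "discard"
    if p.precision == "strong" and p.timeframe == "narrow":
        # Falsifiable; how much weight it deserves depends on the reason.
        return "evaluate the reason" if p.has_reason else "falsifiable, but unexplained"
    return "needs more context"

# Example: the kind of generic declaration discussed below.
claim = Prediction(
    text="The AI revolution is going to be bigger than the internet revolution.",
    precision="weak",
    timeframe="wide",
    has_reason=False,
)
print(triage(claim))  # -> discard
```

The point isn’t to automate the filter; it’s that anything landing in the Weak/Wide bucket can be tossed without further thought, while Strong/Narrow claims at least earn a look at the reason.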
To make sense of this, let’s look at an example below. This was published a couple of days ago by a well-known security professional. Now, this is a person I respect, and I even enjoyed their last book, so I’m not singling them out. It just happened to be the latest example I’ve seen.
“AI changes everything,” PERSON tells MEDIA on a video call. “The AI revolution is going to be bigger than the internet revolution.”
It’s easy to scratch your head and ask, “WTF does this statement even mean?” Well, it means absolutely nothing, and in terms of our framework, it would be classified as Weak, Wide, and No. What AI approach is this person referring to? When will it be bigger than the internet? There are countless examples of similarly generic declarations, and I’d wager that the vast majority of people making these claims couldn’t answer with any precision or timeframe.
Without qualifying the precision and a designated timeframe, the statements say absolutely nothing. It’s window dressing for drivel. You might as well say, “Cyborgs will change everything, and the cyborg revolution will have a massive impact on humanity.”
At this point, I don’t think many people doubt that some AI technology will eventually transform humanity, perhaps even in the near future. Science fiction writers have been envisioning this for quite some time. This is why details matter.
[Embedded post: a vague AI prediction from Elon Musk]
Here’s another generic goodie. Yeah, but what and when, bro? Remember, people can purposefully wield vagueness as a superpower. If you are vague, you can fuel the hype, wiggle into alternate interpretations, and never worry about being called out when something specific doesn’t happen. This is the Nostradamus playbook: be vague so that everything fits. Musk’s statement was made a few days before Grok launched, so there’s that.
As a general rule, you can toss out anything falling into the Weak Precision and Wide Timeframe category, as it can mean everything and nothing simultaneously. Most AI predictions I see fall into this category.
Let’s look at some more predictions. Here’s another example below.
[Embedded post: a set of specific, dated AI predictions]
These predictions weren’t accurate either, but at least they have strong precision and a narrow timeframe, even without a reason provided. I know he’s generically referring to “AI” here, but from the surrounding context, he’s referring to Large Language Models; sometimes you have to infer a bit with these. None of them came to pass, but they illustrate another important lesson.
We reward people for being bold, not for being right.
In our modern, social-media-driven, self-promoting, nonstop content-producing world, we reward people for being bold, not for being right. People making wild, baseless predictions keep gaining followers and directing people to funnels and newsletters that let them further monetize content, which seems (and is) entirely backward from what you’d expect. It’s part of what I call the influencer hype circle. It’s also possible to outrun your predictions: people have such short attention spans that you can make precise predictions and, a few hours later, no one will remember them, much less months or a year later.
So, why do we get sucked into these things?
There are a couple of psychological tricks at play. We become desensitized to things we see often, so increasingly outrageous claims stop seeming outrageous. We also tend to assume there must be a consensus behind something the more we see it. The more hype-confirming content we see, the more people say hype-confirming things, which leads to us seeing even more hype-confirming content, over and over, building a false consensus. The hype doesn’t have to be pro-AI, either; existential-risk hype or hype around AI-generated misinformation falls into the same category.
Note: In fairness, you have to cut people some slack. There can be a big difference between what you tell someone in an interview and what gets cut up and printed. Also, media stories try to be brief. They are likely to print predictions without providing the reason. So YMMV.
Reason
We end by evaluating the reason. By looking at the reason, you can determine whether to put stock in the predictions and the person making them.
- Does the reason make sense?
- Does it apply to what’s being predicted?
Keep in mind the reason can be overly vague as well. “Technology is moving so fast that it’s inevitable”: not a good reason. “Everyone seems to be saying…”: also not a good reason. We got an early glimpse of this with ChatGPT. So many people claimed it would be more impactful than the printing press, and when you listened to their reason, it was that it gave them a recipe in the style of Shakespeare or something similar. It’s hard to put much stock in people who are basically fooling themselves.
Reasons are going to vary along with the amount of included details, so use your best judgment in your evaluation.
Conclusion
Hopefully, this post provided some basic ideas for filtering out the noise. There are truly insightful people sharing their thoughts, but they are drowned out by a cacophony of voices saying nothing much. We are also nowhere near peak AI predictions, so a basic framework can help reduce the noise and get you to what’s important faster. Welcome to 2024.