Wow, another Black Hat USA and DEF CON are in the books, and it was great seeing everyone. One of the best parts of conferences is the conversations, and those conversations were amazing. As you can imagine, many of them were about “AI.” Since there were no cameras in the AI Security Challenges, Solutions, and Open Problems meetup, and it will be a while before the Forward Focus: Perspectives on AI, Hype, and Security presentation makes its way online, I thought I’d summarize a few points and distill some of my perspectives on the topics I covered and the conversations I had, now that I’ve had a few days to reflect.
Perspective on LLM Impacts
I deal with so many people making nonsensical or unfounded claims that I wanted to make it clear where I stand on the subject of LLMs and their impact on humanity. When you live in reality, you tend to be labeled a hater.

I’m not big on making predictions, but let me say this with a fair amount of confidence: LLMs will not be more impactful on humanity than the printing press, and GPT-5 won’t achieve AGI. Those of you who know me will find it unsurprising that I land in the middle, but hey, the only technology I hate is PHP 😉
All AI All The Time
As was expected, everything was all AI all the time. Every vendor booth had the term “AI.” AI-powered products, AI pen testing, AI assurance, AI, AI, AI! Everyone is ALL in. Even though I expected it, being confronted with the term absolutely everywhere was still shocking. What we’d poked fun at in the past has become our reality. Everyone is trying to ride the wave to success, regardless of their skills or capability. It would be easy to blame this on marketing departments, but it was far more than that.
All references to machine learning seemed to be scrubbed in favor of using the term “AI.” Seems machine learning is having its “cyber” or “crypto” terminology moment. I learned long ago that fighting the industry over terminology is a losing battle, so yes, I’m giving in to the massive, crushing weight of hype, and I’ll move the battlefront to somewhere else.
Still, losing the terminology battle isn’t without drawbacks. It seems many are also using the term AI synonymously with generative language models, which just muddies the waters further. When you mention that you think the capabilities of LLMs are overhyped (i.e., not going to be more impactful than the printing press, etc.), people tend to throw out things like drug discovery or AlphaFold. When you point out that those are different approaches, and it’s not like ChatGPT is doing that, they still cling to adjacent success in specific domains as an indicator of success here. It’s like sitting in a VW Bug and bragging that a Ferrari can do over 200 mph.
This is also a shame, since more traditional machine learning approaches aren’t even considered as people rush to LLMs, including approaches that are more reliable and proven for specific security problems. I think this will level out at some point, but not anytime soon. Time to put LLMs on the moon!
Where People Stand
The consensus from many I talked to is that they were just trying to figure out where they stood. They’ve heard so many outrageous claims, and the reporting on advancements has been all over the place. On the one hand, you have people claiming GPT-5 is going to be AGI; on the other, you have people advocating military strikes against data centers. It’s no wonder people are confused.
Given the wild reporting, outrageous claims, and AI hustle bros trying to get you to subscribe to their channels, I was surprised that most people were pretty grounded. Many didn’t think AI would take their job, or that the ChatGPT Plugin Store would have a bigger impact on humanity than the mobile App Store did. I found this incredibly refreshing.
I suggested to the people I talked to that whenever you hear someone spouting outrageous claims, ask them why they think that. People making outrageous claims about LLMs are often trying to drive attention into their funnel. They want people subscribing to their Substack, YouTube channel, mailing lists, etc. They can make these claims and never have to justify them, never have to give examples or show real-world impact. The rest of us have to live in a reality where our software has to work, scale, and be reliable. So, beware of people making claims without providing specific examples. Also, remember that stories in the news often don’t reflect realities on the ground.
Fooling Ourselves Is Easy
The social contagion around ChatGPT highlighted a vulnerability in humans: we are very bad at creating tests and very good at filling in the blanks. The world is filled with experiments and highly cherry-picked examples. We tend to see a future that isn’t there, forgetting that the real world is full of edge cases, which confuse many of these AI systems.
Look at self-driving cars, for instance. We see a demo of a self-driving car properly navigating the roadway, and we assume that truck driving as a profession is doomed. It seems like one of the easier problems: stay in the lane, obey the signs, and don’t hit things. Boom! But anyone who’s driven a car knows that edge cases are everywhere: road construction, lighting conditions, snow, accidents, etc. By contrast, humans handle these conditions pretty well.
Supercharged Attackers
LLMs won’t supercharge inexperienced attackers
One point I brought up in the meetup and during our panel was that people made similar claims about Metasploit supercharging inexperienced attackers when it launched twenty years ago. People claimed that Metasploit was like giving nukes to script kiddies. Those comments didn’t age well, and I think the same is true of LLMs. You still have to know what you are doing when using LLMs to attack something. It’s not like point, click, own. It’s also not like LLMs are finding 0days or writing undetectable malware. I know. I’ve seen the research and reports. Neat research, but it’s not overly practical for attacks at scale.
Today, most of the malicious toolkits you hear about, like FraudGPT, WormGPT, and the many others that have popped up, are primarily tools for phishing and social engineering attacks (despite having “worm” in the name). These can certainly have an impact, but not on the apocalyptic level that some would have you believe. All of this technology is indeed dual-use, so something that’s helpful for security professionals will also be helpful for criminals. Just as we have people hyping AI on the clear web, we have people hyping AI on the dark web.
Losing Your Job To AI
Most people I talked to didn’t seem overly concerned about losing their job to AI, but I got the feeling it was on people’s minds regardless. The recent string of layoffs is probably not helping the uncertainty. This was one of the points we tried to address from the stage at Black Hat. I used the example of AlphaGo. I asked the audience how many people had heard of AlphaGo beating Lee Sedol at Go. I was surprised that very few hands went up, since it was big news at the time. I then asked how many people had heard of the research from Stuart Russell’s lab that allowed even average Go players to beat these superhuman Go AIs. No hands went up.
My point was that there is a lesson here for security professionals. These new technologies tend to have their own vulnerabilities and issues that also need to be addressed. In addition, all of these technologies have gaps, and the gaps will need to be filled. So, for the foreseeable future, your job is safe in the context of information security. We’d have a much different conversation if you were a freelance graphic artist.
Misinformation and Deepfakes
I was a bit surprised that I didn’t hear any conversations about misinformation and deepfakes. I’m sure they happened, just not at any of the events or in any of the conversations I participated in. The only time the topic came up, I was the one who raised it. I have a rather spicy take on the 2024 US election: I think misinformation and deepfakes will have a statistically insignificant effect on it. I will address this in a future blog post, but in short, people have already made up their minds and cemented their biases.
It’s not that these issues aren’t important or impactful; in this context, they just aren’t significant. I wrote about this topic back in 2020 when I relaunched my blog. Interestingly, in that post I also mentioned the people who should be most concerned about the technology powering deepfakes: actors and actresses. That’s very relevant now, with the SAG-AFTRA strike and AI being a big concern.
Social Impacts
There were virtually no conversations about the social impacts of generative AI other than the ones I initiated. This isn’t surprising, since it’s a large focus of my blog, and I spend a lot of time thinking about these topics. Most people seemed focused on use cases and capabilities. My fellow tech people are often optimizers and look to optimize everything. They don’t realize that, in certain cases, friction is the point.
I think the chatbotification of everything is something humans are starting to tire of. When someone launches a new service, you get quick uptake due to the novelty factor, followed by a steep drop-off. We are about to enter an era of celebrity and historical-figure chatbots, and I think the same curve applies.
We’ll see lots of press and rapid adoption, followed by a steep drop-off. This could be due to boredom, lack of true functionality, or even something more primal: the “fake factor” of it all. We know we aren’t actually talking with Harriet Tubman when we use the chatbot. What seems kind of fun at first starts to tarnish very quickly. As tech people, we get so caught up in the cool factor of the technology we build that we tend to forget the human factor in all of this. I think I’m on the right track here, but I realize I’m also old and have never played Minecraft, so I could be wrong.
Customer support chatbots, the ones that are directly customer-facing, have some promise, but only if they are empowered to take the actions necessary to resolve the issues customers are having. On the flip side, an empowered chatbot also opens the door to manipulation. So this, too, has issues. My gut tells me that as organizations launch empowered bots for various tasks, there will be subreddits dedicated to manipulating them. The manipulation could be for fun, for discounts, or for stealing services. Time will tell.
There’s certainly some promise in hybrid workflows pairing humans and bots together, where the human is actually the one in first-party contact with the customer. This may be the ultimate path, but something tells me the replacement path will start first, and hybrid will be the fallback.
Prepare To Be Surprised
In my closing statement at Black Hat, I mainly told people to prepare to be surprised. There are lots of experiments and money pouring into the space. Anyone who thinks they can see the future here is fooling themselves. The whole thing is simultaneously exciting and scary. The best thing people can do is remain grounded but also play with the technology. Don’t sit on the sidelines; generative models are pretty accessible. Play around and apply them to some of your use cases. Above all, have fun.