Today, I wanted to write a quick post about something I keep seeing and getting asked about. Earlier this week, I was quoted in Dark Reading agreeing with the NCSC that cybersecurity threats from ChatGPT are overhyped. So I get asked: if ChatGPT won’t supercharge attackers, why do I keep seeing article after article to the contrary? There’s a simple explanation: what you are seeing is parroting.
I saw this article yesterday: Analysts Share 8 ChatGPT Security Predictions for 2023. Let me save you the trouble of digging into it. The predictions about lowering the barrier for criminals and supercharging attackers aren’t based on any reality. There is no evidence for any of this. If it were true, we’d already be seeing a meaningful impact, and we aren’t. These predictions are written by people who’ve never used ChatGPT in any of the contexts they are making predictions about, repeating other people who’ve also never used ChatGPT for this purpose. With hot topics come hot takes. But this creates a strange false consensus, similar to a filter bubble on social media.
“When people think something has become a matter of consensus, psychologists have found, they tend not only to go along, but to internalize that sentiment as their own.” –Max Fischer, The Chaos Machine
There are also a lot of reports of conversations about ChatGPT on the Dark Web. Here is an example. Reading articles like this, you are meant to believe that criminals’ use of these tools is accelerating, but there’s another explanation. Just as all of us are talking about ChatGPT and exploring the surface of its capabilities, it makes sense that criminals would as well. I’m just applying Occam’s Razor here. Talking about a topic on a forum isn’t the same as industrializing it in attacks. I haven’t dug into the content of any of these posts, but my gut tells me they are experiments, not industrialized real-world attacks. That doesn’t mean attackers aren’t riding the hype of ChatGPT to launch attacks; we’ve seen this play out with fake ChatGPT tools that carry embedded malware.
That Doesn’t Mean These Tools Aren’t Useful
You can believe in machine learning and deep learning’s ability to help professionals address real cybersecurity challenges (which I wholeheartedly do) and still be critical of the hype surrounding tools like ChatGPT. There are certainly advantages to using these tools. When it comes to LLMs, people seem to be discovering that the more mundane capabilities, such as text summarization, text generation, and constraining output to a knowledge base, help solve business problems and increase efficiency. What people may not realize is that LLMs were already pretty good at these tasks before ChatGPT came along.
Overall, I think we should be prepared to be surprised. People find interesting and unexpected ways to use technology; that’s the great part about human ingenuity. Just be mindful that a lot of reporting saying the same thing doesn’t make something true. Controlled lab experiments aren’t the real world, and any accuracy and usage statistics from providers should be taken with a grain of salt. Think critically about reporting on this topic, even my own. Nobody made me the Chief AI Whisperer. I’m basing my perspective on the realities, capabilities, and limitations of these tools, not filling in the blanks and shouting from the rooftops like some crypto bro claiming ChatGPT is going to 10x everything.
I have a longer post coming next week that goes more in-depth on some of these issues and misconceptions. For example, there are real cybersecurity threats from these tools, but they lie in their integration into applications without consideration of the attack surface. More to come.