On a daily basis, I’m bombarded by fantastical news stories and emails about ChatGPT and how 2023 is the year it’s going to replace “X” job or profession. It seems that no profession is safe. It reminds me of the Dewie the Bear scene from the movie Semi-Pro, where Will Ferrell screams, “Everybody panic!” in a crowded coliseum.
ChatGPT has gone from being a piece of technology to a social contagion right in front of everyone’s eyes. People are making claims they can’t possibly believe, but they’re making them anyway. I wrote last year about how ChatGPT creates amateur futurists. Well, it seems the floodgates are open.
This has created a group of overoptimistic zealots who’ve basically taken on the identity of crypto bros, pumping ChatGPT to 10x. On the flip side, I’m certainly not in denial about job loss myself. Generative tools will have an impact on jobs, and I also expect some surprises and unexpected cases to crop up. The world is a complex place, and it’s just not possible to take in all the variables. There are lots of people working in this area, and automation is going to have an impact on multiple professions, even if it doesn’t eliminate them. So, fair enough. But the extent of the current mania is mind-boggling, with people tossing all manner of delusional predictions at the wall, hoping one sticks.
So, I wanted to add a bit of sanity to help anyone make sense of this topic and at least frame these news stories in the appropriate bucket.
Evaluating News About ChatGPT and Job Loss
There’s a simple way to evaluate the merit of these job loss arguments, even if you aren’t familiar with the technology. Just ask one simple question: Is the cost of failure low? Boom, that’s it. If the cost of failure is low, then there’s a good chance there’s a near-term risk of impact from these tools. If the cost of failure is moderate or high, then there’s little chance of impact in the near term with the current crop of AI tools.
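If it helps to see the heuristic written down, here’s the whole “test” as a tiny, tongue-in-cheek Python function. The labels and wording are my own framing, not any formal taxonomy:

```python
def near_term_risk(cost_of_failure: str) -> str:
    """Rough gauge of near-term automation risk for a task,
    based solely on how costly a wrong output would be."""
    if cost_of_failure == "low":
        # Bad output? Just regenerate it. Tools spread fast here.
        return "good chance of near-term impact"
    elif cost_of_failure in ("moderate", "high"):
        # Hallucinated facts carry real consequences; adoption should stall.
        return "little chance of near-term impact"
    raise ValueError("cost_of_failure must be 'low', 'moderate', or 'high'")

print(near_term_risk("low"))   # e.g., stock art for a blog post
print(near_term_risk("high"))  # e.g., arguing your case in court
```

That’s the entire evaluation: one input, one question, no machine learning required.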
In my previous post, I wrote about how freelance artists will be impacted by the pervasiveness and accessibility of these tools. If you don’t like the art generated by the tool, just generate another piece. On the flip side, think about the impact of having ChatGPT be your doctor or lawyer, especially given that these models tend to hallucinate facts.
I know ChatGPT passed the bar exam, but why is this surprising? I think most people would be able to pass the bar given an open book and unlimited time. Knowing the answers to questions is a far different matter than applying that knowledge to specific situations. I certainly wouldn’t want ChatGPT to argue my case in court, even though creating convincing BS seems to be one of its strong suits.
I’ve stated before that my biggest concern with all of the ChatGPT hysteria is that people, in their mad scramble to compete, will end up using this technology in areas where the cost of failure is not low, where there’s the potential for harm and even loss of life.
I worry quite a bit about mental health uses and the medical field in general. Mental health chatbots should be a big red flag for us. There have been promising results using computer vision models to assist doctors in identifying whether tumors are malignant or benign, but that’s far different from relying on a knowledge system like ChatGPT. Even if these tools reach the status of “pretty good,” it will lead to an automation bias where the doctor takes the recommendation of the system by default. This would, in effect, make something like ChatGPT your doctor. Would that be better or worse than clicking through WebMD and diagnosing yourself?
I think people, in their optimism, tend to fill in the blanks, even for very complex problems, assuming we are only a couple of tweaks away from solving the remaining issues. It’s dangerous to bestow mystical-object status on technologies such as ChatGPT when what we need is a realistic analysis of their capabilities and limitations. We tend to underestimate the complexity of even simple problems, which makes humans terrible at predictions, and our over-optimism could lead us down a very dangerous path in the next couple of years as these tools creep into critical decision paths.