Perilous Tech

Risks at the Intersection of Technology and Humanity

So, let’s talk about posthumanism for a moment. Yes, posthumanism is actually a thing, and it can sound like a rather odd movement to cheerlead. After all, we as humans aren’t done being human yet. Posthumanism’s adherents are anxiously awaiting the next stage of human evolution, Homo technologicus. Yes, it’s also a real thing. I’ve also heard terms like techno-progressivism thrown around. As serious as some of these people may be, their concepts are surrounded by techno-utopian bullshit.

As amazingly silly as this sounds, their views aren’t far off from those of many people these days. Everyone from pure techno-utopians to level-headed “normal” people is kinda thinking the same thing. Let’s slap a bunch of tech inside our bodies and see what happens.

My goal with this post isn’t to address all the narratives or poke even more holes in the logic; I’m writing a book that covers this and other topics. Here, I want to point out a few glaringly obvious issues that should get more attention. The point is simple: there is no free lunch when it comes to human augmentation.

Human Augmentation Must Be Universally Good, Right?

I never cease to be shocked at the casual nonchalance with which people discuss slapping a bunch of tech inside their bodies and melding their brains with machines. I realize there’s a cool sci-fi aspect to it, but in real life, we have things called consequences. It’s one thing if there is a cognitive or motor impairment that the technology corrects for; it’s another thing entirely when no impairment exists.

As a security researcher, I can’t bring myself to imagine these systems not being vulnerable to attack and, almost as bad, being used to manipulate us. We like to think of ourselves as pillars of agency, but in reality, we can be nudged to do all sorts of things, resembling automatons more than humans.

This means that any of these systems would need to have a safe technical baseline. For a basic framework of a safe baseline, see the SPAR categories I’ve outlined previously.

I could address many other technical issues, but for the sake of this conversation, let’s assume a perfect technical implementation: a cognitive symbiosis of mind and machine without any technical issues or glitches. The completion of the techno-utopian dream.

Let’s look at why, even in a perfect implementation, there is still no free lunch.

Socrates

To look forward, let’s look back. This is Socrates. Totally not a fake photo, by the way.

Socrates has become a popular punching bag for the AI crowd. Apparently, dunking on a 5th-century BCE philosopher has become some sort of modern-day sick AI burn. So, what sin did Socrates commit that is so egregious to AI leaders today? He was against writing things down.

Socrates worried that writing things down would weaken memory, and that worry is why he became a punching bag. However, what many don’t realize is that he wasn’t wrong. Writing things down can negatively affect your memory.

We can’t seem to imagine the past without viewing it through the lens of the present. People’s memories were far better in the past than they are today, even pre-social media and the attention apocalypse. It doesn’t take much thought to recognize this. In ancient times, when most people couldn’t read or write, the only place to store knowledge was in their heads. Even when you asked someone else, you were querying tribal knowledge stored in another person’s head. To his credit, Socrates stumbled onto cognitive offloading and recognized one of its effects.

Ultimately, we are better off for writing, and the benefit of writing things down far outweighs the benefits of a localized, tribal memory, even if individual personal memory is diminished. There are also other interesting effects of writing that Socrates missed, such as its role in exploring thoughts and ideas and some of its memory-reinforcing effects. So, let’s forgive a 5th-century BCE philosopher his faults and focus on what he recognized for a moment: cognitive offloading.

Cognitive Offloading

Cognitive offloading is using physical action to alter the information processing requirements of a task to reduce cognitive demand. We all do this every day. If you’ve ever left yourself a note or set up a meeting in your calendar application, you’ve performed cognitive offloading.

This activity is beneficial since we only have so much cognitive capacity. It’s not just memory but decision-making skills as well. There’s a famous story about President Obama and why he only wore gray or blue suits. He was paring down his decisions.

I know it seems I’m making the posthumanists’ argument for them, but bear with me. Not all cognitive offloading is the same. In 2016, I heard the evolutionary biologist David Krakauer discussing cognitive artifacts on the Making Sense podcast, in the context of complexity and stupidity. He referred to complementary and competitive cognitive artifacts.

Without being too wordy, complementary cognitive artifacts help you create a model of the problem and are tools that rewire our brains to make problem-solving more efficient. These are things like maps, language, and even the abacus.

Competitive cognitive artifacts don’t augment our ability to reason but instead replace our ability to reason by competing with our own cognitive processes. Classic examples are the calculator or GPS navigation.

The interesting thing here is that complementary cognitive artifacts leave an imprint and have additional positive effects. For example, being proficient with maps increases spatial awareness. With competitive cognitive artifacts, on the other hand, you are probably worse off when the artifact is removed. For example, using GPS navigation degrades spatial awareness, so when it is taken away, you are less capable than you were before.

I’m not arguing that we should destroy all calculators (or GPS navigation systems); I’m only pointing out the impacts of reduced cognitive function. It’s also interesting to consider that AI tools are almost universally competitive cognitive artifacts. We assume, wrongly, that there isn’t a cost to this augmentation. I mean, everything has tradeoffs in life. Technology is no different.

To avoid turning this blog post into a whole book, let’s look at memory.

Memory Storage

Most humans realize that memory is a limitation. Unless we are savants, there are only so many things we can store in our heads. But we may be taking the offloading of memory too far. Let’s think about what we are actually doing. As humans, we are transitioning from knowing things to knowing where things are stored. We’ve treated this as universally beneficial without considering the side effects.

We are transitioning from knowing things to knowing where things are stored

AI didn’t initiate this trend, but it has accelerated it, especially with systems like ChatGPT, which people use as oracles. This means the information we are retrieving may never have existed in biological memory in the first place and, more interestingly, may not be stored even after we retrieve it. Anyone who’s ever followed a YouTube tutorial on how to do something and, despite performing the task, had to review it again the next time can attest to this.

This brings up some interesting thought experiments. Is someone who doesn’t have any deep knowledge contained in their biological memory smart? After all, information on astrophysics is a search away. Would we say someone proficient at searching Google or prompting a language model is smart? Okay, let’s phrase the question a different way.

Is an average human + Google (or insert favorite AI tool here) smarter than Einstein or Von Neumann? After all, they have access to far more information far more quickly than either of those scientists ever did. Of course, the answer is no. We instinctively know there’s something more to knowledge and intelligence than merely knowing where data is stored or getting a summary from a document.

There’s no doubt that people may feel like Einstein, but that’s a topic for another day.

Human memory is, no doubt, getting worse due to technology. At the veterinary office I visit, I’ve seen people walk out of the exam room to use the restroom, go to the front desk, or go out to their car, and then not remember which exam room they came out of. That’s a clear degradation of spatial memory. These weren’t kids on TikTok or people staring down at their phones; people of all ages were represented.

But not all memory tasks are straight lookup tasks, and memories spontaneously emerge. Sometimes, I bust out laughing when a memory pops into my head. This spontaneous surfacing has benefits, such as sparking epiphanies and novel concepts, and it creates a satisfaction that can’t be replicated with technology. What happens when this spontaneity disappears? Not only are we worse off, but it leads to more questions.

How do we develop novel ideas and concepts if we don’t have the right knowledge in our biological memory? It’s one thing to have knowledge and some novel concepts in memory and then explore external storage locations for further data. It’s another thing entirely to have no deep knowledge contained in biological memory and expect novelty to emerge because of access to external storage. I know the techno-utopians would say that we’ll build algorithms for this, but that is a challenging problem; it’s not the same thing, and it wouldn’t lead to the same results.

Humans + AI = Superhumans?

Human augmentation with AI is being sold as an intellectual get-rich-quick scheme, but the reality is that gaining knowledge is hard. Sometimes, it is very hard, and there aren’t any shortcuts today, no matter how many prompts we create or documents we summarize. Cognitive illusions, however, are easy to come by. We end up fooling ourselves into thinking we know more than we do. Once again, AI didn’t start this trend. It’s merely the accelerant.

There’s a fundamental illusion clouding many people’s perceptions. Just as we can’t seem to view the past without the lens of the present, we can’t envision the future without using the same lens. We tend to assume we’ll keep our same faculties and gain more capabilities, resulting in some sort of win-win situation.

We mistakenly think human augmentation makes us superhuman, but in reality, it probably doesn’t. Knowing where information is stored and being able to perform some additional computational tasks may give us superhuman capabilities in a few narrow areas, but it likely won’t make us superhuman overall and will probably make us worse off. These additional capabilities will create very real and expanded blind spots and deficiencies. Of course, these won’t be identified until far too late, and everyone will claim not to have seen them coming.

These additional capabilities will create very real and expanded blind spots and deficiencies.

We haven’t even asked ourselves what we hope to get from this symbiosis or augmentation. There is just this generic sense of “enhancement,” but nothing overly specific. It’s one thing if the augmentation addresses some deficiency, such as reduced cognitive or motor function, but what are we addressing when a perfectly functioning human decides to augment themselves?

The reality is that when this symbiosis happens, we will become completely dependent on technology for far more than complex tasks; we will also depend on it to function in our daily lives, even for simple tasks. This is because we will use these resources to offload even more cognitive work, regardless of task complexity. Who wins in this scenario? Tech companies? Society? Us? At that point, will the technology still be working for us, or will we be working for the technology? More importantly, at what point do we stop being recognizable as humans?

Parting Thought

I’m not opposed to human augmentation or even being augmented in some way myself. But as an adult who has lived on planet Earth for a bit, I want to understand the tradeoffs. Understanding the costs is essential to determining whether the augmentation is worth it. It seems that in some cases, we may be stiffed with a hefty bill that we never would have agreed to ahead of time.

When it comes to being human, there are certain things we’d like to protect and certain things we are fine giving up. This will be different for each individual, but we all have these lines. These considerations will have to be part of our future decisions.

Our brains seek to free up resources and limit the amount of work they perform to create brain capacity for other tasks. In short, our brains seek to offload as much as possible. This is something we don’t consciously realize. It’s one of the reasons we prefer getting an answer to solving a problem. Our brains seek the offloading path, whether it’s helpful or not. This evolutionary quirk may have served us well in the past, but with technological advances, it may not serve us well in the future.

The movie Idiocracy is a cult classic that has been quoted more and more over the past few years. Here’s something to think about: it could be that Mike Judge got the future outcome of the movie’s setting right but just got the premise wrong. The only way the world of Idiocracy could have come about is if highly capable AI had been in the background, making everything work and, of course, manufacturing Brawndo. Brawndo has electrolytes!

5 thoughts on “Challenging Posthumanist Narratives: Downsides of AI and Brain Augmentation”

  1. Siri says:

    Hi Nathan,

    Interesting piece! One thing I wanted to point out is that you may be treating ‘transhumanism’, which concerns the technological enhancement of humans to overcome our perceived mental and physical limitations, as the same thing as ‘posthumanism’, which is a conceptual rejection of humanism and humanist thought (people as the central consideration). I believe posthumanism is intended to decentralise the importance of humans compared to other life forms and systems, and explores how all such systems are interconnected!

    Just something to consider 🙂

    1. Hello,
      Thank you for reading and pointing this out. You’re right. I should have been clearer because I’m attempting to address both. Not so much the critical theory portion of posthumanism, but specifically the technological aspects. For example, the way Andy Clark uses the term in Natural Born Cyborgs, or the discussions of human evolution toward Homo technologicus. It seems the term has been co-opted by the tech community, keeping the human at the center and focusing only on the technological aspects, to the point that posthumanism and transhumanism are now interchangeable in some circles. Sorry to be part of the problem by stepping on formal definitions. Thanks again for the feedback and the discussion.
