Knowledge and Understanding Aren't Generated From Bullet Points

One constant throughout the generative AI craze is summarization. Why read a book, listen to a podcast, or watch a YouTube video… Just summarize it! Large swaths of content, distilled into several bullet points, with countless hours saved. However, this isn't the utopia many claim.

We all love a good shortcut. Humans are wired for them; it's why we are so good at cognitive offloading. But every shortcut has tradeoffs, and those tradeoffs are rarely recognized or are shoved deep into our subconscious. With generative AI, they're never acknowledged or discussed at all. However, here's an inconvenient truth: knowledge and understanding aren't generated from bullet points.

Fake Optimization

Many of the claims made by influencers, transhumanists, and the e/acc community revolve around fake optimization. Fake optimization is the claim that something lowers friction for a task or activity while delivering the same value, when in reality the value is lost along with the friction.

These people see everything as a game of lowering friction, but there's just one problem: so many things require friction for success, especially knowledge and understanding. Go further, and there are many activities where the friction is the point, such as art or meditation. However, saying so won't get clicks, and someone's "thought leader" badge may be revoked. So we end up with the environment we have today, with everyone from tech leaders to influencers telling people friction is about to be a thing of the past.

Take the common example of promising people that they don't have to put in the work and can still gain the benefit. Anyone claiming you can get the same value from cramming three hours into three minutes demonstrates a fundamental lack of understanding of how knowledge transfer works and a near-religious level of faith in AI.

If we step back, people listen to content like podcasts for two reasons: entertainment and information. Quite often, it's a combination of both. So, by summarizing, we've removed all of the entertainment factor, immediately reducing the value of the activity. However, before we go further, let's examine a scenario that should be obvious to everyone.

Imagine summarizing a one-hour stand-up comedy performance. “Just tell me the best jokes.” Is that really an hour saved? Of course not. It won’t be funny, and anyone who thinks differently has been sitting behind a computer screen for too long. We instinctively know that comedy is situational and relies on context and delivery. Comedians like Mitch Hedberg prove this point.

The comedy scenario is easy for most people to understand. What's more difficult to grasp is that a similar loss of value exists for non-entertainment activities. Summarization isn't the shortcut people think it is. Without the surrounding context, we're unlikely to commit these summaries to memory, where we could act on them or put them to use.

Thinking Deeply

There’s no thinking deeply about bullet points or summaries. You can’t. This is because the action of summarizing strips away all of the context. For thinking deeply, the context is key. Summaries are just a set of condensed words shoved into a predetermined space. Important bits of information (sometimes the most important bits) are left out. There’s no way they can’t be.

There’s no connection to bullet points and summaries, no deeper meaning, emotion, or content to chew on mentally. Nobody contemplates something deeper or dreams about something bigger with summaries. The same can’t be said about reading a book or other longer-form content. The inherent dehumanization of summaries drives some of this lack of connection.

In summarization tasks like these, we take someone’s uniqueness, including their perspective, delivery, language, and flair, and crush the life out of it to get the resulting bullet points. This act results in a shift. Instead of viewing someone as a person, we view them as data or a product to be manipulated, and summaries strip humanity away, leaving us with several cold sentences generated from the compactor of a black box.

Make no mistake, the dehumanization aspect is a selling point for many. The human aspect is often seen as flawed, whereas the AI aspect appears superior. But this perspective doesn’t serve us well—you know… we humans—especially when it affects our ability to think deeply.

There can be rare exceptions where a quote or simple statement does cause some deep thought. For example, this quote is often attributed to Einstein, even though he never precisely said these words.

“If you can't explain it simply, you don't understand it well enough.”

A statement like this can trigger deeper thoughts about ourselves and our view of knowledge. As a thought experiment, let's pretend Einstein was on a podcast and uttered this statement while making a larger point about knowledge and understanding. Mediated through an AI system in a summarization task, it could be transformed into:

“You need to explain things simply.”

The difference between these two examples is stark, and they do not even remotely mean the same thing. There’s certainly nothing to think more deeply about in the second example.

The ability to think deeply about any topic is a skill we are losing fast and, for younger generations, possibly never cultivating in the first place. Our modern world, filled with distractions, is pulverizing not only our ability to ponder, to wonder, and to dream, but also our ability to question.

The act of questioning requires effort and friction. It isn't simply posing a question to an AI system and getting a response, because real questioning isn't so easily satisfied. Don't let people reframe the two as equal. We will not be better off for it.

Context, Value, and Illusion

In reality, longer-form content can be bloated. I've read books that should have been four chapters and listened to podcasts that could have been reduced to thirty minutes. However, it's a mistake to consider context as bloat and an even bigger mistake to assume an LLM knows the difference. You often can't tell the difference until after the fact: something that seems like bloat at the beginning turns out to be context in the end. That pointless story becomes a connection reinforcing a particular point.

Let's consider the importance of context for a moment. Take something larger, such as a slide deck from a presentation: not just several bullet points but many, along with images and diagrams. If you are already an expert on the topic, it may be possible (though not always) to glean something from the deck alone. However, the real value lies in the context in which the content was delivered and the commentary around it. Conversely, if you watched the presentation and have the context, the slides are helpful because they reinforce the content and can even jog your memory. This is true for all sorts of content.

You may be convinced (or not) by a set of bullet points or summaries, whereas hearing the whole argument would have proved otherwise. In life, we say it's all about context, but context is exactly what we discard when we summarize.

Even from the standpoint of accuracy, the act of summarization strips away all of the supporting or contradicting evidence, leaving us with a couple of sentences that may or may not hold up. Without the context, how do you know if a point is accurate? You have to blindly trust the system.

One of the most commonly encountered forms of summarization is survey results. Most people never dig into the details of surveys or studies, but that's where you find the issues: problems with methodology, sample size, sample diversity, and many other pieces of context that may cast a shadow over the results, turning those groundbreaking findings into more questions than answers. Summarizing everything leads to many misunderstandings.

We spend little time evaluating the proposed value from summarization. We are told we can spend far less time yet gain a commensurate level of insight from summaries. This value proposition speaks to our modern low-attention-span world, but if we take a step back and consider the realities, it just doesn’t jibe for the reasons outlined in this article.

Much of this disconnection stems from a lack of presence. We need to exercise a certain amount of presence to read a book or participate in a meeting, but this is becoming a lost skill. New technology promises that we never need to be fully present again, yet there are consequences in nearly all contexts. This is why the Illusion of Presence is one of the cognitive illusions I've described as being created by personal AI personas.

Unfortunately, we do end up fooling ourselves. Using an AI to summarize content for knowledge gives us the illusion that we are working smarter and creating more knowledge with less effort, but as we've seen, that's not the case. The reality is that a world of summaries creates a world of fools.

Although harsh, if we consider what we’ve already discussed, it makes sense. Not only are we not gaining the promised value from activities, but we also fool ourselves into believing we do.

AI Mediation

AI mediation is both a bug and a feature. What we want out of content may very well sit in the dense center of some data blob. However, something must be said about getting all of our information mediated through an AI system. So much of our world is already mediated by algorithms, and we aren't exactly better off for it. We are pushed and nudged in various directions, made more predictable, all of us shoved toward the dense center of a distribution. What you don't find there is uniqueness, creativity, or innovation. Sparks, inspiration, and innovation don't come from bullet points, although you are certainly being sold the idea that they do.

Ultimately, we leave it up to an algorithm to determine the main points, the most important things we should pay attention to. A black box plucking data points with some higher purpose that nobody understands. Many times, the points being distilled may very well be the most important, but certainly not always, and without context, it’s impossible to tell.

We also need to ask ourselves a question: how many filters do we want between us and reality? Using AI for mediation is yet another filter layered on top of reality. We should work to remove filters in the places where the activities matter most to us.

I’m not trying to overplay the dangers here. You certainly won’t be hurt by occasional summarization tasks with an AI system. However, when used often, there is not only a value mismatch, but it can also warp our understanding of reality. So, there are consequences.

Wasted Time, Not Optimization

The funny thing is we don't even ask ourselves if the time spent is worth it. Let's say we cut down on reading time by generating summaries instead; this way, we can cover more ground on more topics. Many would consider this a solid strategy. Subconsciously, it also feels right, which makes it a powerful argument and part of why influencers are so taken in by it. However, when we dig deeper, it's not the benefit it seems.

So, in the three-hours-to-three-minutes optimization pitch, you lose time. The three minutes are wasted because the content was never reinforced by the surrounding context. It becomes bullet points scrawled across a mental billboard as you drive past at 120 mph. Of course, this assumes the content distilled wasn't so generic as to be a waste in the first place.

Say, for instance, that we use AI to summarize Peter Attia’s book Outlive or possibly one of his podcast appearances. One of the summary bullets may be:

  • Put a larger emphasis on Zone 2 training.

Okay, but why? What is Zone 2 training? How do I do it? The answers were covered in the surrounding context, but now you have to spend extra time tracking them down.

Multiple people have already joked that we are on the cusp of one person expanding bullet points into a full piece of writing, only for the recipient to convert it right back into bullet points. There's something rather dystopian about this.

If something is worth learning, then it’s worth spending time on. This was true in the past and will be true in the future.

Conclusion

There are no shortcuts to creating knowledge. Knowledge generation always takes friction, but through this friction comes reward. When we take shortcuts, we deprive ourselves of the reward and are left with a hollow task that doesn't provide the same value. Ultimately, nobody gets smart from bullet points.

I'm not claiming all summarization tasks are bad. They can be helpful for task-based systems and under certain conditions, but they are not for generating knowledge and understanding. It's becoming increasingly obvious that we must defend our cognitive functions because nobody else will.
