AI use cases in news delivery have picked up recently. It’s no secret why news would be in AI’s crosshairs; it seems like a natural fit. News is text; LLMs do text, so why not let LLMs do the news? Boom. Of course, if you’ve read about any of the many failures in applying AI to the news media space, you’ll know it’s not that easy.
There isn’t a shortage of problems in the news media space, either. Trust in the media remains near a record low, and the reporting of nearly every event gets transformed into an editorial. So, it’s not like there aren’t real problems to address. Unfortunately, many of them aren’t technological problems.
Channel 1

Recently, something that caught my eye was Channel1.ai. Channel 1 doesn’t seem to solve any problems at all. In fact, it’s poised to create a few, that is, if it ever gets off the ground. Channel 1 bills itself as a “personalized” global news network powered by generative AI. Let’s dig in.
Not Solving Problems
Looking at Channel 1’s offering, it’s hard to see any problems their solution addresses. There are still human editors and producers involved, as well as human fact-checkers. What they seem to be addressing is the pesky news anchor. Who knew that was the real problem in the media? I’m sure those media trust numbers are about to skyrocket.
But, Why?
It’s easy to look at Channel 1’s offering, scratch your head, and ask, “Why?” Like so many AI use cases these days, it appears to be nothing more than an attempt at novelty. In an age where people are throwing spaghetti at the wall to see what sticks, this is yet another plate of spaghetti. In our modern world, however, the novelty wears off almost as quickly as it arrives.
There’s a current rush to put AI in everything, whether you want it or not, whether it’s necessary or not, and whether it solves a problem or not. Startups are counting on the fact that innovation can be elusive and that its value isn’t always obvious ahead of time. For example, many people once questioned why they would ever need a cell phone for anything other than talking. These companies are hoping you didn’t know you needed it. However, these use cases fall short of other successful, once-elusive innovations.
Creating Problems
Solutions like Channel 1 can potentially create more problems with news media delivery. Strangely, you could look at the world today and conclude that the fix is even more filter bubbles; that’s part of Channel 1’s pitch. Personalizing content, right down to the fake news reporter delivering it, means people can continue to live in their own highly customized bubbles.
A glance at Channel 1’s description might lead people to believe one of the benefits is the ability to translate content into different languages in real time, but this isn’t the benefit it seems to be. How do you check for translation issues in real time? The sketch below shows roughly the best you can do automatically.
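To make the verification problem concrete, here is a minimal sketch of a round-trip (“back-translation”) check, the kind of automated safeguard you might bolt onto a real-time translation pipeline. Everything here is hypothetical: the translate() function is a stand-in for whatever machine-translation service might be used, and none of this is based on Channel 1’s actual system. The point is what the check can’t do; it only flags gross divergence and is blind to subtle mistranslations of names, numbers, and nuance.

```python
# Hypothetical round-trip ("back-translation") check for a real-time
# translation pipeline. Nothing here reflects Channel 1's actual system;
# translate() is a stand-in for whatever MT service might be used.

from difflib import SequenceMatcher


def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a machine-translation API call (hypothetical)."""
    raise NotImplementedError("wire up a real MT provider here")


def round_trip_score(text: str, source: str, target: str) -> float:
    """Translate source -> target -> source, then compare the result to
    the original. A low score suggests the translation drifted; a high
    score does NOT prove the translation is faithful."""
    forward = translate(text, source, target)
    back = translate(forward, target, source)
    return SequenceMatcher(None, text.lower(), back.lower()).ratio()


def needs_human_review(text: str, source: str, target: str,
                       threshold: float = 0.8) -> bool:
    """Flag segments whose round-trip similarity falls below a threshold.
    Someone who actually speaks both languages still has to review them."""
    return round_trip_score(text, source, target) < threshold
```

Even a naive gate like this ends up routing flagged segments to a bilingual human for review. And beyond the real-time translation issues, there’s another problem: locality.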
People are interested in international news stories, but they care about local news, which makes sense. These are the stories affecting your community. How is Channel 1 going to verify all of these local stories, especially ones outside the United States and in languages other than English? Are they going to employ people in various regions around the world who natively speak these languages? Let me answer that for you: no. The human in the loop will be nothing more than a meat sack automaton pushing the publish button.
When there’s no footage of something, Channel 1 will create an AI-generated image to depict what it “thinks” the event would look like. Yikes.
Channel 1 will use AI to generate images and videos of events where “cameras were not able to capture the action.” It likens this to how a courtroom sketch “is not a literal depiction of actual events” but helps audiences understand them.
Comparing an AI-generated image to a courtroom sketch is delusional, especially since a courtroom sketch is drawn by an artist who witnessed the events, often sketching them as they happened. That is nothing like an AI conjuring up something that merely resembles an event. Even though these images are labeled as AI-generated, this is a terrible idea because it creates an image of a reality that never existed.
News agencies already use b-roll and footage from other events in their stories today; think of footage from a protest a year ago standing in for a current protest. I think this is a terrible practice that should be discontinued, and it is one of many contributors to the ongoing collapse of trust in news media. We are partly to blame for this because we want reporting that is more exciting and entertaining than a plain recitation of the facts.
Getting It Wrong
Whether human or AI-based, misinformation making it into a seemingly legitimate news source is a recipe for disaster. I’ve pulled no punches in my criticism of the dangers of AI-generated misinformation and deepfakes. One of the ways misinformation gains legitimacy is by being disseminated through legitimate news sources. This is why legitimate news organizations should be highly critical of AI use cases in their environments and understand that failures can cause real harm and further erode public confidence.
Here is another thing to think about. As newsrooms shrink and resources become scarcer, the ability of news organizations to hold each other accountable evaporates. Many news sources have become little more than aggregators of other people’s content. In some cases, a single story by a single reporter gets amplified across countless other news sites. Modern news organizations don’t have the resources to verify facts on the ground, so they are left repeating content from other reporters, who may not be acting in good faith. It’s another way misinformation can propagate and amplify, and here, too, Channel 1 contributes to the problem.
The Real Fake News
I think Channel 1 will fail, and it may never even launch. It may not launch because of technical constraints; their demo, for example, was pre-generated rather than produced in real time. So, there are technical hurdles to clear, but their issues run deeper. Ultimately, I think Channel 1 will fail because of its delivery. It’s the real fake news.
When you first check out Channel 1’s demo, you are immediately struck by how lifelike the anchor appears. But as with all of these technologies, even the slightest scrutiny reveals obvious issues. You notice the stiff, lifeless delivery and the mouth that can’t quite stay in sync. It becomes a distraction from the very point of the product. The more you watch, the more it feels… creepy.
Even though we are surrounded by fakery on a daily basis, we still overwhelmingly don’t like fake things, especially those that are supposed to seem real.
They Aren’t Max Headroom

These AI-generated human personas strive for visual perfection but forget something far more important: visual perfection isn’t what attracts people to personas. If it were, cartoons wouldn’t be popular. The reality is that these companies strive for visual perfection because personality is either incredibly elusive or simply out of reach.
Max Headroom’s jerky, glitchy presentation wasn’t something to be minimized; it was part of his persona. And one thing he wasn’t short on was personality. We have all of this cutting-edge technology, yet back in the 80s, a person imitating an AI imitating a person was still far more engaging. And his lips were synced.
AI and The News

Will AI use cases assist news media? Perhaps, but it’s important to realize that the big challenges in news media today aren’t technological; they fall into the human and societal bucket, and prescribing technology to solve those problems hasn’t gone well in the past. I guess we’ll find out, because more is on the way in 2024.
